Large-scale point cloud datasets form the basis for training various deep learning networks and achieving high-quality network processing tasks. Because of the diversity and robustness constraints of the data, data augmentation (DA) methods are used to expand dataset diversity and scale. However, owing to the complex and distinct characteristics of LiDAR point cloud data from different platforms (such as missile-borne and vehicular LiDAR data), directly applying traditional 2D visual-domain DA methods to 3D data can leave the resulting networks unable to perform the corresponding tasks robustly. To address this issue, the present study explores DA for missile-borne LiDAR point clouds using a Monte Carlo (MC) simulation method that closely resembles practical application. First, a model of the multi-sensor imaging system is established, taking into account the joint errors arising from the platform itself and from relative motion during imaging. A distortion simulation method based on MC simulation for augmenting missile-borne LiDAR point cloud data is then proposed, underpinned by an analysis of the combined errors between different modal sensors, achieving high-quality augmentation of point cloud data. The effectiveness of the proposed method in addressing imaging-system errors and distortion simulation is validated on the imaging-scene dataset constructed in this paper. Comparative experiments against current state-of-the-art (SOTA) algorithms on point cloud detection and single-object tracking tasks demonstrate that the proposed method improves the performance of networks trained on unaugmented datasets by over 17.3% and 17.9%, respectively, surpassing the SOTA performance of current point cloud DA algorithms.
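The core of the augmentation idea above is sampling sensor and platform errors and replaying them onto a clean cloud. As a minimal illustrative sketch (not the paper's actual error model — the Gaussian magnitudes and the attitude/translation/range decomposition here are assumptions), a Monte Carlo perturbation of an (N, 3) point cloud might look like:

```python
import numpy as np

def mc_augment(points, n_samples=4, sigma_rot_deg=0.5,
               sigma_trans=0.05, sigma_range=0.02, rng=None):
    """Generate augmented copies of an (N, 3) point cloud by Monte Carlo
    sampling of platform attitude, position, and per-return range errors."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_samples):
        # Small attitude error (roll, pitch, yaw) drawn from a Gaussian.
        r = np.deg2rad(rng.normal(0.0, sigma_rot_deg, 3))
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(r[0]), -np.sin(r[0])],
                       [0, np.sin(r[0]), np.cos(r[0])]])
        Ry = np.array([[np.cos(r[1]), 0, np.sin(r[1])],
                       [0, 1, 0],
                       [-np.sin(r[1]), 0, np.cos(r[1])]])
        Rz = np.array([[np.cos(r[2]), -np.sin(r[2]), 0],
                       [np.sin(r[2]), np.cos(r[2]), 0],
                       [0, 0, 1]])
        R = Rz @ Ry @ Rx
        t = rng.normal(0.0, sigma_trans, 3)                 # platform position error
        noise = rng.normal(0.0, sigma_range, points.shape)  # per-return ranging noise
        out.append(points @ R.T + t + noise)
    return out
```

Each sampled error set yields one plausibly distorted copy of the scene, which is the sense in which MC simulation expands the training set.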
3D laser scanning technology is widely used in underground openings for high-precision, rapid, and nondestructive structural evaluations. Segmenting large 3D point cloud datasets, particularly in coal mine roadways with multi-scale targets, remains challenging. This paper proposes an enhanced segmentation method integrating an improved PointNet++ with a coverage-voted strategy. The coverage-voted strategy reduces data volume while preserving the topology of multi-scale targets. Segmentation is achieved using an enhanced PointNet++ algorithm with a normalization preprocessing head, resulting in 94% accuracy for common supporting components. Ablation experiments show that the preprocessing head and coverage strategies increase segmentation accuracy by 20% and 2%, respectively, and improve Intersection over Union (IoU) for bearing-plate segmentation by 58% and 20%. The accuracy of the current pretrained segmentation model may be affected by variations in surface support components, but it can be readily enhanced through re-optimization with additional labeled point cloud data. The proposed method, combined with a previously developed machine learning model that links rock bolt load to the deformation field of its bearing plate, provides a robust technique for simultaneously measuring the loads of multiple rock bolts in a single laser scan.
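The coverage-voted reduction itself is not detailed in the abstract, so as a generic stand-in for point cloud data reduction, the sketch below shows plain voxel-grid downsampling, which likewise trades raw point count for one representative point per occupied cell; the function and parameter names are assumptions, not the authors':

```python
from collections import defaultdict
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Reduce an (N, 3) point cloud by averaging all points that fall
    into the same cubic voxel, keeping one centroid per occupied cell."""
    cells = defaultdict(list)
    for p in np.asarray(points, dtype=float):
        # Integer voxel index of this point at the chosen resolution.
        cells[tuple(np.floor(p / voxel).astype(int))].append(p)
    return np.array([np.mean(ps, axis=0) for ps in cells.values()])
```

A topology-preserving scheme like the paper's would additionally vote to keep cells that cover small targets, rather than treating all cells uniformly.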
Monitoring biogenic amines, which are metabolic byproducts of shrimp spoilage, is crucial for assessing food quality. Currently, most detection methods for biogenic amines suffer from limitations such as time-consuming procedures, complex operations, and delayed results. Colorimetric analysis techniques have gained attention in recent years owing to their short analysis time, simple operation, and suitability for on-site testing. This study developed a series of colorimetric sensor platforms for biogenic amines by loading the natural active ingredient curcumin (CUR) and its boron-complex derivative BFCUR onto filter paper and electrospun nanofibre films (ENFs), respectively. By analyzing the differences in color response of these sensors upon contact with biogenic amines, the colorimetric sensors with superior detection performance were selected and further applied to the visual monitoring and indication of shrimp spoilage.
Flexible fiber sensors are in growing demand; however, traditional methods face challenges in fabricating them at low cost and large scale. In recent years, the thermal drawing process has rapidly advanced, offering a novel route to flexible fiber sensors. Through the preform-to-fiber manufacturing technique, a variety of fiber sensors with complex functionalities, spanning from the nanoscale to the kilometer scale, can be produced automatically in a short time. Examples include temperature, acoustic, mechanical, chemical, biological, optoelectronic, and multifunctional sensors, which operate on diverse sensing principles such as resistance, capacitance, piezoelectricity, triboelectricity, photoelectricity, and thermoelectricity. This review outlines the principles of the thermal drawing process and provides a detailed overview of the latest advancements in thermally drawn fiber sensors. Finally, future developments of thermally drawn fiber sensors are discussed.
The Savitzky-Golay (SG) filter, which employs polynomial least-squares approximations to smooth data and estimate derivatives, is widely used for processing noisy data. However, noise suppression by the SG filter is recognized to be limited at data boundaries and high frequencies, which can significantly reduce the signal-to-noise ratio (SNR). To solve this problem, a novel method synergistically integrating Principal Component Analysis (PCA) with SG filtering is proposed in this paper. This approach avoids the issue of excessive smoothing associated with larger window sizes. The proposed PCA-SG filtering algorithm was applied to a CO gas sensing system based on Cavity Ring-Down Spectroscopy (CRDS). Its performance is demonstrated through comparison with Moving Average Filtering (MAF), Wavelet Transformation (WT), Kalman Filtering (KF), and the plain SG filter. The results demonstrate that the proposed algorithm exhibits superior noise reduction compared to the other algorithms evaluated: the SNR of the ring-down signal was improved from 11.8612 dB to 29.0913 dB, and the standard deviation of the extracted ring-down time constant was reduced from 0.037 μs to 0.018 μs. These results confirm that the proposed PCA-SG filtering algorithm effectively improves the smoothness of the ring-down curve data, demonstrating its feasibility.
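A hedged sketch of the PCA-then-SG idea, under the assumption that the CRDS system records many repeated ring-down traces: project the trace stack onto its leading principal components to strip uncorrelated noise, then apply SciPy's `savgol_filter`. This is one reading of the abstract, not the authors' code, and the window/order defaults are assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

def pca_sg(traces, n_components=1, window=21, polyorder=3):
    """Denoise a stack of repeated traces (n_traces x n_samples):
    keep the leading principal components, then SG-smooth each row."""
    mean = traces.mean(axis=0)
    centered = traces - mean
    # SVD-based PCA: rows are repeated measurements, columns are time samples.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    low_rank = U[:, :n_components] * S[:n_components] @ Vt[:n_components] + mean
    return savgol_filter(low_rank, window, polyorder, axis=1)
```

The rank truncation removes noise that is uncorrelated across traces, so the SG window can stay small, avoiding the over-smoothing the abstract mentions.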
Tactile sensing of subcutaneous organ vibrations provides a promising route toward human-machine interfaces and wearable diagnostics, particularly for voice rehabilitation and silent-speech communication. Here, we present a bioinspired piezoelectric vibration sensor that mimics the graded stiffness and stress-based transduction mechanism of otolithic cilia in the human vestibular system. The device consists of a trapezoidal cantilever array with tip inertial masses, fabricated through a hybrid stereolithography 3D printing and laser micromachining process for rapid prototyping without cleanroom facilities. Finite-element modeling and experimental measurements demonstrate a fundamental resonance near 1.2 kHz, a 5% flat bandwidth of 350 Hz, and an in-band charge sensitivity of 3.17 pC/g. A wearable proof-of-concept test further verifies the sensor's ability to reproducibly distinguish phoneme-specific vibration patterns in both the time and frequency domains. This work establishes a foundation for bioinspired tactile sensing front-ends in wearable voice interfaces and other intelligent diagnostic systems integrated with machine-learning algorithms.
With the rapid expansion of the Internet of Things (IoT), user data has grown exponentially, raising increasing concerns about the security and integrity of data stored in the cloud. Traditional schemes relying on untrusted third-party auditors suffer from both security and efficiency issues, while existing decentralized blockchain-based auditing solutions still fall short in correctness and security. This paper proposes an improved blockchain-based cloud auditing scheme with the following core contributions: (1) identifying critical logical contradictions in the original scheme, thereby establishing the foundation for the correctness of cloud auditing; (2) designing an enhanced mechanism that integrates multiple hashing with dynamic aggregate signatures, binding encrypted blocks through bilinear pairings and BLS signatures, and setting parameters based on the Computational Diffie-Hellman (CDH) problem, significantly strengthening data-integrity protection and anti-forgery capabilities; and (3) introducing a random challenge mechanism and a dynamic parameter-adjustment strategy, effectively resisting attacks such as forgery, tampering, and deletion, markedly improving the detection probability of malicious Cloud Service Providers (CSPs), and reducing the proof-generation overhead for CSPs while maintaining the same computational cost for Data Owners. Theoretical analysis and performance evaluation demonstrate that the proposed scheme achieves significant improvements in both security and efficiency. Finally, the paper explores potential applications of the enhanced security scheme in fields such as healthcare, drone swarms, and government office attendance systems, providing an effective approach for building secure, efficient, and decentralized cloud auditing systems.
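The random challenge mechanism can be illustrated independently of the BLS/pairing machinery (which requires a pairing library). The sketch below substitutes plain SHA-256 digests for the cryptographic tags, purely to show the auditing flow: the auditor spot-checks `n_challenge` random blocks, so a provider that corrupts a fraction f of blocks escapes detection with probability roughly (1 − f)^c for c challenges. All names here are illustrative, not from the paper:

```python
import hashlib
import secrets

def tag_blocks(blocks):
    """Owner-side: compute and store one digest per data block before upload."""
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def audit(blocks, tags, n_challenge=3):
    """Auditor-side: challenge a random subset of block indices.
    Returns True only if every challenged block matches its stored tag."""
    idx = [secrets.randbelow(len(blocks)) for _ in range(n_challenge)]
    return all(hashlib.sha256(blocks[i]).hexdigest() == tags[i] for i in idx)
```

In the real scheme the per-block tags are BLS signatures aggregated via bilinear pairings, which lets the provider return one short proof instead of the raw blocks.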
Clouds play an important role in global atmospheric energy and water vapor budgets, and low-cloud simulations suffer from large biases in many atmospheric general circulation models. In this study, cloud microphysical processes such as raindrop evaporation and cloud-water accretion in a double-moment six-class cloud microphysics scheme were revised to enhance the simulation of low clouds in the Global-Regional Integrated Forecast System (GRIST) model. Validation of the revised scheme using a single-column version of GRIST demonstrated a reasonable reduction in liquid-water biases. The revised parameterization simulated medium- and low-level cloud fractions in better agreement with observations than the original scheme. Long-term global simulations indicate mitigation of the originally overestimated low-level cloud fraction and cloud-water mixing ratio in mid- to high-latitude regions, primarily owing to enhanced accretion and weakened raindrop evaporation. The reduced low clouds with the revised scheme showed better consistency with satellite observations, particularly at mid- and high latitudes. Further improvements can be observed in the simulated cloud shortwave radiative forcing and the vertical distribution of total cloud cover. Annual precipitation in mid-latitude regions has also improved, particularly over the oceans, with significantly increased large-scale and decreased convective precipitation.
Dear Editor, This letter presents a new approach to developing interpretable and reliable soft sensors for Industry 5.0 applications. Although sophisticated machine learning methods have made remarkable strides in soft-sensor predictive accuracy, ensuring interpretability and reliable performance across varying industrial operating conditions remains a challenge [1]–[4]. This is precisely what Industry 5.0, proposed by the European Commission in 2021, advocates [5], [6]. It integrates various cutting-edge technologies, such as human-machine interaction, digital twins, cybersecurity, and artificial intelligence, to facilitate the development of better soft sensors.
Noise interference critically impairs the stability and data accuracy of sensing systems. However, current suppression strategies fail to concurrently mitigate intrinsic system noise and extrinsic environmental noise. This study introduces a composite denoising approach, based on the ameliorated ellipse fitting algorithm (AEFA) and adaptive successive variational mode decomposition (ASVMD), to address this challenge. The algorithm employs AEFA to eliminate system noise tightly coupled with the direct-current and alternating-current components of the interference signal, thereby obtaining a phase signal containing only environmental noise. The ASVMD technique then adaptively extracts the environmental-noise components predominantly present in the phase signal. To achieve optimal decomposition automatically, a permutation-entropy criterion is employed to refine the decomposition parameters, and the correlation coefficient is used to differentiate effective components from noise components in the decomposition results. Experimental results indicate that the combined AEFA and ASVMD algorithm effectively suppresses both system and environmental noise. When applied to 50 Hz vibration-signal processing, the proposed approach achieves a noise reduction of 17.81 dB and a phase resolution of 35.14 μrad/√Hz. Given this noise-suppression performance, the proposed approach holds great application potential for high-performance interferometric sensing systems.
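The permutation-entropy criterion used to tune the decomposition parameters can be sketched directly. The normalization and the (order, delay) defaults below are common conventions from the ordinal-analysis literature, not values taken from this paper:

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal: near 0 for a
    regular ordinal structure, near 1 for fully random data."""
    n = len(x) - (order - 1) * delay
    # Count the ordinal pattern (rank order) of each embedded window.
    patterns = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)
    )
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))
```

Low entropy flags modes that still carry structured signal; high entropy flags noise-dominated modes, which is the kind of distinction the parameter-refinement step needs.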
Developing effective, versatile, and high-precision sensing interfaces remains a crucial challenge in human-machine-environment interaction applications. Despite progress in interaction-oriented sensing skins, limitations remain in unit-level reconfiguration, multiaxial force and motion sensing, and robust operation across dynamically changing or irregular surfaces. Herein, we develop a reconfigurable omnidirectional triboelectric whisker sensor array (RO-TWSA) comprising multiple sensing units that integrate a triboelectric whisker structure (TWS) with an untethered hydro-sealing vacuum sucker (UHSVS), enabling reversibly portable deployment and omnidirectional perception across diverse surfaces. Using a simple dual-triangular electrode layout paired with an MXene/silicone nanocomposite dielectric layer, the sensor unit achieves precise omnidirectional force and motion sensing with a detection threshold as low as 0.024 N and an angular resolution of 5°, while the UHSVS provides reliable and reversible multi-surface anchoring for the sensor units through a newly designed hydrogel combining high mechanical robustness and superior water absorption. Extensive experiments demonstrate the effectiveness of RO-TWSA across various interactive scenarios, including teleoperation, tactile diagnostics, and robotic autonomous exploration. Overall, RO-TWSA presents a versatile and high-resolution tactile interface, offering new avenues for intelligent perception and interaction in complex real-world environments.
This survey presents a comprehensive examination of sensor fusion research spanning four decades, tracing its methodological evolution, application domains, and alignment with classical hierarchical models. Within this long-term trajectory, foundational approaches such as probabilistic inference, early neural networks, rule-based methods, and feature-level fusion established the principles of uncertainty handling and multi-sensor integration in the 1990s. The fusion methods of the 2000s consolidated these ideas through advanced Kalman and particle filtering, Bayesian–Dempster–Shafer hybrids, distributed consensus algorithms, and machine learning ensembles for more robust and domain-specific implementations. From 2011 to 2020, the widespread adoption of deep learning transformed the field, driving major breakthroughs in the autonomous vehicles domain. A key contribution of this work is the assessment of contemporary methods against the JDL model, revealing gaps at the higher levels, especially in situation and impact assessment, where contemporary methods offer only limited implementations of higher-level fusion. The survey also reviews benchmark multi-sensor datasets, noting their role in advancing the field while identifying major shortcomings such as a lack of domain diversity and hierarchical coverage. By synthesizing developments across decades and paradigms, this survey provides both a historical narrative and a forward-looking perspective. It highlights unresolved challenges in transparency, scalability, robustness, and trustworthiness, while identifying emerging paradigms such as neuromorphic fusion and explainable AI as promising directions, paving the way for transparent and adaptive next-generation autonomous systems.
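At its lowest level, the multi-sensor integration the survey traces reduces to combining independent estimates of the same quantity by inverse-variance weighting, which is the static form of the Kalman update; a minimal sketch:

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent measurements of the same quantity by
    inverse-variance weighting (the static Kalman/BLUE estimate)."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    x = (w1 * z1 + w2 * z2) / (w1 + w2)  # precision-weighted mean
    var = 1.0 / (w1 + w2)                # fused variance < either input
    return x, var
```

The fused variance is always smaller than either input variance, which is the formal sense in which adding a sensor can only help an unbiased estimate.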
In recent years, fog computing has become an important environment for handling the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks in cooperation with cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes. With large amounts of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy across fog nodes while users' tasks are executed through least-cost paths. Task scheduling uses a modified artificial ecosystem optimization (AEO) combined with swarm operators from the Salp Swarm Algorithm (SSA), so that the two metaheuristics competitively optimize their capabilities during the exploitation phase of the optimal search. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), seeks the most suitable solution to the multi-objective task scheduling problem that combines cost and energy; a knapsack formulation is also added to improve both cost and energy in the iFogSim implementation. The proposed strategy was compared with other strategies in terms of time, cost, energy, and productivity. Experimental results showed that it improved energy consumption, cost, and time over the other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
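Executing tasks "through least-cost paths" presupposes a shortest-path computation over the fog topology; a standard Dijkstra sketch (the node names and the adjacency-map format are assumptions for illustration, not the paper's network model):

```python
import heapq

def least_cost_path(graph, src, dst):
    """Dijkstra over a {node: [(neighbor, cost), ...]} adjacency map.
    Returns (total_cost, path) for routing a task from src to dst."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
    return float("inf"), []  # dst unreachable from src
```

The metaheuristic then only has to decide the task-to-node assignment; routing each request along the returned path fixes the transmission cost term of the objective.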
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. Traditional approaches frequently rely on single-objective methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm for efficient multi-objective task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm optimizes task allocation by reducing makespan and financial cost while improving resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud infrastructures.
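Any scheduler of this kind needs an objective function scoring a candidate task-to-VM assignment; a common makespan-plus-cost evaluation is sketched below (the per-second billing model is a simplification assumed for illustration, not taken from the paper):

```python
def evaluate(schedule, task_len, vm_mips, vm_price):
    """Score a task-to-VM assignment.

    schedule[i] = VM index for task i; task_len in instructions (MI);
    vm_mips = VM speeds; vm_price = cost per unit of busy time.
    Makespan is the busiest VM's finish time; cost bills busy time.
    """
    finish = [0.0] * len(vm_mips)
    for task, vm in enumerate(schedule):
        finish[vm] += task_len[task] / vm_mips[vm]
    makespan = max(finish)
    cost = sum(f * p for f, p in zip(finish, vm_price))
    return makespan, cost
```

A multi-objective hybrid such as the one described would keep candidate schedules that are non-dominated under (makespan, cost) rather than collapsing the two into one scalar.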
Possessing excellent mechanical properties, a high-coverage slide-ring conductive gel is constructed by in situ polymerization of α-cyclodextrin (α-CD) polyrotaxane (PR) and 1-vinyl-3-ethylimidazolium bromide ([VEIM]Br) ionic liquid (IL), using 1-ethyl-3-methylimidazolium bromide ([EMIM]Br) IL as the solvent. Benefiting from the compatibility of ILs and alkene-PR, the cross-linked slide-ring gel network not only maintains excellent conductivity (1.52×10^(−2) S/m) but also shows effectively improved mechanical properties (513% fracture strain, 0.713 MPa fracture stress, 211 kPa elastic modulus, and 1366 kJ/m^(3) toughness) and adhesive properties (472.3±25.9 kPa). The supramolecular gel can serve as a strain sensor that efficiently monitors deformation signals in real time for at least 200 cycles. In particular, the slide-ring gel can be self-powered through the triboelectric effect and electrostatic induction between the skin and the polydimethylsiloxane (PDMS) layer that encapsulates the gel, achieving reversible and durable motion sensing and providing a convenient pathway for constructing supramolecular self-powered flexible electronic materials.
MXene is a promising conductive nanofiller for hydrogels owing to its excellent electrical conductivity and water dispersibility. However, MXene is prone to oxidation in the presence of air and water, resulting in a significant loss of conductivity. Polydopamine (PDA) has been coated on MXene to enhance its antioxidation stability via the physical barrier and chemical reducing ability of PDA, but this unavoidably causes severe aggregation and a significant decrease in conductivity due to the crosslinking and insulating nature of PDA. Herein, we propose a facile strategy to construct a highly conductive, stable, and self-healing MXene-based polyvinyl alcohol (PVA) hydrogel by controlled assembly of PDA and cellulose nanocrystals (CNCs). PDA is first formed by oxidative self-polymerization in PVA solution without CNC or MXene present, which effectively reduces the content of aggregation-inducing groups and avoids forming an insulating PDA layer on the MXene surface. The addition of CNCs enables easy dispersion of a high content of MXene via hydrogen bonding and electrostatic interactions. The PVA-PDA hydrogel with MXene and CNC as conductive and reinforcing nanofillers (PP-CM) is cross-linked by dynamic borax covalent bonds and shows a conductivity of 7.14 S m^(-1). The introduction of PDA effectively protects the MXene, with only a 14% decrease in conductivity after 7 days, significantly improving antioxidant stability. The hydrogel also possesses rapid self-healing capability, achieving 90.5% self-healing efficiency within 10 min. This versatile approach opens new avenues for the preparation and application of MXene-based conductive hydrogels.
Evaluating rock mass quality using three-dimensional (3D) point clouds is crucial for discontinuity extraction and is widely applied across industrial sectors, yet its use in geological surveys remains limited. Notable limitations of current research include the scarcity of validation of discontinuity-extraction methods against simple geometric shapes, and the lack of studies targeting both planar and linear discontinuities. To address these gaps, this study proposes a workflow for identifying discontinuity planes and traces in rock outcrops from photogrammetric 3D models, employing the Compass and Facets plugins of the open-source CloudCompare software. Prior to field application, the efficacy of the extraction methods was evaluated on experimental datasets of a cube and an isosceles triangular prism generated under laboratory-controlled conditions. This validation demonstrated exceptional accuracy, with the dip and dip direction (DDD) of extracted structures consistently within ±2° of the actual values. Following this rigorous laboratory validation, the methodology was applied to a more complex natural rock outcrop (Miocene–Pliocene deposits in Japan), demonstrating its applicability to structure identification in realistic geological settings. The dip and dip-direction trends of the extracted bedding planes and faults were consistent with field measurements, with a time reduction of approximately 40% compared to traditional methods. In conclusion, through strictly controlled initial verification and subsequent successful application to a complex natural setting, this study confirmed that the proposed workflow can effectively and efficiently extract discontinuous geological structures from point clouds.
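The dip and dip-direction figures above come from fitted facet planes; a minimal plane-fitting sketch via SVD is shown below (assuming z up and y north, which is a stated convention here, not necessarily the one CloudCompare's Facets plugin uses):

```python
import numpy as np

def dip_and_direction(points):
    """Fit a plane to an (N, 3) cloud (z up, y north) and return
    (dip, dip_direction) in degrees, as a facet-extraction step would."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector of the smallest
    # singular value of the centered coordinate matrix.
    normal = np.linalg.svd(centered)[2][-1]
    if normal[2] < 0:                      # orient the normal upward
        normal = -normal
    dip = np.degrees(np.arccos(np.clip(normal[2], -1.0, 1.0)))
    # Dip direction = azimuth of the normal's horizontal projection.
    dip_dir = np.degrees(np.arctan2(normal[0], normal[1])) % 360.0
    return dip, dip_dir
```

Comparing these values against compass measurements on the same surface is exactly the ±2° laboratory check described above.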
Funding (missile-borne LiDAR point cloud DA study): Postgraduate Innovation Top-notch Talent Training Project of Hunan Province (Grant No. CX20220045); Scientific Research Project of National University of Defense Technology (Grant No. 22-ZZCX-07); New Era Education Quality Project of Anhui Province (Grant No. 2023cxcysj194); National Natural Science Foundation of China (Grant Nos. 62201597, 62205372, 1210456); Foundation of Hefei Comprehensive National Science Center (Grant No. KY23C502).
Funding (coal mine roadway point cloud segmentation study): National Natural Science Foundation of China (Grant Nos. 52304139, 52325403); CCTEG Coal Mining Research Institute (Grant No. KCYJY-2024-MS-10).
Funding (biogenic amine colorimetric sensing study): Guangdong-Hong Kong-Macao Joint Laboratory on Micro-Nano Manufacturing Technology, China (No. 2021LSYS004); Guangdong Provincial Key Laboratory of Sustainable Biomimetic Materials and Green Energy, China (No. 2024B1212010003).
Funding: Supported by the National Key Research and Development Program of China (2023YFB3809800), the National Natural Science Foundation of China (52172249, 52525601), the Chinese Academy of Sciences Talents Program (E2290701), the Jiangsu Province Talents Program (JSSCRC2023545), and the Special Fund Project of Carbon Peaking and Carbon Neutrality Science and Technology Innovation of Jiangsu Province (BE2022011).
Abstract: Flexible fiber sensors are in rapidly growing demand. However, traditional methods face challenges in fabricating low-cost, large-scale fiber sensors. In recent years, the thermal drawing process has rapidly advanced, offering a novel approach to flexible fiber sensors. Through the preform-to-fiber manufacturing technique, a variety of fiber sensors with complex functionalities spanning from the nanoscale to the kilometer scale can be fabricated automatically in a short time. Examples include temperature, acoustic, mechanical, chemical, biological, optoelectronic, and multifunctional sensors, which operate on diverse sensing principles such as resistance, capacitance, piezoelectricity, triboelectricity, photoelectricity, and thermoelectricity. This review outlines the principles of the thermal drawing process and provides a detailed overview of the latest advancements in various thermally drawn fiber sensors. Finally, future developments of thermally drawn fiber sensors are discussed.
Abstract: The Savitzky-Golay (SG) filter, which employs polynomial least-squares approximations to smooth data and estimate derivatives, is widely used for processing noisy data. However, noise suppression by the SG filter is recognized to be limited at data boundaries and high frequencies, which can significantly reduce the signal-to-noise ratio (SNR). To solve this problem, a novel method synergistically integrating Principal Component Analysis (PCA) with SG filtering is proposed in this paper. This approach avoids the issue of excessive smoothing associated with larger window sizes. The proposed PCA-SG filtering algorithm was applied to a CO gas sensing system based on Cavity Ring-Down Spectroscopy (CRDS). The performance of the PCA-SG filtering algorithm is demonstrated through comparison with Moving Average Filtering (MAF), Wavelet Transformation (WT), Kalman Filtering (KF), and the SG filter. The results demonstrate that the proposed algorithm exhibits superior noise reduction capabilities compared to the other algorithms evaluated. The SNR of the ring-down signal was improved from 11.8612 dB to 29.0913 dB, and the standard deviation of the extracted ring-down time constant was reduced from 0.037 μs to 0.018 μs. These results confirm that the proposed PCA-SG filtering algorithm effectively improves the smoothness of the ring-down curve data, demonstrating its feasibility.
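The abstract gives only the idea of combining PCA with SG filtering. One plausible realization, not necessarily the authors' exact algorithm, is to project repeated ring-down traces onto their leading principal components and then SG-smooth the low-rank reconstruction; all window and component choices below are illustrative:

```python
import numpy as np
from scipy.signal import savgol_filter

def pca_sg_denoise(signals, n_components=2, window=21, polyorder=3):
    """Denoise repeated decay traces: PCA low-rank reconstruction
    followed by Savitzky-Golay smoothing along each trace.

    Sketch of the PCA+SG idea; parameters are illustrative.
    """
    mean = signals.mean(axis=0)
    centered = signals - mean
    # Economy SVD: rows are repeated measurements, columns are samples
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    low_rank = u[:, :n_components] * s[:n_components] @ vt[:n_components]
    denoised = low_rank + mean
    return savgol_filter(denoised, window, polyorder, axis=1)

# Synthetic ring-down-like exponentials with additive noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 500)
clean = np.exp(-t)[None, :] * np.ones((20, 1))
noisy = clean + 0.05 * rng.standard_normal((20, 500))
smooth = pca_sg_denoise(noisy)
residual = np.abs(smooth - clean).mean()
```

On this synthetic data the residual after PCA-SG is well below the raw noise level, mirroring the SNR gains reported in the abstract.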
Abstract: Tactile sensing of subcutaneous organ vibrations provides a promising route toward human-machine interfaces and wearable diagnostics, particularly for voice rehabilitation and silent-speech communication. Here, we present a bioinspired piezoelectric vibration sensor that mimics the graded stiffness and stress-based transduction mechanism of otolithic cilia in the human vestibular system. The device consists of a trapezoidal cantilever array with tip inertial masses, fabricated through a hybrid stereolithography 3D printing and laser micromachining process for rapid prototyping without cleanroom facilities. Finite-element modeling and experimental measurements demonstrate a fundamental resonance near 1.2 kHz, a 5% flat-bandwidth of 350 Hz, and an in-band charge sensitivity of 3.17 pC/g. A wearable proof-of-concept test further verifies the sensor's ability to reproducibly distinguish phoneme-specific vibration patterns in both time and frequency domains. This work establishes a foundation for bioinspired tactile sensing front-ends in wearable voice interfaces and other intelligent diagnostic systems integrated with machine-learning algorithms.
Funding: Funded by the National Natural Science Foundation of China (New Design and Analysis of Fully Homomorphic Signatures, Grant No. 62172436).
Abstract: With the rapid expansion of the Internet of Things (IoT), user data has experienced exponential growth, leading to increasing concerns about the security and integrity of data stored in the cloud. Traditional schemes relying on untrusted third-party auditors suffer from both security and efficiency issues, while existing decentralized blockchain-based auditing solutions still face shortcomings in correctness and security. This paper proposes an improved blockchain-based cloud auditing scheme with the following core contributions: (1) identifying critical logical contradictions in the original scheme, thereby establishing the foundation for the correctness of cloud auditing; (2) designing an enhanced mechanism that integrates multiple hashing with dynamic aggregate signatures, binding encrypted blocks through bilinear pairings and BLS signatures, and improving the scheme by setting parameters based on the Computational Diffie-Hellman (CDH) problem, significantly strengthening data integrity protection and anti-forgery capabilities; (3) introducing a random challenge mechanism and dynamic parameter adjustment strategy, effectively resisting attacks such as forgery, tampering, and deletion, significantly improving the detection probability of malicious Cloud Service Providers (CSPs), and significantly reducing the proof generation overhead for CSPs while maintaining the same computational cost for Data Owners. Theoretical analysis and performance evaluation experiments demonstrate that the proposed scheme achieves significant improvements in both security and efficiency. Finally, the paper explores potential applications of the enhanced security scheme in fields such as healthcare, drone swarms, and government office attendance systems, providing an effective approach for building secure, efficient, and decentralized cloud auditing systems.
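The random-challenge audit flow described above can be illustrated with a deliberately simplified toy: per-block HMAC tags stand in for the scheme's BLS signature tags (HMACs are not publicly verifiable or aggregatable, unlike BLS), so the sketch captures only the challenge-response structure, not the cryptography of the paper:

```python
import hashlib
import hmac
import secrets

def store(data, block_size=64):
    """Owner side: split data into blocks and keep a keyed tag per block.
    Toy stand-in for BLS tags; illustrates structure only."""
    key = secrets.token_bytes(32)
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    tags = [hmac.new(key, str(i).encode() + b"|" + b, hashlib.sha256).digest()
            for i, b in enumerate(blocks)]
    return key, blocks, tags

def prove(blocks, tags, challenge):
    """CSP side: return the challenged blocks with their stored tags."""
    return [(i, blocks[i], tags[i]) for i in challenge]

def verify(key, proof):
    """Owner side: recompute each tag and compare in constant time."""
    return all(hmac.compare_digest(
                   hmac.new(key, str(i).encode() + b"|" + b,
                            hashlib.sha256).digest(), t)
               for i, b, t in proof)

key, blocks, tags = store(b"cloud data" * 50)
challenge = [0, 3, 5]          # the owner picks a random subset of indices
ok = verify(key, prove(blocks, tags, challenge))
```

Because the challenge indices are unpredictable, a CSP that silently dropped or altered blocks fails verification with probability growing in the challenge size, which is the detection-probability argument the abstract refers to.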
Funding: National Natural Science Foundation of China (42375153, 42105153, 42205157) and Development of Science and Technology at the Chinese Academy of Meteorological Sciences (2023KJ038).
Abstract: Clouds play an important role in global atmospheric energy and water vapor budgets, and low cloud simulations suffer from large biases in many atmospheric general circulation models. In this study, cloud microphysical processes such as raindrop evaporation and cloud water accretion in a double-moment six-class cloud microphysics scheme were revised to enhance the simulation of low clouds using the Global-Regional Integrated Forecast System (GRIST) model. The validation of the revised scheme using a single-column version of GRIST demonstrated a reasonable reduction in liquid water biases. The revised parameterization simulated medium- and low-level cloud fractions that were in better agreement with the observations than the original scheme. Long-term global simulations indicate mitigation of the originally overestimated low-level cloud fraction and cloud-water mixing ratio in mid- to high-latitude regions, primarily owing to enhanced accretion processes and weakened raindrop evaporation. The reduced low clouds with the revised scheme showed better consistency with satellite observations, particularly at mid- and high-latitudes. Further improvements can be observed in the simulated cloud shortwave radiative forcing and vertical distribution of total cloud cover. Annual precipitation in mid-latitude regions has also improved, particularly over the oceans, with significantly increased large-scale and decreased convective precipitation.
Abstract: Dear Editor, this letter presents a new approach to developing interpretable and reliable soft sensors for Industry 5.0 applications. Although sophisticated machine learning methods have made remarkable strides in soft-sensor predictive accuracy, ensuring interpretability and reliable performance across varying industrial operating conditions remains a challenge [1]–[4]. This is precisely what Industry 5.0, proposed by the European Commission in 2021, advocates [5], [6]. It integrates various cutting-edge technologies, such as human-machine interaction, digital twins, cybersecurity, and artificial intelligence, to facilitate the development of better soft sensors.
Abstract: Noise interference critically impairs the stability and data accuracy of sensing systems. However, current suppression strategies fail to concurrently mitigate intrinsic system noise and extrinsic environmental noise. This study introduces a composite denoising approach to address this challenge. The method is based on the ameliorated ellipse fitting algorithm (AEFA) and adaptive successive variational mode decomposition (ASVMD). The algorithm employs AEFA to eliminate system noise tightly coupled with the direct-current and alternating-current components of the interference signal, thereby obtaining a phase signal containing only environmental noise. The ASVMD technique then adaptively extracts the environmental noise components predominantly present in the phase signal. To achieve optimal decomposition results automatically, the permutation entropy criterion is employed to refine the decomposition parameters, and the correlation coefficient is utilized to differentiate effective components from noise components in the decomposition results. Experimental results indicate that the combined AEFA and ASVMD algorithm effectively suppresses both system and environmental noise. When applied to 50 Hz vibration signal processing, the proposed approach achieves a noise reduction of 17.81 dB and a phase resolution of 35.14 μrad/√Hz. Given its excellent noise suppression performance, the proposed approach holds great application potential in high-performance interferometric sensing systems.
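The role of ellipse fitting in removing DC and AC artifacts from interferometric quadrature signals can be sketched as follows. This is a basic axis-aligned least-squares ellipse fit (no cross-coupling term), not the paper's ameliorated AEFA, and every parameter below is synthetic:

```python
import numpy as np

def ellipse_correct(i_sig, q_sig):
    """Fit an axis-aligned ellipse to a quadrature pair to remove DC
    offsets and gain imbalance, then recover an unwrapped phase.

    Simplified sketch of ellipse-fitting phase correction, not AEFA.
    """
    x, y = np.asarray(i_sig), np.asarray(q_sig)
    # Linear least squares on the conic A*x^2 + C*y^2 + D*x + E*y = 1
    M = np.column_stack([x * x, y * y, x, y])
    A, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    xc, yc = -D / (2 * A), -E / (2 * C)          # ellipse center = DC offsets
    rhs = 1 + A * xc ** 2 + C * yc ** 2
    a, b = np.sqrt(rhs / A), np.sqrt(rhs / C)    # semi-axes = channel gains
    return np.unwrap(np.arctan2((y - yc) / b, (x - xc) / a))

# Quadrature pair with offsets, gain mismatch, and a little noise
rng = np.random.default_rng(5)
phase = np.linspace(0, 6 * np.pi, 2000)
i_sig = 1.3 * np.cos(phase) + 0.4 + 0.003 * rng.standard_normal(2000)
q_sig = 0.8 * np.sin(phase) - 0.2 + 0.003 * rng.standard_normal(2000)
recovered = ellipse_correct(i_sig, q_sig)
err = np.abs(recovered - phase).max()
```

After the fit, the normalized arctangent recovers the true phase ramp despite the offsets and gain mismatch, which is the "system noise" removal step the abstract attributes to AEFA.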
Funding: Supported by the National Natural Science Foundation of China (General Program) under Grant 52571385, the National Key R&D Program of China (Grant No. 2024YFC2815000 and No. 2024YFB3816000), the Open Fund of the State Key Laboratory of Deep-sea Manned Vehicles (Grant No. 2025SKLDMV07), the Shenzhen Science and Technology Program (WDZC20231128114452001, JCYJ20240813112107010, and JCYJ20240813111910014), the Tsinghua SIGS Scientific Research Startup Fund (QD2022021C), the Dreams Foundation of Jianghuai Advance Technology Center (2023-ZM 01 Z006), the Ocean Decade International Cooperation Center (ODCC) (GHZZ3702840002024020000026), the Shenzhen Key Laboratory of Advanced Technology for Marine Ecology (ZDSYS20230626091459009), the Shenzhen Science and Technology Program (No. KJZD20240903100905008), the National Natural Science Foundation of China (No. 22305141), the Pearl River Talent Program (No. 2023QN10C114), the General Program of Guangdong Province (No. 2025A1515011700), the Guangdong Innovative and Entrepreneurial Research Team Program (2023ZT10C040), the Scientific Research Foundation from Shenzhen Finance Bureau (No. GJHZ20240218113600002), and Tsinghua University (JC2023001).
Abstract: Developing effective, versatile, and high-precision sensing interfaces remains a crucial challenge in human-machine-environment interaction applications. Despite progress in interaction-oriented sensing skins, limitations remain in unit-level reconfiguration, multiaxial force and motion sensing, and robust operation across dynamically changing or irregular surfaces. Herein, we develop a reconfigurable omnidirectional triboelectric whisker sensor array (RO-TWSA) comprising multiple sensing units that integrate a triboelectric whisker structure (TWS) with an untethered hydro-sealing vacuum sucker (UHSVS), enabling reversibly portable deployment and omnidirectional perception across diverse surfaces. Using a simple dual-triangular electrode layout paired with an MXene/silicone nanocomposite dielectric layer, the sensor unit achieves precise omnidirectional force and motion sensing with a detection threshold as low as 0.024 N and an angular resolution of 5°, while the UHSVS provides reliable and reversible multi-surface anchoring for the sensor units through a newly designed hydrogel combining high mechanical robustness and superior water absorption. Extensive experiments demonstrate the effectiveness of RO-TWSA across various interactive scenarios, including teleoperation, tactile diagnostics, and robotic autonomous exploration. Overall, RO-TWSA presents a versatile and high-resolution tactile interface, offering new avenues for intelligent perception and interaction in complex real-world environments.
Abstract: This survey presents a comprehensive examination of sensor fusion research spanning four decades, tracing the methodological evolution, application domains, and alignment with classical hierarchical models. Building on this long-term trajectory, foundational approaches such as probabilistic inference, early neural networks, rule-based methods, and feature-level fusion established the principles of uncertainty handling and multi-sensor integration in the 1990s. The fusion methods of the 2000s marked the consolidation of these ideas through advanced Kalman and particle filtering, Bayesian–Dempster–Shafer hybrids, distributed consensus algorithms, and machine learning ensembles for more robust and domain-specific implementations. From 2011 to 2020, the widespread adoption of deep learning transformed the field, driving major breakthroughs in the autonomous vehicles domain. A key contribution of this work is the assessment of contemporary methods against the JDL model, revealing gaps at the higher levels, especially in situation and impact assessment: contemporary methods offer only limited implementation of higher-level fusion. The survey also reviews benchmark multi-sensor datasets, noting their role in advancing the field while identifying major shortcomings such as the lack of domain diversity and hierarchical coverage. By synthesizing developments across decades and paradigms, this survey provides both a historical narrative and a forward-looking perspective. It highlights unresolved challenges in transparency, scalability, robustness, and trustworthiness, while identifying emerging paradigms such as neuromorphic fusion and explainable AI as promising directions. This paves the way for advancing sensor fusion toward transparent and adaptive next-generation autonomous systems.
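The Kalman filtering the survey cites as a 2000s workhorse is the canonical example of low-level (JDL level-1) fusion. A minimal 1-D sketch fusing two sensors of different noise levels, with all variances assumed known and illustrative, looks like this:

```python
import random

def kalman_fuse(z1, z2, r1, r2, q=1e-4):
    """Fuse two noisy sensor streams measuring the same scalar state
    with a 1-D Kalman filter (static state model, two updates per step).

    Minimal illustration of level-1 fusion; r1, r2, q assumed known.
    """
    x, p = z1[0], 1.0               # crude initialisation from first sample
    estimates = []
    for a, b in zip(z1, z2):
        p += q                      # predict: only process noise grows p
        for z, r in ((a, r1), (b, r2)):
            k = p / (p + r)         # Kalman gain
            x += k * (z - x)        # measurement update
            p *= (1.0 - k)          # posterior variance shrinks
        estimates.append(x)
    return estimates

random.seed(2)
truth = 5.0
z1 = [truth + random.gauss(0, 0.5) for _ in range(200)]   # better sensor
z2 = [truth + random.gauss(0, 1.0) for _ in range(200)]   # worse sensor
est = kalman_fuse(z1, z2, r1=0.25, r2=1.0)
```

The gain automatically weights the more reliable sensor more heavily, which is the uncertainty-handling principle the survey traces back to the 1990s foundations.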
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Abstract: In recent years, fog computing has become an important environment for dealing with the Internet of Things (IoT). Fog computing was developed to handle large-scale big data by scheduling tasks via cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes in cloud computing. With the large amount of data and user requests, achieving the optimal solution to the task scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy consumption across nodes in fog computing when users execute tasks through the least-cost paths. Task scheduling is developed using a modified artificial ecosystem optimization (AEO) combined with operators of the Salp Swarm Algorithm (SSA), in order to competitively optimize their capabilities during the exploitation phase of the optimal search process. The proposed strategy, the Enhancement Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), attempts to find the most suitable solution to the multi-objective task scheduling optimization problem that combines cost and energy. The knapsack problem is also incorporated to improve both cost and energy in the iFogSim implementation. A comparison was made between the proposed strategy and other strategies in terms of time, cost, energy, and productivity. Experimental results showed that the proposed strategy improved energy consumption, cost, and time over other algorithms. Simulation results demonstrate that the proposed algorithm reduces the average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
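The SA half of such hybrids can be sketched in isolation: minimize makespan by reassigning one random task per move and accepting worse moves with a cooling Boltzmann probability. This omits the DMO exploration phase and the cost objective entirely; task lengths, VM speeds, and cooling parameters are all synthetic:

```python
import math
import random

def makespan(assign, task_len, vm_speed):
    """Completion time of the slowest VM under a task-to-VM assignment."""
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def sa_schedule(task_len, vm_speed, iters=5000, t0=10.0, cool=0.999, seed=3):
    """Simulated-annealing task scheduler minimising makespan only.
    Sketches the SA exploitation phase of an SA-based hybrid."""
    rng = random.Random(seed)
    cur = [rng.randrange(len(vm_speed)) for _ in task_len]
    cur_cost = makespan(cur, task_len, vm_speed)
    best, best_cost, temp = cur[:], cur_cost, t0
    for _ in range(iters):
        cand = cur[:]                    # move: reassign one random task
        cand[rng.randrange(len(task_len))] = rng.randrange(len(vm_speed))
        cost = makespan(cand, task_len, vm_speed)
        # Accept improvements always, worse moves with Boltzmann probability
        if cost < cur_cost or rng.random() < math.exp(-(cost - cur_cost) / temp):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand[:], cost
        temp *= cool                     # geometric cooling schedule
    return best, best_cost

task_len = [5, 8, 3, 9, 4, 7, 2, 6] * 5      # 40 synthetic task lengths
vm_speed = [1.0, 2.0, 4.0]                   # 3 heterogeneous VMs
best, cost = sa_schedule(task_len, vm_speed)
```

The resulting makespan approaches the load-balancing lower bound sum(task_len)/sum(vm_speed); in a full hybrid, a population-based explorer (DMO, AEO, etc.) would supply diverse starting points for this exploitation loop.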
Funding: National Natural Science Foundation of China (NSFC, No. 22131008), Natural Science Foundation of Tianjin (No. 22JCYBJC00500), and the Haihe Laboratory of Sustainable Chemical Transformations for financial support.
Abstract: Possessing excellent mechanical properties, a high-coverage slide-ring conductive gel is constructed by in situ polymerization of α-cyclodextrin (α-CD) polyrotaxane (PR) and 1-vinyl-3-ethylimidazolium bromide ([VEIM]Br) ionic liquid (IL), using 1-ethyl-3-methylimidazolium bromide ([EMIM]Br) IL as solvent. Benefiting from the compatibility of ILs and alkene-PR, the cross-linked network slide-ring gel not only maintains excellent conductivity (1.52×10^(−2) S/m), but also has effectively improved mechanical properties (513% fracture strain, 0.713 MPa fracture stress, 211 kPa elastic modulus, and 1366 kJ/m^(3) toughness) and adhesive properties (472.3±25.9 kPa). The supramolecular gel can be used as a strain sensor to efficiently monitor deformation signals in real time at least 200 times. In particular, the slide-ring gel can be self-powered through the triboelectric effect and electrostatic induction between the skin layer and the polydimethylsiloxane (PDMS) layer that encapsulates the gel, achieving reversible and durable motion sensing, which provides a convenient pathway for constructing supramolecular self-powered flexible electronic materials.
Funding: Support from the Youth Promotion of Guangdong Natural Science Foundation (2024A1515030005), the Guangdong Province Ordinary Universities Characteristic Innovation Project (2024KTSCX096), the Guangdong Province University Key Field Special Program (2023ZDZX3002), the Key Laboratory of Advanced Energy Materials Chemistry (Ministry of Education), Nankai University, the Guangdong Provincial Key Laboratory of Optical Information Materials and Technology (No. 2023B1212060065), Programs of the Science and Technology Department of Yunnan Province (202301AT070217), the MOE International Laboratory for Optical Information Technologies, the 111 Project, the Science and Technology Bureau of Huzhou (2022GG24), and ScienceK Ltd.
Abstract: MXene is a promising conductive nanofiller for hydrogels due to its excellent electrical conductivity and water dispersibility. However, MXene is prone to oxidation in the presence of air and water, resulting in a significant loss of conductivity. Polydopamine (PDA) has been coated on MXene to enhance its antioxidation stability via the physical barrier and chemical reducing ability of PDA, which unavoidably causes severe aggregation and a significant decrease in conductivity due to the crosslinking and insulation of PDA. Herein, we propose a facile strategy to construct a highly conductive, stable, and self-healing MXene-based polyvinyl alcohol (PVA) hydrogel by controlled assembly of PDA and cellulose nanocrystals (CNCs). PDA is first formed by oxidative self-polymerization in PVA solution without the presence of CNC and MXene, which can effectively reduce the content of aggregation-inducing groups and avoid the formation of an insulating PDA layer on the surface of MXene. The addition of CNCs results in the easy dispersion of a high content of MXene via hydrogen bonding and electrostatic interactions. The PVA-PDA hydrogel with MXene and CNC as conductive and reinforcing nanofillers (PP-CM) is cross-linked by dynamic borax covalent bonds and shows a conductivity of 7.14 S m^(-1). The introduction of PDA effectively protects MXene and results in only a 14% decrease in conductivity after 7 days, significantly improving antioxidant stability. This hydrogel also possesses rapid self-healing capabilities, achieving 90.5% self-healing efficiency within 10 min. This versatile approach opens new avenues for the preparation and application of MXene-based conductive hydrogels.
Abstract: Evaluating rock mass quality using three-dimensional (3D) point clouds is crucial for discontinuity extraction and is widely applied in various industrial sectors. However, the utilization of this method in geological surveys remains limited. Notable limitations of current research include the scarcity of validation using simple geometric shapes for discontinuity extraction methods, and the lack of studies that target both planar and linear discontinuities. To address these gaps, this study proposes a workflow for identifying discontinuity planes and traces in rock outcrops from photogrammetric 3D modeling, employing the Compass and Facets plugins in the open-source CloudCompare software. Prior to field application, the efficacy of the extraction methods was first evaluated using experimental datasets of a cube and an isosceles triangular prism generated under laboratory-controlled conditions. This validation demonstrated exceptional accuracy, with the dip and dip direction (DDD) of extracted structures consistently within ±2° of the actual values. Following this rigorous laboratory validation, the methodology was applied to a more complex natural rock outcrop (Miocene–Pliocene deposits in Japan), demonstrating its applicability in realistic geological settings for identifying structures. The results showed that the dip and dip direction trends of the extracted bedding planes and faults were consistent with field measurements, achieving a time reduction of approximately 40% compared to traditional methods. In conclusion, through strictly controlled initial verification and subsequent successful application to a complex natural setting, this study confirmed that the proposed workflow can effectively and efficiently extract discontinuous geological structures from point clouds.
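The dip/dip-direction values compared above derive from the orientation of a plane fitted to point cloud facets. A minimal sketch of that conversion (least-squares plane normal, then the standard structural-geology convention with x = east, y = north, z = up), using synthetic data rather than the paper's outcrop, is:

```python
import math
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points; returns the unit normal."""
    centered = points - points.mean(axis=0)
    # Smallest right singular vector of the centered cloud is the normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n            # orient the normal upward

def dip_and_direction(normal):
    """Convert an upward unit normal (x=east, y=north, z=up) to
    dip / dip direction in degrees."""
    nx, ny, nz = normal
    dip = math.degrees(math.acos(min(1.0, abs(nz))))
    dip_dir = math.degrees(math.atan2(nx, ny)) % 360.0
    return dip, dip_dir

# Synthetic bedding plane dipping 30 degrees toward the east (090)
rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, size=(200, 2))
z = -math.tan(math.radians(30.0)) * xy[:, 0]     # plane: z = -tan(30)*x
pts = np.column_stack([xy, z + 0.001 * rng.standard_normal(200)])
dip, dip_dir = dip_and_direction(fit_plane(pts))
```

On this synthetic plane the recovered orientation lands within a fraction of a degree of the true 30/090, consistent with the ±2° accuracy the laboratory validation reports.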