Funding: Supported by the National 2011 Collaborative Innovation Center of Wireless Communication Technologies under Grant 2242022k60006.
Abstract: This paper presents a comprehensive framework that enables communication scene recognition through deep learning and multi-sensor fusion. The study addresses the challenge that current communication scene recognition methods struggle to adapt in dynamic environments, as they typically rely on post-response mechanisms that fail to detect scene changes before users experience latency. The proposed framework leverages data from multiple smartphone sensors, including acceleration sensors, gyroscopes, magnetic field sensors, and orientation sensors, to identify different communication scenes, such as walking, running, cycling, and various modes of transportation. Extensive experimental comparison with existing methods on the open-source SHL-2018 dataset confirmed the superior performance of our approach in terms of F1 score and processing speed. Additionally, tests using a Microsoft Surface Pro tablet and a self-collected Beijing-2023 dataset validated the framework's efficiency and generalization capability. The results show that our framework achieved an F1 score of 95.15% on SHL-2018 and 94.6% on Beijing-2023, highlighting its robustness across different datasets and conditions. Furthermore, the computational complexity and power consumption of the algorithm are moderate, making it suitable for deployment on mobile devices.
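As an illustration of the kind of model such a framework could rest on, the following is a minimal sketch of a 1-D CNN classifier over fixed-length windows of smartphone sensor channels; the channel count (12, assuming tri-axial accelerometer, gyroscope, magnetometer, and orientation), window length, layer sizes, and scene count are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class SceneCNN(nn.Module):
    """Toy 1-D CNN over windows of smartphone sensor channels."""
    def __init__(self, in_channels: int = 12, num_scenes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_scenes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples_per_window)
        return self.classifier(self.features(x).squeeze(-1))

# Example: a batch of four 5-second windows sampled at 100 Hz.
model = SceneCNN()
logits = model(torch.randn(4, 12, 500))
print(logits.shape)   # torch.Size([4, 8]) scene scores per window
```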
Funding: Supported in part by the Guangxi Power Grid Company's 2023 Science and Technology Innovation Project (No. GXKJXM20230169).
Abstract: With the development of unmanned driving technology, intelligent robots, and drones, high-precision localization, navigation, and state estimation technologies have also made great progress. Traditional global navigation satellite system/inertial navigation system (GNSS/INS) integrated navigation systems can provide high-precision navigation information continuously. However, when such a system is applied to indoor or GNSS-denied environments, such as outdoor substations with strong electromagnetic interference and complex dense spaces, it is often unable to obtain high-precision GNSS positioning data. The positioning and orientation errors then diverge and accumulate rapidly, which cannot meet the high-precision localization requirements of large-scale, long-distance navigation scenarios. This paper proposes a high-precision state estimation method that fuses GNSS/INS/Vision, using factor graph optimization as the nonlinear optimizer at the core of the multi-source fusion. Collected experimental data and simulation results show that the system performs well both indoors and in environments with partial GNSS signal loss.
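The factor-graph idea behind this kind of GNSS/INS/Vision fusion can be illustrated with a toy 1-D example: the trajectory states are the unknowns, and relative (odometry-like) and absolute (GNSS-like) measurements contribute weighted residuals that a nonlinear least-squares solver minimizes jointly. This sketch uses SciPy instead of a dedicated factor-graph library, and the noise levels and measurement layout are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy 1-D trajectory: 10 poses, odometry between consecutive poses,
# GNSS fixes only at a few epochs (simulating partial signal loss).
true_x = np.arange(10, dtype=float)
odom = np.diff(true_x) + rng.normal(0, 0.05, 9)        # relative factors
gnss_idx = [0, 3, 9]                                   # epochs with a fix
gnss = true_x[gnss_idx] + rng.normal(0, 0.5, 3)        # absolute factors

def residuals(x):
    r_odom = (x[1:] - x[:-1] - odom) / 0.05            # weight = 1 / sigma
    r_gnss = (x[gnss_idx] - gnss) / 0.5
    return np.concatenate([r_odom, r_gnss])

sol = least_squares(residuals, x0=np.zeros(10))
print(np.round(sol.x, 2))                              # fused trajectory estimate
```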
Funding: Supported by the Key Research and Development Program of Jiangsu Province (Grant Nos. BE2022069-1 and BE2022069-2), the Natural Science Research Project of Jiangsu Higher Education Institutions (Grant Nos. 22KJB460030 and 22KJB460004), the Suzhou Science and Technology Development Plan (Grant No. SYC2022020), startup funding at Nanjing Normal University (Grant No. 184080H202B318), and the 2022 Nanjing Carbon Peak and Neutrality Technology Innovation Special Fund (Grant No. 202211017).
Abstract: Although laser powder bed fusion (LPBF) technology is considered one of the most promising additive manufacturing techniques, the fabricated parts still suffer from porosity defects, which can severely impact their mechanical performance. Monitoring the printing process with a variety of sensors enables a comprehensive capture of the processing status and thus improved monitoring accuracy. However, existing multi-sensing signals are mainly optical and acoustic, and camera-based signals are mostly layer-wise images captured after printing, preventing real-time monitoring. This paper proposes a real-time melt-pool-based in-situ quality monitoring method for LPBF using multiple sensors. High-speed cameras, photodiodes, and microphones were used to collect signals during the experimental process. All three types of signals were transformed from one-dimensional time-domain signals into corresponding two-dimensional grayscale images, which enabled the capture of more localized features. Based on an improved LeNet-5 model and the weighted Dempster-Shafer evidence theory, single-sensor, dual-sensor, and triple-sensor fusion monitoring models were investigated with the three types of signals, and their performances were compared. The results showed that the triple-sensor fusion monitoring model achieved the highest recognition accuracy, with accuracy rates of 97.98%, 92.63%, and 100% for high-, medium-, and low-quality samples, respectively. Hence, a multi-sensor fusion based melt pool monitoring system can improve the accuracy of quality monitoring in the LPBF process, which has the potential to reduce porosity defects. Finally, the experimental analysis demonstrates that the convolutional neural network proposed in this study has better classification accuracy than other machine learning models.
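Dempster's rule of combination, which underlies the evidence fusion step described above, can be sketched as follows. The per-sensor mass values and the three quality classes are illustrative assumptions; the paper's weighted variant would additionally rescale each sensor's masses by a reliability weight before combining.

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule for two mass functions whose focal elements are
    frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical per-sensor beliefs over quality classes {high, medium, low}.
H, M, L = frozenset({"high"}), frozenset({"medium"}), frozenset({"low"})
camera = {H: 0.7, M: 0.2, L: 0.1}
photodiode = {H: 0.6, M: 0.3, L: 0.1}
microphone = {H: 0.5, M: 0.3, L: 0.2}

fused = dempster_combine(dempster_combine(camera, photodiode), microphone)
print(max(fused, key=fused.get), fused)   # fused quality decision
```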
Funding: The National Key R&D Program of China (2018AAA0103103).
Abstract: The perception module of advanced driver assistance systems plays a vital role. Perception schemes often use a single sensor for data processing and environmental perception, or fuse the information processing results of various sensors at the detection layer. This paper proposes a multi-scale, multi-sensor data fusion strategy at the front end of perception and accomplishes a multi-sensor disparity map generation scheme. A binocular stereo vision sensor composed of two cameras and a light detection and ranging (LiDAR) sensor jointly perceive the environment, and a multi-scale fusion scheme is employed to improve the accuracy of the disparity map. This solution retains the dense perception of the binocular stereo vision sensor while exploiting the ranging accuracy of the LiDAR sensor. Experiments demonstrate that the proposed multi-scale, multi-sensor scheme significantly improves disparity map estimation.
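A single-scale simplification of this idea, blending dense stereo disparities with sparse but more accurate LiDAR-derived disparities where the latter exist, can be sketched as below. The confidence weight and the NaN convention for missing LiDAR returns are assumptions, and the paper's scheme additionally fuses across multiple scales.

```python
import numpy as np

def fuse_disparity(stereo_disp: np.ndarray, lidar_disp: np.ndarray,
                   lidar_weight: float = 0.8) -> np.ndarray:
    """Blend a dense stereo disparity map with sparse LiDAR-derived
    disparities; lidar_disp is NaN wherever the LiDAR has no return."""
    fused = stereo_disp.astype(float).copy()
    valid = ~np.isnan(lidar_disp)
    fused[valid] = (lidar_weight * lidar_disp[valid]
                    + (1.0 - lidar_weight) * stereo_disp[valid])
    return fused

# Toy 4x4 example: the LiDAR covers only two pixels.
stereo = np.full((4, 4), 10.0)
lidar = np.full((4, 4), np.nan)
lidar[1, 2], lidar[3, 0] = 12.0, 9.0
print(fuse_disparity(stereo, lidar))
```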
Abstract: To address the issue of sensor configuration redundancy in intelligent driving, this paper constructs a multi-objective optimization model that considers cost, coverage ability, and perception performance. Then, for a specific set of parameters, the NSGA-II algorithm is used to solve the multi-objective model, and a Pareto front containing 24 typical configuration schemes is extracted after applying empirical constraints. Finally, using the proposed decision preference method, which combines subjective and objective factors, decision scores are calculated and the configuration schemes are ranked under both cost and performance preferences. The results indicate that the multi-objective optimization model can screen and optimize configuration schemes from the perspective of whole-vehicle optimality, and that the optimized schemes can be quantitatively ranked to obtain decision results under different preference tendencies.
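Independently of the NSGA-II solver itself, the screening step can be illustrated by extracting the non-dominated (Pareto-optimal) configurations from a set of candidates scored on cost (minimize), coverage, and perception performance (both maximize); the candidate configurations and numbers below are invented.

```python
def pareto_front(candidates):
    """Return the candidates not dominated by any other.
    Each candidate is (name, cost, coverage, perception);
    cost is minimized, coverage and perception are maximized."""
    def dominates(a, b):
        no_worse = a[1] <= b[1] and a[2] >= b[2] and a[3] >= b[3]
        better = a[1] < b[1] or a[2] > b[2] or a[3] > b[3]
        return no_worse and better

    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical configurations: (name, cost in k$, coverage %, perception score)
configs = [
    ("cam-only",        2.0,  60, 0.70),
    ("cam+radar",       4.5,  80, 0.82),
    ("cam+lidar",       9.0,  85, 0.90),
    ("cam+radar+lidar", 11.0, 95, 0.93),
    ("radar-only",      3.0,  55, 0.60),   # dominated by cam-only
]
for c in pareto_front(configs):
    print(c)
```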
Funding: Financial support from the National Natural Science Foundation of China (62103039, 62073030) and the Joint Fund of Ministry of Education for Equipment Pre-Research (8091B03032303).
Abstract: Cable-driven soft robots exhibit complex deformations, making state estimation challenging. Hence, this paper develops a multi-sensor fusion approach that uses a gradient descent strategy to estimate the weighting coefficients. These coefficients combine measurements from proprioceptive sensors, such as resistive flex sensors, to determine the bending angle. The adopted fusion strategy provides robust state estimates and overcomes mismatches between the flex sensors and the soft robot's dimensions. Furthermore, a nonlinear differentiator is introduced to filter the differentiated sensor signals, addressing the noise and irrational values generated by the analog-to-digital converter. A rational polynomial equation is also introduced to compensate for the temperature drift exhibited by the resistive flex sensors, which affects the accuracy of state estimation and control. The processed multi-sensor data is then utilized in an improved PD controller for closed-loop control of the soft robot. The controller incorporates the nonlinear differentiator and drift compensation, enhancing tracking performance. Experimental results validate the effectiveness of the integrated approach, demonstrating improved tracking accuracy and robustness compared to traditional PD controllers.
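A minimal sketch of fitting fusion weights by gradient descent: given synchronized readings from several flex sensors and a reference bending angle (for example from a camera during calibration), the weights of a linear combination are updated to minimize the mean squared error. The synthetic data, learning rate, and linear fusion form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: 3 flex-sensor readings per sample and a
# reference bending angle (degrees) measured externally.
X = rng.uniform(0.0, 1.0, size=(200, 3))
true_w = np.array([25.0, 40.0, 15.0])
angle_ref = X @ true_w + rng.normal(0.0, 0.5, 200)

w = np.zeros(3)                       # fusion weights to be estimated
lr = 0.1
for _ in range(2000):
    err = X @ w - angle_ref           # prediction error per sample
    grad = 2.0 * X.T @ err / len(X)   # gradient of the mean squared error
    w -= lr * grad

print(np.round(w, 2))                 # approaches true_w up to noise
```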
Funding: Supported by the Beijing Natural Science Foundation (No. L221003).
Abstract: The construction of high-precision urban rail maps is crucial for the safe and efficient operation of railway transportation systems. However, the repetitive features and sparse textures of urban rail environments pose challenges for high-precision map construction. Motivated by this, this paper proposes a high-precision urban rail map construction algorithm based on multi-sensor fusion. The algorithm integrates LiDAR and Inertial Measurement Unit (IMU) data to construct the geometric structure map of the urban rail. It then utilizes image point-line features and color information to improve map accuracy by minimizing photometric errors, thus generating high-precision maps. Experimental results on a real urban rail dataset demonstrate that the proposed algorithm achieves root mean square errors of 0.345 m and 1.033 m for ground and tunnel scenes, respectively, a 19.31% and 56.80% improvement over state-of-the-art methods.
Funding: National Key R&D Program of China (No. 2021YFB2501102).
Abstract: The Global Navigation Satellite System (GNSS) can provide all-weather, all-time, high-precision positioning, navigation, and timing services, which play an important role in national security, the national economy, public life, and other areas. However, in environments with limited satellite signals, such as urban canyons, tunnels, and indoor spaces, it is difficult to provide accurate and reliable positioning services with satellite navigation alone. Multi-source sensor integrated navigation can effectively overcome the limitations of single-sensor navigation by fusing different types of sensor data, such as Inertial Measurement Unit (IMU), vision sensor, and LiDAR data, and can provide more accurate, stable, and robust navigation information in complex environments. This paper summarizes the research status of multi-source sensor integrated navigation technology and focuses on the representative innovations and applications of integrated navigation and positioning technology by major scientific research institutions in China during 2019-2023.
Funding: Support from the National Natural Science Foundation of China (Grant No. U1809219) and the Key Research and Development Project of Zhejiang Province (Grant No. 2020C01088).
Abstract: Multi-sensor measurement is widely employed in rotating machinery to ensure the safety of machines. The information provided by a single sensor is not comprehensive; multi-sensor signals can provide complementary information for characterizing the health condition of machines. This paper proposes a multi-sensor fusion convolutional neural network (MF-CNN) model. The proposed model adds a 2-D convolution layer before a classical 1-D CNN to automatically extract complementary features of multi-sensor signals and minimize the loss of information. A series of experiments on a rolling bearing test rig verify the model: vibration and sound signals are fused to achieve higher classification accuracy than typical machine learning models. In addition, the model is further applied to gas turbine anomaly detection and shows strong robustness and generalization.
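A rough sketch of the core idea, applying a 2-D convolution across the stacked sensor signals before a 1-D convolutional classifier, is shown below; the layer sizes, signal length, and number of health classes are assumptions, not the authors' exact MF-CNN.

```python
import torch
import torch.nn as nn

class ToyMFCNN(nn.Module):
    """2-D conv mixes the stacked sensor signals (e.g. vibration + sound)
    before a 1-D CNN extracts temporal features for classification."""
    def __init__(self, n_sensors: int = 2, n_classes: int = 4):
        super().__init__()
        # Input treated as a 1-channel "image" of shape (n_sensors, length).
        self.mix2d = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(n_sensors, 9), padding=(0, 4)),
            nn.ReLU(),
        )
        self.temporal1d = nn.Sequential(
            nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        # x: (batch, n_sensors, length) -> add a channel dim for Conv2d
        z = self.mix2d(x.unsqueeze(1))        # (batch, 8, 1, length)
        z = self.temporal1d(z.squeeze(2))     # (batch, 16, 1)
        return self.head(z.squeeze(-1))

logits = ToyMFCNN()(torch.randn(4, 2, 1024))  # fused vibration + sound windows
print(logits.shape)                           # torch.Size([4, 4])
```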
Funding: The MSIP (Ministry of Science, ICT & Future Planning), Korea, under the ITRC (Information Technology Research Center) support program (NIPA-2013-H0301-13-2006), supervised by the NIPA (National IT Industry Promotion Agency).
Abstract: This paper presents an obstacle detection approach for blind pedestrians that fuses data from a camera and a laser sensor. For a purely vision-based blind guidance system, it is difficult to discriminate low-level obstacles from a cluttered road surface, while a purely laser-based system usually has to scan the forward environment, which is very inconvenient. To overcome these inherent problems of using the camera and laser sensor independently, a sensor-fusion model is proposed to associate range data from the laser domain with edges from the image domain. Based on this fusion model, an obstacle's position, size, and shape can be estimated. The proposed method is tested in several indoor scenes, and its efficiency is confirmed.
Funding: Supported by the Special Funds for the Major State Basic Research Program of P. R. China (2001CB309403).
Abstract: Maneuvering target tracking is a fundamental task in intelligent vehicle research. This paper focuses on the problem of fusion between radar and image sensors in target tracking. In order to improve positioning accuracy and narrow down the image working area, a novel method that integrates a radar filter with image intensity is proposed to establish an adaptive vision window. A weighted Hausdorff distance is introduced to define the functional relationship between the image and the model projection, and a modified simulated annealing algorithm is used to find the optimal orientation parameter. Furthermore, the global state is estimated with a distributed data fusion algorithm. Experimental results show that the method is accurate.
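One common way to define a weighted Hausdorff distance between two point sets (here, image edge points and projected model points) is to weight each point's nearest-neighbour distance before taking the maximum; the weighting scheme and point sets below are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

def directed_weighted_hausdorff(A, B, w):
    """max over points a in A of w(a) * (min distance from a to set B)."""
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # |A| x |B|
    return float(np.max(w * dists.min(axis=1)))

def weighted_hausdorff(A, B, wA, wB):
    return max(directed_weighted_hausdorff(A, B, wA),
               directed_weighted_hausdorff(B, A, wB))

# Toy 2-D example: image edge points vs. projected model points, with
# weights emphasizing edge points of higher gradient magnitude.
edges = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1]])
model = np.array([[0.1, 0.0], [1.0, 0.0], [2.1, 0.0]])
w_e = np.array([1.0, 0.8, 1.2])
w_m = np.ones(len(model))
print(weighted_hausdorff(edges, model, w_e, w_m))
```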
Abstract: Ensuring that autonomous vehicles maintain high precision and rapid response capabilities in complex and dynamic driving environments is a critical challenge in the field of autonomous driving. This study aims to enhance the learning efficiency of multi-sensor feature fusion in autonomous driving tasks, thereby improving the safety and responsiveness of the system. To achieve this goal, we propose an innovative multi-sensor feature fusion model that integrates three distinct modalities: visual, radar, and lidar data. The model optimizes the feature fusion process through two novel mechanisms: Sparse Channel Pooling (SCP) and Residual Triplet-Attention (RTA). First, the SCP mechanism enables the model to adaptively select salient feature channels while eliminating the interference of redundant features. This strengthens the model's emphasis on the features critical for decision-making and its robustness to environmental variability. Second, the RTA mechanism addresses feature misalignment across modalities by effectively aligning key cross-modal features. This alignment reduces the computational overhead associated with redundant features and enhances the overall efficiency of the system. Furthermore, this study incorporates a reinforcement learning module designed to optimize strategies within a continuous action space. By integrating this module with the feature fusion learning process, the entire system can learn efficient driving strategies end-to-end within the CARLA autonomous driving simulator. Experimental results demonstrate that the proposed model significantly enhances the perception and decision-making accuracy of the autonomous driving system in complex traffic scenarios while maintaining real-time responsiveness. This work provides a novel perspective and technical pathway for the application of multi-sensor data fusion in autonomous driving.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62276204, 62203343).
Abstract: This study investigates a consistent fusion algorithm for distributed multi-rate multi-sensor systems operating in feedback-memory configurations, where each sensor's sampling period is uniform and an integer multiple of the state update period. The focus is on scenarios where the correlations among the Measurement Noises (MNs) of different sensors are unknown. Firstly, a non-augmented local estimator that applies to such sampling cases is designed to provide unbiased Local Estimates (LEs) at the fusion points. Subsequently, a measurement-equivalent approach is developed to parameterize the correlation structure between LEs and reformulate the LEs into a unified form, thereby constraining the correlations arising from the MNs to an admissible range. Simultaneously, a family of upper bounds on the joint error covariance matrix of the LEs is derived from the constrained correlations, avoiding the need to calculate the exact error cross-covariance matrix of the LEs. Finally, a sequential fusion estimator is proposed in the sense of Weighted Minimum Mean Square Error (WMMSE), and it is proven to be unbiased, consistent, and more accurate than the well-known covariance intersection method. Simulation results illustrate the effectiveness of the proposed algorithm by highlighting improvements in consistency and accuracy.
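For context, the covariance intersection baseline that the proposed estimator is compared against fuses two estimates with unknown cross-correlation by taking a convex combination of their information matrices and choosing the weight that minimizes the trace of the fused covariance. The sketch below uses a simple grid search for the weight; the numerical example is invented.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-correlation via covariance
    intersection, picking the weight w that minimizes trace(P_fused)."""
    best = None
    for w in np.linspace(1e-3, 1 - 1e-3, n_grid):
        info = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        x = P @ (w * np.linalg.inv(P1) @ x1 + (1 - w) * np.linalg.inv(P2) @ x2)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

x1, P1 = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
x2, P2 = np.array([1.2, 1.8]), np.diag([2.0, 0.4])
x_ci, P_ci = covariance_intersection(x1, P1, x2, P2)
print(np.round(x_ci, 3), np.round(np.diag(P_ci), 3))
```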
Funding: Supported by the Scientific and Technological Innovation 2030 (No. 2021ZD0110900).
Abstract: In recent years, Simultaneous Localization And Mapping (SLAM) technology has become prevalent in a wide range of applications, such as autonomous driving, intelligent robots, Augmented Reality (AR), and Virtual Reality (VR). Multi-sensor fusion using the three most popular types of sensors (visual sensors, LiDAR sensors, and IMUs) is becoming ubiquitous in SLAM, in part because of their complementary sensing capabilities and the inherent shortcomings (e.g., low precision and long-term drift) of any stand-alone sensor in challenging environments. In this article, we survey the research efforts in this field thoroughly and strive to provide a concise but complete review of the related work. Firstly, a brief introduction to the state estimator formulation in SLAM is presented. Secondly, the state-of-the-art multi-sensor fusion algorithms are reviewed. We then analyze the deficiencies of the reviewed approaches and formulate some future research considerations. This paper can be considered a brief guide for newcomers and a comprehensive reference for experienced researchers and engineers exploring new research directions.
Funding: Supported by the Advanced Research Projects Agency-Energy (ARPA-E), USA, under award number DE-AR0001316.
Abstract: With the advancement of artificial intelligence, the dominance of deep learning (DL) models over ordinary machine learning (ML) algorithms has become a reality in recent years, owing to their capability of handling complex pattern recognition without manual feature pre-definition. With growing demands for power savings, building energy loss reduction could benefit from DL techniques. For buildings or rooms with varying numbers of occupants, heating, ventilation, and air conditioning (HVAC) systems often operate without much necessity. To reduce a building's energy loss, accurate occupancy detection/prediction (ODP) results could be used to control the proper operation of HVAC systems. However, ODP is a challenging problem for multiple reasons, such as improper selection and deployment of sensors, inefficient learning algorithms for pattern recognition, and varying room conditions. To overcome these challenges, we propose a DL-based framework, Deep Weighted Fusion Learning (DWFL), to detect and predict occupancy counts with an optimal multi-sensor fusion structure. DWFL fuses the features extracted from multiple types of sensors, assigning a priority/weight to each sensor. The weight assignment accounts for different room conditions and the pros and cons of each type of sensor. To evaluate the DWFL model in terms of occupancy prediction accuracy, we set up an experimental testbed with low-cost cameras, carbon dioxide (CO₂) sensors, and passive infrared (PIR) sensors. Among recently proposed occupancy detection models, DeepFusion applied a deep learning model to heterogeneous sensor data and achieved 88% accuracy in occupancy count estimation (Xue et al., 2019), and the deep learning-based MI-PIR model achieved 91% accuracy on raw analog data from PIR sensors (Andrews et al., 2020). Our model achieves 94% accuracy; the experimental results therefore show that the DWFL scheme outperforms state-of-the-art ODP methods by 3%.
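The weighted-fusion idea, assigning each sensor stream a learnable priority before combining the extracted features, can be sketched as follows. The feature dimensions, softmax weighting, and regression head are assumptions for illustration rather than the DWFL architecture itself.

```python
import torch
import torch.nn as nn

class WeightedSensorFusion(nn.Module):
    """Fuse per-sensor feature vectors with learnable softmax weights,
    then regress an occupancy count."""
    def __init__(self, n_sensors: int = 3, feat_dim: int = 16):
        super().__init__()
        self.sensor_logits = nn.Parameter(torch.zeros(n_sensors))
        self.head = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, n_sensors, feat_dim), e.g. camera / CO2 / PIR features
        w = torch.softmax(self.sensor_logits, dim=0)        # (n_sensors,)
        fused = (w[None, :, None] * feats).sum(dim=1)       # (batch, feat_dim)
        return self.head(fused).squeeze(-1)                 # occupancy count

model = WeightedSensorFusion()
counts = model(torch.randn(8, 3, 16))
print(counts.shape)   # torch.Size([8])
```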
基金supported by the National Natural Science Foundation of China(Grant no.52075488)the Natural Science Foundation of Zhejiang Province(LY20E050023).
Abstract: Quadruped robots with body joints exhibit enhanced mobility; however, in outdoor environments the energy a robot can carry is limited, necessitating the optimization of energy consumption so that more tasks can be accomplished within these constraints. Inspired by quadruped animals, this paper proposes an energy-saving strategy for a body-joint quadruped robot based on a Central Pattern Generator (CPG) with multi-sensor fusion bio-reflexes. First, an energy consumption model of the robot is established, and energy characteristic tests are conducted under different gait parameters. Based on these energy characteristics, optimal energy-efficient gait parameters are determined for various environmental conditions. Second, biological reflex mechanisms are studied, and a motion control model based on multi-sensor fusion biological reflexes is established with the CPG as its foundation. By integrating the reflex model and the gait parameters, real-time adaptive adjustment of the robot's gait is achieved on complex terrain, reducing the energy loss caused by terrain disturbances. Finally, a prototype of the body-joint quadruped robot is built for experimental verification. Simulation and experimental results demonstrate that the proposed algorithm effectively reduces the robot's Cost of Transport (COT) and significantly improves energy efficiency. These results can serve as a useful reference for research on the energy efficiency of quadruped robots on complex terrain.
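A CPG is typically built from coupled limit-cycle oscillators whose outputs drive the joints. The minimal sketch below integrates a single Hopf oscillator with Euler steps; a sensory reflex could be injected by perturbing the frequency or amplitude terms. The parameter values are illustrative, not the paper's.

```python
import numpy as np

def hopf_cpg(steps=5000, dt=0.001, mu=1.0, omega=2 * np.pi, alpha=10.0):
    """Single Hopf oscillator: converges to a limit cycle of radius
    sqrt(mu) and angular frequency omega; x can drive a joint angle."""
    x, y = 0.1, 0.0
    traj = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = alpha * (mu - r2) * x - omega * y
        dy = alpha * (mu - r2) * y + omega * x
        x, y = x + dt * dx, y + dt * dy
        traj.append(x)
    return np.array(traj)

signal = hopf_cpg()
print(round(float(signal[-1000:].max()), 2))   # amplitude settles near sqrt(mu) = 1.0
```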
Funding: Funded by the National Natural Science Foundation of China [grant number 42250610212] and the China Scholarship Council [No. 202106270150].
Abstract: Satellite Interferometric Synthetic Aperture Radar (InSAR) is widely used for topographic, geological, and natural resource investigations. However, most existing InSAR studies of ground deformation are based on relatively short periods and single sensors. This paper introduces a new multi-sensor InSAR time series data fusion method for time-overlapping and time-interval datasets, addressing cases where partial overlaps and/or temporal gaps exist. A new Power Exponential Knothe Model (PEKM) fits and fuses overlaps in the deformation curves, while a Long Short-Term Memory (LSTM) neural network predicts and fuses any temporal gaps in the series. Taking the city of Wuhan (China) as the experimental area, COSMO-SkyMed (2011-2015), TerraSAR-X (2015-2019), and Sentinel-1 (2019-2021) SAR datasets were fused to map long-term surface deformation over the last decade. An independent 2011-2020 InSAR time series analysis based on 230 COSMO-SkyMed scenes was used as a reference for comparison. The correlation coefficient between the results of the fusion algorithm and the reference data is 0.87 in the time-overlapping region and 0.97 in the time-interval dataset, and the correlation coefficient of the overall results is 0.78, demonstrating that the proposed algorithm reproduces the trend of the reference deformation curve. The experimental results are consistent with existing studies of surface deformation at Wuhan, demonstrating the accuracy of the proposed fusion method in providing robust time series for the analysis of long-term land subsidence mechanisms.
Funding: Supported in part by the National Natural Science Foundation of China (12171124, 61933007), the Natural Science Foundation of Heilongjiang Province of China (ZD2022F003), the National High-End Foreign Experts Recruitment Plan of China (G2023012004L), the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
Abstract: In this paper, the problem of cubature Kalman fusion filtering (CKFF) is addressed for multi-sensor systems under amplify-and-forward (AaF) relays. To facilitate data transmission, AaF relays are utilized to regulate signal communication between the sensors and the filters. Here, the randomly varying channel parameters are represented by a set of stochastic variables whose occurrence probabilities are permitted to exhibit bounded uncertainty. Employing the spherical-radial cubature principle, a local filter under AaF relays is first constructed; this construction guarantees and minimizes an upper bound on the filtering error covariance by designing an appropriate filter gain. Subsequently, the local filters are fused through the covariance intersection fusion rule. Furthermore, the uniform boundedness of the upper bound on the filtering error covariance is investigated by establishing certain sufficient conditions. The effectiveness of the proposed CKFF scheme is finally validated via a simulation experiment on a three-phase induction machine.
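The spherical-radial cubature rule at the core of a cubature Kalman filter approximates Gaussian-weighted integrals with 2n equally weighted points placed at ±√n along the columns of a square root of the covariance. The sketch below generates the points and propagates them through an arbitrary nonlinearity; the example mean, covariance, and function are made up.

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n spherical-radial cubature points of N(mean, cov)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)                            # square root of covariance
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])   # n x 2n unit directions
    return mean[:, None] + S @ xi                          # n x 2n points

def propagate(mean, cov, f):
    """Predicted mean of f(x) under x ~ N(mean, cov) via the cubature rule."""
    pts = cubature_points(mean, cov)
    fx = np.array([f(pts[:, i]) for i in range(pts.shape[1])])
    return fx.mean(axis=0)                                 # equal weights 1/(2n)

mean = np.array([1.0, 0.5])
cov = np.array([[0.2, 0.05], [0.05, 0.1]])
f = lambda x: np.array([np.sin(x[0]), x[0] * x[1]])        # toy nonlinearity
print(np.round(propagate(mean, cov, f), 3))
```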
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFC3004104), the National Natural Science Foundation of China (Grant No. U2342204), the Innovation and Development Program of the China Meteorological Administration (Grant No. CXFZ2024J001), the Open Research Project of the Key Open Laboratory of Hydrology and Meteorology of the China Meteorological Administration (Grant No. 23SWQXZ010), the Science and Technology Plan Project of Zhejiang Province (Grant No. 2022C03150), the Open Research Fund Project of Anyang National Climate Observatory (Grant No. AYNCOF202401), and the Open Bidding for Selecting the Best Candidates Program (Grant No. CMAJBGS202318).
Abstract: Thunderstorm wind gusts are small in scale, typically occurring within a range of a few kilometers, and are extremely challenging to monitor and forecast using automatic weather stations alone. It is therefore necessary to establish thunderstorm wind gust identification techniques based on multi-source high-resolution observations. This paper introduces a new algorithm, the thunderstorm wind gust identification network (TGNet), which leverages multimodal feature fusion to combine the temporal and spatial features of thunderstorm wind gust events. The shapelet transform is first used to extract temporal features of wind speed from automatic weather stations, with the aim of distinguishing thunderstorm wind gusts from gusts caused by synoptic-scale systems or typhoons. Then the encoder, built upon the U-shaped network (U-Net) with recurrent residual convolutional blocks (R2U-Net), extracts the corresponding spatial convective characteristics from satellite, radar, and lightning observations. Finally, a multimodal deep fusion module based on multi-head cross-attention incorporates the temporal wind speed features at each automatic weather station into the spatial features to obtain a 10-minute classification of thunderstorm wind gusts. TGNet products have high accuracy, with a critical success index of 0.77; compared with U-Net and R2U-Net, the false alarm rate decreases by 31.28% and 24.15%, respectively. The new algorithm provides gridded thunderstorm wind gust products with a spatial resolution of 0.01°, updated every 10 minutes. The results are finer and more accurate, helping to improve the accuracy of operational warnings for thunderstorm wind gusts.
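Multi-head cross-attention of the kind used to inject a station's temporal feature vector into a grid of spatial features can be sketched with PyTorch's built-in attention module; the feature sizes and the single-query formulation are assumptions for illustration, not TGNet's exact fusion module.

```python
import torch
import torch.nn as nn

embed_dim, n_heads = 64, 4
attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

# Query: the temporal (shapelet-based) wind-speed feature of one station.
# Keys/values: the flattened spatial features around that station,
# e.g. an 8x8 neighbourhood of encoder outputs -> 64 tokens.
temporal_feat = torch.randn(1, 1, embed_dim)      # (batch, 1 query, dim)
spatial_feats = torch.randn(1, 64, embed_dim)     # (batch, 64 tokens, dim)

fused, attn_weights = attn(temporal_feat, spatial_feats, spatial_feats)
print(fused.shape, attn_weights.shape)            # both torch.Size([1, 1, 64]) here
```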