Object detection in occluded environments remains a core challenge in computer vision (CV), especially in domains such as autonomous driving and robotics. While Convolutional Neural Network (CNN)-based two-dimensional (2D) and three-dimensional (3D) object detection methods have made significant progress, they often fall short under severe occlusion due to depth ambiguities in 2D imagery and the high cost and deployment limitations of 3D sensors such as Light Detection and Ranging (LiDAR). This paper presents a comparative review of recent 2D and 3D detection models, focusing on their occlusion-handling capabilities and the impact of sensor modalities such as stereo vision, Time-of-Flight (ToF) cameras, and LiDAR. In this context, we introduce FuDensityNet, our multimodal occlusion-aware detection framework that combines Red-Green-Blue (RGB) images and LiDAR data to enhance detection performance. As a forward-looking direction, we propose a monocular depth-estimation extension to FuDensityNet, aimed at replacing expensive 3D sensors with a more scalable CNN-based pipeline. Although this enhancement is not experimentally evaluated in this manuscript, we describe its conceptual design and potential for future implementation.
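The feature-level fusion idea behind an occlusion-aware RGB + LiDAR detector can be illustrated with a minimal sketch. The module, channel sizes, and the shared bird's-eye-view grid below are assumptions made for illustration only, not the published FuDensityNet architecture; a monocular depth network could supply the LiDAR-branch input as a pseudo point cloud in the proposed extension.

```python
import torch
import torch.nn as nn

class RGBLidarFusion(nn.Module):
    """Illustrative feature-level fusion of an RGB branch and a (voxelized)
    LiDAR branch. Layer sizes are placeholder assumptions, not the published
    FuDensityNet design."""
    def __init__(self, rgb_channels=256, lidar_channels=128, fused_channels=256):
        super().__init__()
        self.rgb_proj = nn.Conv2d(rgb_channels, fused_channels, kernel_size=1)
        self.lidar_proj = nn.Conv2d(lidar_channels, fused_channels, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * fused_channels, fused_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat, lidar_feat):
        # Both feature maps are assumed to be projected onto a common grid
        # (e.g. bird's-eye view) with identical spatial dimensions.
        r = self.rgb_proj(rgb_feat)
        l = self.lidar_proj(lidar_feat)
        return self.fuse(torch.cat([r, l], dim=1))

# Example: 256-channel RGB features and 128-channel LiDAR features on a 64x64 grid.
fusion = RGBLidarFusion()
rgb = torch.randn(1, 256, 64, 64)
lidar = torch.randn(1, 128, 64, 64)
out = fusion(rgb, lidar)  # -> torch.Size([1, 256, 64, 64])
```

A detection head would then operate on the fused feature map; the occlusion benefit comes from the LiDAR (or estimated-depth) branch contributing geometry where the RGB branch is ambiguous.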
Flexible tactile sensors have broad applications in human physiological monitoring, robotic operation, and human-machine interaction. However, developing wearable, flexible tactile sensors with high sensitivity, a wide sensing range, and the ability to detect three-dimensional (3D) force remains very challenging. Herein, a flexible tactile electronic-skin sensor based on carbon nanotube (CNT)/polydimethylsiloxane (PDMS) nanocomposites is presented for 3D contact-force detection. The 3D forces are obtained by combining four specially designed cells in a single sensing element. Owing to the double-sided rough porous structure and the specific surface morphology of the nanocomposites, the piezoresistive sensor achieves a high sensitivity of 12.1 kPa⁻¹ within the range of 600 Pa and 0.68 kPa⁻¹ beyond 1 kPa for normal pressure, as well as 59.9 N⁻¹ below 0.05 N and more than 2.3 N⁻¹ below 0.6 N for tangential force, with an ultra-low response time of 3.1 ms. In addition, multi-functional human-body monitoring was demonstrated with a single sensing cell, and a sensor array was integrated into a robotic arm for object-grasping control, indicating its potential in intelligent robot applications.
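How four cells in one sensing element can jointly resolve a 3D force can be sketched with a simple differential read-out: the common-mode resistance change of all four cells tracks the normal load, while left/right and top/bottom differences track the tangential components. The 2x2 cell ordering and the gain constants below are assumptions for illustration, not the paper's calibration procedure.

```python
import numpy as np

def decompose_3d_force(r, r0, k_n=1.0, k_t=1.0):
    """Toy decomposition of a 3D contact force from four piezoresistive cells
    arranged in a 2x2 layout (ordering assumed: [top-left, top-right,
    bottom-left, bottom-right]). k_n and k_t are placeholder calibration gains.

    r  : measured resistances of the four cells
    r0 : baseline (unloaded) resistances
    """
    dr = (np.asarray(r0) - np.asarray(r)) / np.asarray(r0)  # relative resistance change
    fz = k_n * dr.sum()                               # normal force: common-mode response
    fx = k_t * ((dr[1] + dr[3]) - (dr[0] + dr[2]))    # tangential x: right minus left
    fy = k_t * ((dr[0] + dr[1]) - (dr[2] + dr[3]))    # tangential y: top minus bottom
    return fx, fy, fz

# Example: a load that compresses the right-hand cells more than the left.
print(decompose_3d_force(r=[95, 90, 95, 90], r0=[100, 100, 100, 100]))
```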
A Wireless Sensor Network (WSN) consists of a group of nodes that analyze information from the surrounding region. The sensor nodes are responsible for accumulating and exchanging information. Node localization is the process of identifying the target node's location. In this research work, a Received Signal Strength Indicator (RSSI)-based optimal node localization approach is proposed to overcome the shortcomings of conventional node localization models. Initially, the RSSI value is estimated using a Deep Neural Network (DNN). RSSI is considered a range-based method: it requires no special hardware for node localization and incurs very little cost for localizing nodes in a 3D WSN. The positions of the anchor nodes are fixed for detecting the location of the target. The optimal position of the target node is then identified using the Hybrid T-cell Immune with Lotus Effect Optimization algorithm (HTCI-LEO). The objective of the optimal node localization is to minimize the average localization error. The hybrid algorithm performs localization effectively on both regular and irregular surfaces and converges quickly in the three-dimensional (3D) environment. The accuracy of the proposed node localization process is 94.25%.
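The range-based part of RSSI localization and the objective the optimizer minimizes can be illustrated with the standard log-distance path-loss model and the average localization error over all target nodes. The reference RSSI and path-loss exponent below are placeholder constants, not values fitted in the paper.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.5):
    """Invert the standard log-distance path-loss model (placeholder constants)
    to turn an RSSI reading (dBm) into an estimated range (m)."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def avg_localization_error(estimated, ground_truth):
    """Mean Euclidean error over all target nodes in 3D -- the quantity the
    localization algorithm tries to minimize."""
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(estimated - ground_truth, axis=1).mean()

# Example: -65 dBm maps to roughly 10 m with these placeholder constants,
# and a single target misplaced by (0.2, 0.2, 0.1) m gives an error of 0.3 m.
print(rssi_to_distance(-65.0))
print(avg_localization_error([[1.0, 2.0, 0.5]], [[1.2, 1.8, 0.4]]))
```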
The recent advances in sensing and display technologies have been transforming our living environments drastically. In this paper, a new technique is introduced to accurately reconstruct indoor environments in three dimensions using a mobile platform. The system incorporates a four-ultrasonic-sensor scanner system, an HD web camera, and an inertial measurement unit (IMU). The whole platform is mountable on mobile facilities, such as a wheelchair. The proposed mapping approach takes advantage of the precision of the sparse 3D point clouds produced by the ultrasonic sensor system to build a more accurate 3D scene. Using a robust iterative algorithm, it combines the structure-from-motion point clouds with the point clouds generated from the ultrasonic sensors and the IMU, deriving a much more precise point cloud from the ultrasonic depth measurements. Feature extraction is performed on consecutive point clouds to ensure accurate alignment, exploiting the ability of the ultrasonic point clouds to capture object features in the targeted scene. The ranges measured by the ultrasonic sensors contribute to the depth correction of the generated 3D scenes. Experiments revealed that the system generates 3D maps that are both dense and precise. The results showed that the designed 3D modeling platform can support assisted-living environments with self-navigation, obstacle alerts, and other driving-assistance tasks.
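The role of the ultrasonic ranges in correcting the depth of the structure-from-motion reconstruction can be illustrated with a minimal least-squares scale estimate: SfM recovers geometry only up to scale, and a handful of metric range measurements fixes that scale. Matching each ultrasonic range to a corresponding SfM depth is an assumption made for this sketch, not the paper's full iterative alignment.

```python
import numpy as np

def estimate_depth_scale(sfm_depths, ultrasonic_ranges):
    """Least-squares scale factor mapping (up-to-scale) SfM depths onto the
    metric ultrasonic ranges for a set of matched points."""
    sfm = np.asarray(sfm_depths, dtype=float)
    us = np.asarray(ultrasonic_ranges, dtype=float)
    return float(np.dot(sfm, us) / np.dot(sfm, sfm))

def correct_point_cloud(points, scale):
    """Apply the recovered metric scale to the whole SfM point cloud."""
    return np.asarray(points, dtype=float) * scale

# Example: the SfM depths are roughly half the metric ranges, so scale ~ 2.
s = estimate_depth_scale([0.5, 1.1, 2.0], [1.0, 2.2, 4.1])
cloud = correct_point_cloud([[0.5, 0.0, 0.5], [1.0, 0.2, 1.1]], s)
print(s, cloud)
```

In the full system, a robust iterative alignment (rather than this one-shot fit) would re-estimate correspondences and the correction at each step.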
Funding: National Natural Science Foundation of China (NSFC Nos. 61774157, 81771388, 61874121, and 61874012); Beijing Natural Science Foundation (No. 4182075); the Capital Science and Technology Conditions Platform Project (Project ID: Z181100009518014).
Funding: The authors extend their appreciation to King Saud University for funding this research through the Researchers Supporting Program (RSPD2024R918), King Saud University, Riyadh, Saudi Arabia.