Funding: Supported by the Knowledge Innovation Program of Wuhan-Shuguang Project (Grant No. 2023010201020443), the School-Level Scientific Research Project Funding Program of Jianghan University (Grant No. 2022XKZX33), and the Natural Science Foundation of Hubei Province (Grant No. 2024AFB466).
Abstract: The 6D pose estimation of objects is of great significance for the intelligent assembly and sorting of industrial parts. In industrial robot production scenarios, 6D pose estimation of industrial parts faces two main challenges: the loss of information and interference caused by occlusion and stacking in sorting scenes, and the difficulty of feature extraction due to the weak texture of industrial parts. To address these problems, this paper proposes CB-PVNet, an attention-based pixel-level voting network for 6D pose estimation of weakly textured industrial parts. On the one hand, the voting scheme can predict keypoints even from occluded pixels, which improves the accuracy of keypoint localization under weak texture and partial occlusion. On the other hand, the attention mechanism extracts salient features of the object while suppressing irrelevant background features. Extensive comparative experiments were conducted on both public datasets (LINEMOD, Occlusion LINEMOD, and T-LESS) and a self-built dataset. The experimental results indicate that CB-PVNet achieves ADD(-S) accuracy comparable to the state of the art using only RGB images while maintaining real-time performance. Additionally, we conducted robot grasping experiments in the real world. The balance between accuracy and computational efficiency makes the method well suited for applications in industrial automation.
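The abstract does not detail CB-PVNet's voting scheme, but pixel-level voting networks in the PVNet family typically have each object pixel regress a unit vector pointing toward a keypoint, then recover the keypoint by intersecting sampled ray pairs and scoring each hypothesis by how many pixel directions agree with it. The sketch below is illustrative only; function names, thresholds, and the RANSAC round count are assumptions, not details from the paper:

```python
import math
import random

def intersect(p1, v1, p2, v2):
    # Solve p1 + t1*v1 = p2 + t2*v2 for the crossing point of two rays.
    det = v1[0] * (-v2[1]) + v2[0] * v1[1]
    if abs(det) < 1e-9:
        return None  # parallel rays, no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * (-v2[1]) + v2[0] * dy) / det
    return (p1[0] + t1 * v1[0], p1[1] + t1 * v1[1])

def vote_keypoint(pixels, dirs, rounds=50, cos_thresh=0.99, seed=0):
    # RANSAC-style voting: sample ray pairs to generate keypoint
    # hypotheses, keep the one most per-pixel directions agree with.
    rng = random.Random(seed)
    best, best_score = None, -1
    for _ in range(rounds):
        i, j = rng.sample(range(len(pixels)), 2)
        h = intersect(pixels[i], dirs[i], pixels[j], dirs[j])
        if h is None:
            continue
        score = 0
        for p, v in zip(pixels, dirs):
            ux, uy = h[0] - p[0], h[1] - p[1]
            n = math.hypot(ux, uy)
            # A pixel votes for h if its direction nearly points at h.
            if n > 1e-9 and (ux * v[0] + uy * v[1]) / n > cos_thresh:
                score += 1
        if score > best_score:
            best, best_score = h, score
    return best
```

Because every unoccluded pixel casts an independent vote, a keypoint can still be localized when the pixels nearest to it are occluded, which matches the robustness the abstract claims.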
Funding: Supported by the Talent Startup Program of Huangshan University under Grant No. 2025xkjq003. Additional partial funding was received from the Scientific Research Project of the Anhui Provincial Department of Education under Grant No. 2025AHGXZK40303.
Abstract: Real-time multi-person pose estimation (MPE) built upon neural network architectures aims to simultaneously detect multiple human instances and regress joint coordinates in dynamic scenes. However, due to factors such as high model complexity and limited expression of keypoint information, both the efficiency and accuracy of real-time MPE remain to be improved. To mitigate these issues, this work develops FSEM-Pose, a real-time MPE model built on the YOLOv10 framework. First, FSEM-Pose upgrades the backbone of the baseline network with Feature Shuffling-Convolution (FS-Conv), which effectively reduces the backbone size while retaining spatial information from the input image. Second, FSEM-Pose incorporates a Feature Saliency Enhancement Module (FSEM) to strengthen the feature encoding of human keypoints, thereby improving pose estimation accuracy. Finally, FSEM-Pose further improves inference efficiency through a lightweight optimization of the head using shared convolutional layers. Our method achieves competitive results across multiple accuracy and efficiency metrics on the MS COCO 2017 and CrowdPose datasets. While remaining lightweight, it improves average precision (AP) by 2.1% and 2.5%, respectively.
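The abstract does not specify FS-Conv beyond its name. If the "feature shuffling" follows the channel-shuffle idea popularized by ShuffleNet, which is a plausible but unconfirmed assumption, the core operation interleaves channels across groups so that cheap grouped convolutions can still exchange information. A minimal sketch of that reordering on plain nested lists (the function name is illustrative):

```python
def channel_shuffle(feat, groups):
    # feat: list of C channel maps (each an HxW nested list).
    # Conceptually reshape (groups, C//groups), transpose, and flatten,
    # so each output group mixes one channel from every input group.
    c = len(feat)
    assert c % groups == 0, "channel count must divide evenly into groups"
    per = c // groups
    return [feat[g * per + k] for k in range(per) for g in range(groups)]
```

The shuffle itself is a pure permutation, so it costs no parameters and almost no compute, which is consistent with the abstract's claim of shrinking the backbone while preserving spatial information.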
Funding: Support provided by the European University of Atlantic.
Abstract: Human pose estimation is crucial across diverse applications, from healthcare to human-computer interaction. Integrating inertial measurement units (IMUs) with monocular vision holds great potential for leveraging complementary modalities; however, existing approaches are often limited by IMU drift, noise, and underutilization of visual information. To address these limitations, we propose a novel dual-stream feature extraction framework that combines temporal IMU data and single-view image features for improved pose estimation. Short-term dependencies in IMU sequences are captured with convolutional layers, while a Transformer-based architecture models long-range temporal dynamics. To mitigate IMU drift and inter-sensor inconsistencies, a complementary filtering module is introduced alongside a cross-channel interaction mechanism. Features from the IMU and image streams are then fused via a dedicated fusion module and further refined by a high-precision regression head for accurate pose prediction. Experimental results on benchmark datasets demonstrate that our method significantly outperforms existing techniques in estimation accuracy and robustness, validating the effectiveness of our dual-stream architecture.
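The complementary filtering module is described only at a high level. The classic one-axis complementary filter below illustrates the underlying principle the abstract invokes: gyroscope integration is accurate over short windows but drifts, while accelerometer tilt is noisy but drift-free, so blending the two bounds the drift. All names and gains here are illustrative assumptions, not the paper's implementation:

```python
import math

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    # Fuse gyroscope angular rates (rad/s) with accelerometer readings
    # (ax, az) into a pitch-angle estimate. alpha weights the integrated
    # gyro path; (1 - alpha) pulls toward the gravity-based tilt.
    angle = 0.0
    out = []
    for g, (ax, az) in zip(gyro_rates, accels):
        accel_angle = math.atan2(ax, az)  # drift-free but noisy tilt
        angle = alpha * (angle + g * dt) + (1 - alpha) * accel_angle
        out.append(angle)
    return out
```

The gain alpha trades the two error sources: larger alpha trusts the smooth gyro path and admits more residual drift, smaller alpha suppresses drift faster but lets accelerometer noise through.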