Funding: Supported by the Knowledge Innovation Program of Wuhan-Shuguang Project (Grant No. 2023010201020443), the School-Level Scientific Research Project Funding Program of Jianghan University (Grant No. 2022XKZX33), and the Natural Science Foundation of Hubei Province (Grant No. 2024AFB466).
Abstract: The 6D pose estimation of objects is of great significance for the intelligent assembly and sorting of industrial parts. In industrial robot production scenarios, the 6D pose estimation of industrial parts faces two main challenges: the loss of information and the interference caused by occlusion and stacking in sorting scenes, and the difficulty of feature extraction due to the weak texture of industrial parts. To address these problems, this paper proposes CB-PVNet, an attention-based pixel-level voting network for 6D pose estimation of weakly textured industrial parts. On the one hand, the voting scheme localizes keypoints from the predictions of individual pixels, which keeps keypoint localization accurate even under weak texture and partial occlusion. On the other hand, the attention mechanism extracts the salient features of the object while suppressing irrelevant features of the surroundings. Extensive comparative experiments were conducted on public datasets (LINEMOD, Occlusion LINEMOD, and T-LESS) as well as a self-built dataset. The results indicate that CB-PVNet achieves ADD(-S) accuracy comparable to the state of the art using only RGB images while maintaining real-time performance. We also conducted robot grasping experiments in the real world. This balance between accuracy and computational efficiency makes the method well suited to industrial automation applications.
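As a rough illustration of the pixel-level voting idea (the abstract does not give CB-PVNet's exact procedure, so this follows the well-known PVNet-style RANSAC voting; all names and thresholds are illustrative): each foreground pixel predicts a unit vector toward a keypoint, pairs of pixels generate keypoint hypotheses by ray intersection, and the hypothesis with the most agreeing pixels wins.

```python
# Minimal NumPy sketch of PVNet-style pixel-wise voting for one 2D keypoint.
# Assumed inputs: foreground pixel coordinates and per-pixel unit vectors
# that a network has predicted toward the (unknown) keypoint.
import numpy as np

def vote_keypoint(pixels, vectors, n_hyp=128, cos_thresh=0.99, rng=None):
    """pixels: (N, 2) pixel coords; vectors: (N, 2) unit directions."""
    rng = rng or np.random.default_rng(0)
    n = len(pixels)
    best_kp, best_votes = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(n, size=2, replace=False)
        # Intersect the two rays p_i + t*v_i and p_j + s*v_j.
        A = np.stack([vectors[i], -vectors[j]], axis=1)
        b = pixels[j] - pixels[i]
        if abs(np.linalg.det(A)) < 1e-6:   # near-parallel rays, skip
            continue
        t, _ = np.linalg.solve(A, b)
        kp = pixels[i] + t * vectors[i]
        # Count pixels whose predicted direction agrees with this hypothesis.
        to_kp = kp - pixels
        to_kp /= np.linalg.norm(to_kp, axis=1, keepdims=True) + 1e-8
        votes = np.sum(np.sum(to_kp * vectors, axis=1) > cos_thresh)
        if votes > best_votes:
            best_kp, best_votes = kp, votes
    return best_kp, best_votes
```

Because every visible pixel casts a vote, a keypoint can still be localized when the keypoint itself (or much of the object) is occluded, which is the property the abstract credits for robustness.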
Funding: Supported by the Talent Startup Program of Huangshan University under Grant No. 2025xkjq003, with additional partial funding from the Scientific Research Project of the Anhui Provincial Department of Education under Grant No. 2025AHGXZK40303.
Abstract: Real-time multi-person pose estimation (MPE) built upon neural network architectures aims to simultaneously detect multiple human instances and regress joint coordinates in dynamic scenes. However, due to factors such as high model complexity and limited expression of keypoint information, both the efficiency and the accuracy of real-time MPE remain to be improved. To mitigate these issues, this work develops FSEM-Pose, a real-time MPE model built on the YOLOv10 framework. First, FSEM-Pose upgrades the backbone of the baseline network with the Feature Shuffling-Convolution (FS-Conv), which effectively reduces the backbone size while retaining as much spatial information from the input image as possible. Second, FSEM-Pose incorporates a Feature Saliency Enhancement Module (FSEM) to strengthen the feature encoding of human keypoints, thereby improving pose estimation accuracy. Finally, FSEM-Pose further improves inference efficiency through a lightweight head design that shares convolutional layers. Our method achieves competitive results across multiple accuracy and efficiency metrics on the MS COCO 2017 and CrowdPose datasets. Despite its lightweight design, it improves average precision (AP) by 2.1% and 2.5% on the two datasets, respectively.
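FS-Conv's internal structure is not described in the abstract, so the following PyTorch sketch only illustrates one plausible reading of "feature shuffling convolution": a cheap grouped convolution followed by a ShuffleNet-style channel shuffle so information still mixes across groups. The class name and layout are assumptions, not the paper's design.

```python
# Hypothetical shuffle-convolution block: grouped 3x3 conv (fewer params)
# plus a channel shuffle to restore cross-group information flow.
import torch
import torch.nn as nn

class ShuffleConv(nn.Module):
    def __init__(self, in_ch, out_ch, groups=4):
        super().__init__()
        self.groups = groups
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1,
                              groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def channel_shuffle(self, x):
        b, c, h, w = x.shape
        # (b, g, c//g, h, w) -> swap group/channel dims -> flatten back
        x = x.view(b, self.groups, c // self.groups, h, w)
        return x.transpose(1, 2).contiguous().view(b, c, h, w)

    def forward(self, x):
        return self.channel_shuffle(self.act(self.bn(self.conv(x))))

# quick shape check
y = ShuffleConv(64, 64)(torch.randn(1, 64, 32, 32))
assert y.shape == (1, 64, 32, 32)
```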
Funding: Support provided by the European University of Atlantic.
Abstract: Human pose estimation is crucial across diverse applications, from healthcare to human-computer interaction. Integrating inertial measurement units (IMUs) with monocular vision methods holds great potential for exploiting complementary modalities; however, existing approaches are often limited by IMU drift, noise, and the underutilization of visual information. To address these limitations, we propose a novel dual-stream feature extraction framework that effectively combines temporal IMU data and single-view image features for improved pose estimation. Short-term dependencies in IMU sequences are captured with convolutional layers, while a Transformer-based architecture models long-range temporal dynamics. To mitigate IMU drift and inter-sensor inconsistencies, a complementary filtering module is introduced alongside a cross-channel interaction mechanism. Features from the IMU and image streams are then fused via a dedicated fusion module and further refined by a high-precision regression head for accurate pose prediction. Experimental results on benchmark datasets demonstrate that our method significantly outperforms existing techniques in estimation accuracy and robustness, validating the effectiveness of the dual-stream architecture.
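For readers unfamiliar with complementary filtering, the sketch below shows the textbook version of the idea the abstract invokes for drift suppression: trust the gyroscope at high frequency and the accelerometer's gravity direction at low frequency, blended by a single coefficient. This is not the paper's exact module, only the underlying principle.

```python
# Textbook complementary filter: gyro integration drifts but is smooth;
# the accelerometer's gravity direction is noisy but drift-free.
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Fuse gyro rates and accelerometer readings into roll/pitch angles.

    gyro:  (T, 3) angular rates [rad/s] (x, y, z).
    accel: (T, 3) accelerations [m/s^2], gravity-dominated when static.
    Returns (T, 2) roll and pitch estimates in radians.
    """
    angles = np.zeros((len(gyro), 2))
    roll = pitch = 0.0
    for t in range(len(gyro)):
        # Tilt implied by gravity (low-frequency, drift-free reference).
        ax, ay, az = accel[t]
        roll_acc = np.arctan2(ay, az)
        pitch_acc = np.arctan2(-ax, np.hypot(ay, az))
        # Gyro integration (high-frequency, but drifts over time).
        roll = alpha * (roll + gyro[t, 0] * dt) + (1 - alpha) * roll_acc
        pitch = alpha * (pitch + gyro[t, 1] * dt) + (1 - alpha) * pitch_acc
        angles[t] = roll, pitch
    return angles
```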
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62071345.
Abstract: Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily because it eliminates the dependence on ground-truth depth. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios by dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; and (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
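The abstract mentions an efficient SE(3) parameterization without detailing it; a common choice for pose branches is the se(3) exponential map, which turns a 6-D twist predicted by the network into a 4x4 rigid transform. The sketch below shows that textbook construction; the paper's own parameterization may differ.

```python
# Standard se(3) exponential map: 6-vector twist -> 4x4 rigid transform.
import numpy as np

def hat(w):
    """3-vector -> skew-symmetric matrix."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def se3_exp(xi):
    """xi = (w, v): axis-angle rotation w and translation part v."""
    w, v = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-8:                      # near-identity: first-order terms
        R, V = np.eye(3) + W, np.eye(3)
    else:
        a, b = np.sin(theta) / theta, (1 - np.cos(theta)) / theta**2
        R = np.eye(3) + a * W + b * (W @ W)                   # Rodrigues
        V = np.eye(3) + b * W + (1 - a) / theta**2 * (W @ W)  # left Jacobian
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T
```

Predicting in the tangent space keeps the output unconstrained (no rotation-matrix orthogonality to enforce), which is why this parameterization tends to simplify pose networks.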
Funding: Funded by the second batch of the Tianchi Talents (Leading Talents) project in Xinjiang Uygur Autonomous Region. Project leader: Lei Liu, School of Computer Science and Technology, Xinjiang University.
Abstract: With the development of computer vision technology, deep-learning-based pose estimation and target detection have been widely used in human behavior analysis and intelligent security. However, owing to the complexity of animal poses and the diversity of species, existing pose estimation methods still face many challenges when applied to animal targets. To solve this problem, an improved YOLO-Pose model is proposed to improve the accuracy and efficiency of animal pose estimation. On the basis of the original YOLO-Pose model, a separable kernel attention mechanism is introduced and adapted to animal targets, and, combined with the spatial pyramid pooling module of YOLO-Pose, the multiscale feature fusion capability of the model is improved. The experimental results show that the improved YOLO-Pose model achieves excellent performance on both the public Animal Pose dataset and the AP-10K dataset, significantly improving target detection and pose estimation.
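The separable kernel attention is not specified beyond its name; the PyTorch sketch below follows the published large-separable-kernel-attention (LSKA) decomposition it most resembles, splitting a large depth-wise kernel into 1xK and Kx1 strips whose output gates the input features. The kernel size and the sigmoid gate are assumptions, not the paper's exact design.

```python
# LSKA-style separable kernel attention: two depth-wise strip convolutions
# approximate a K x K depth-wise kernel at roughly 2K/K^2 the cost, and the
# result modulates the input feature map.
import torch
import torch.nn as nn

class SeparableKernelAttention(nn.Module):
    def __init__(self, ch, k=7):
        super().__init__()
        self.h = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch)
        self.v = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch)
        self.proj = nn.Conv2d(ch, ch, 1)   # point-wise channel mixing

    def forward(self, x):
        attn = self.proj(self.v(self.h(x)))
        return x * torch.sigmoid(attn)     # gate features with attention map

x = torch.randn(1, 128, 40, 40)
assert SeparableKernelAttention(128)(x).shape == x.shape
```

The strip decomposition is what makes a large receptive field affordable inside a real-time detector, which matches the paper's stated goal of improving multiscale fusion without sacrificing efficiency.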