Funding: Supported in part by the National Natural Science Foundation of China under Grant 62071345.
Abstract: Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily because it eliminates the dependence on ground-truth depth. However, the prevailing architectures in this domain suffer from an inherent limitation: existing pose-network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios by dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene-dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; and (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
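As background for the SE(3) transformation parameterization mentioned in contribution (3), the standard way to map a compact 6-vector to a rigid transform is the se(3) exponential map. The sketch below is illustrative only (the function name `se3_exp` and the 6-vector layout are assumptions, not the paper's actual implementation):

```python
import numpy as np

def se3_exp(xi):
    """Map a 6-vector xi = (omega, v) in se(3) to a 4x4 rigid transform.

    omega (xi[:3]): axis-angle rotation; v (xi[3:]): translation part.
    Illustrative sketch of the standard exponential map, not the
    paper's exact parameterization.
    """
    omega, v = xi[:3], xi[3:]
    theta = np.linalg.norm(omega)
    # skew-symmetric matrix of omega
    K = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    if theta < 1e-8:
        R = np.eye(3) + K          # first-order approximation near zero
        V = np.eye(3)
    else:
        # Rodrigues' formula for the rotation block
        R = (np.eye(3) + np.sin(theta) / theta * K
             + (1 - np.cos(theta)) / theta**2 * K @ K)
        # left Jacobian maps v to the translation block
        V = (np.eye(3) + (1 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T
```

Regressing such a 6-vector per pixel (rather than a full matrix) is one common way a network can output dense pose fields compactly, which is consistent with the streamlining the abstract describes.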
Funding: Funded by the second batch of the Tianchi Talents (Leading Talents) Project in the Xinjiang Uygur Autonomous Region. Project leader: Lei Liu, School of Computer Science and Technology, Xinjiang University.
Abstract: With the development of computer vision technology, deep-learning-based pose estimation and target detection have been widely used in human behavior analysis and intelligent security. However, owing to the complexity of animal poses and the diversity of species, existing pose estimation methods still face many challenges when applied to animal targets. To solve this problem, an improved YOLO-Pose model is proposed to increase the accuracy and efficiency of animal pose estimation. On the basis of the original YOLO-Pose model, a separable kernel attention mechanism is introduced and adapted to animal targets; combined with the spatial pyramid pooling of YOLO-Pose, it improves the model's multiscale feature fusion capability. The experimental results show that the improved YOLO-Pose model achieves excellent performance on both the public animal pose dataset and the AP-10K dataset, significantly improving target detection and pose estimation.
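To illustrate the idea behind a separable kernel attention mechanism, the toy sketch below factorizes a k x k spatial context into a 1 x k pass followed by a k x 1 pass and uses the result as a multiplicative gate. Everything here (the function name, the box-filter stand-in for learned kernels, the single-channel input) is an assumption for illustration, not the paper's actual module:

```python
import numpy as np

def separable_kernel_attention(x, k=7):
    """Toy separable large-kernel attention on a single-channel map x (H, W).

    A k x k context is gathered with a 1 x k pass then a k x 1 pass
    (cost O(2k) per pixel instead of O(k^2)), then squashed to an
    attention gate. Illustrative sketch only; real modules use learned
    depthwise kernels per channel.
    """
    pad = k // 2
    # horizontal 1 x k average (box filter stands in for a learned kernel)
    xp = np.pad(x, ((0, 0), (pad, pad)), mode="edge")
    h = np.stack([xp[:, i:i + x.shape[1]] for i in range(k)]).mean(axis=0)
    # vertical k x 1 average over the horizontal result
    hp = np.pad(h, ((pad, pad), (0, 0)), mode="edge")
    ctx = np.stack([hp[i:i + x.shape[0], :] for i in range(k)]).mean(axis=0)
    attn = 1.0 / (1.0 + np.exp(-ctx))  # sigmoid gate from the context
    return x * attn
```

The separable factorization is what keeps a large receptive field affordable, which is the usual motivation for pairing such attention with multiscale modules like spatial pyramid pooling.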