Funding: Supported in part by the National Natural Science Foundation of China under Grant 62071345.
Abstract: Self-supervised monocular depth estimation has emerged as a major research focus in recent years, primarily because it eliminates the dependence on ground-truth depth. However, the prevailing architectures in this domain suffer from inherent limitations: existing pose-network branches infer camera ego-motion exclusively under static-scene and Lambertian-surface assumptions. These assumptions are often violated in real-world scenarios by dynamic objects, non-Lambertian reflectance, and unstructured background elements, leading to pervasive artifacts such as depth discontinuities ("holes"), structural collapse, and ambiguous reconstruction. To address these challenges, we propose a novel framework that integrates scene dynamic pose estimation into the conventional self-supervised depth network, enhancing its ability to model complex scene dynamics. Our contributions are threefold: (1) a pixel-wise dynamic pose estimation module that jointly resolves the pose transformations of moving objects and localized scene perturbations; (2) a physically informed loss function that couples dynamic pose and depth predictions, designed to mitigate depth errors arising from high-speed distant objects and geometrically inconsistent motion profiles; (3) an efficient SE(3) transformation parameterization that streamlines network complexity and temporal pre-processing. Extensive experiments on the KITTI and NYU-V2 benchmarks show that our framework achieves state-of-the-art performance in both quantitative metrics and qualitative visual fidelity, significantly improving the robustness and generalization of monocular depth estimation under dynamic conditions.
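The abstract does not spell out its SE(3) parameterization; a common compact choice in pose networks is a 6-vector (axis-angle rotation plus translation) mapped to a 4x4 transform via the exponential map. The sketch below illustrates that standard construction in PyTorch; the function name and the simplification of taking the translation directly (rather than through the full V(θ) term), as many pose networks do, are our assumptions, not the authors' method.

```python
import torch

def se3_exp(xi: torch.Tensor) -> torch.Tensor:
    """Map a batch of 6-vectors (omega, t) to 4x4 SE(3) matrices.

    xi: (B, 6) tensor; xi[:, :3] is an axis-angle rotation, xi[:, 3:] a translation.
    Standard Rodrigues construction, shown only to illustrate the kind of
    compact pose parameterization the abstract refers to.
    """
    omega, t = xi[:, :3], xi[:, 3:]
    theta = omega.norm(dim=1, keepdim=True).clamp(min=1e-8)  # rotation angle
    k = omega / theta                                        # unit rotation axis
    K = torch.zeros(xi.shape[0], 3, 3, device=xi.device)     # skew matrix of k
    K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]
    K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
    s = torch.sin(theta)[..., None]
    c = torch.cos(theta)[..., None]
    R = torch.eye(3, device=xi.device) + s * K + (1 - c) * (K @ K)  # Rodrigues
    T = torch.eye(4, device=xi.device).repeat(xi.shape[0], 1, 1)
    T[:, :3, :3] = R
    T[:, :3, 3] = t   # translation taken directly, the usual pose-network shortcut
    return T
```

A network then only has to regress six numbers per frame (or per pixel, for the dynamic module), which is what makes such a parameterization light on both network complexity and pre-processing.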
Abstract: Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point-cloud consistency. However, these consistencies are very vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, making them insufficiently robust across diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and is used to align temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point-cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representational power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
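For the VDA loss, the abstract describes voxelizing each back-projected point cloud and aligning per-voxel point densities. A minimal sketch of that idea follows, assuming a cubic grid centered at the origin; the grid size, voxel size, and normalization are our placeholder choices, and the authors' exact formulation is in their released code.

```python
import torch

def voxel_density(points: torch.Tensor, voxel_size: float, grid: int) -> torch.Tensor:
    """Histogram an (N, 3) point cloud into a cubic voxel grid of side `grid`
    and return normalized per-voxel point densities."""
    idx = (points / voxel_size).long() + grid // 2   # shift so the origin is centered
    idx = idx.clamp(0, grid - 1)
    flat = (idx[:, 0] * grid + idx[:, 1]) * grid + idx[:, 2]
    density = torch.zeros(grid ** 3, device=points.device)
    density.scatter_add_(0, flat, torch.ones(flat.shape[0], device=points.device))
    return density / max(points.shape[0], 1)         # density, not raw counts

def vda_loss(cloud_ref: torch.Tensor, cloud_src: torch.Tensor,
             voxel_size: float = 0.5, grid: int = 64) -> torch.Tensor:
    """Align the two density volumes 'region-to-region' with an L1 penalty."""
    return (voxel_density(cloud_ref, voxel_size, grid)
            - voxel_density(cloud_src, voxel_size, grid)).abs().mean()
```

Because the loss compares aggregate densities per region rather than matching individual points, a few outlier points from occlusions or moving objects shift the histogram only slightly, which is the robustness argument the abstract makes.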
Abstract: Depth maps play a crucial role in practical applications such as computer vision, augmented reality, and autonomous driving. Obtaining clear and accurate depth information from video is a significant challenge in computer vision: existing monocular video depth estimation models tend to produce blurred or inaccurate depth in regions with object edges and low texture. To address this issue, we propose a monocular depth estimation architecture guided by semantic segmentation masks, which introduces semantic information into the model to correct ambiguous depth regions. We evaluate the proposed method, and experimental results show that it improves the accuracy of edge depth, demonstrating the effectiveness of our approach.
Abstract: Object detection in occluded environments remains a core challenge in computer vision (CV), especially in domains such as autonomous driving and robotics. While Convolutional Neural Network (CNN)-based two-dimensional (2D) and three-dimensional (3D) object detection methods have made significant progress, they often fall short under severe occlusion due to depth ambiguities in 2D imagery and the high cost and deployment limitations of 3D sensors such as Light Detection and Ranging (LiDAR). This paper presents a comparative review of recent 2D and 3D detection models, focusing on their occlusion-handling capabilities and the impact of sensor modalities such as stereo vision, Time-of-Flight (ToF) cameras, and LiDAR. In this context, we introduce FuDensityNet, our multimodal occlusion-aware detection framework that combines Red-Green-Blue (RGB) images and LiDAR data to enhance detection performance. As a forward-looking direction, we propose a monocular depth-estimation extension to FuDensityNet, aimed at replacing expensive 3D sensors with a more scalable CNN-based pipeline. Although this enhancement is not experimentally evaluated in this manuscript, we describe its conceptual design and potential for future implementation.
Abstract: Precise and robust three-dimensional object detection (3DOD) presents a promising opportunity in the field of mobile robot (MR) navigation. Monocular 3DOD techniques typically extend existing two-dimensional object detection (2DOD) frameworks to predict the three-dimensional bounding boxes (3DBBs) of objects captured in 2D RGB images. However, these methods often require multiple images, making them less feasible for many real-time scenarios. To address these challenges, the emergence of agile convolutional neural networks (CNNs) capable of inferring depth from a single image opens a new avenue for investigation. This paper proposes a novel ELDENet network designed to produce cost-effective 3D Bounding Box Estimation (3D-BBE) from a single image. The framework comprises PP-LCNet as the encoder and a fast convolutional decoder, and integrates a Squeeze-Exploit (SE) module that uses the Math Kernel Library for Deep Neural Networks (MKLDNN) optimizer to enhance convolutional efficiency and streamline model size during training. Meanwhile, the proposed multi-scale sub-pixel decoder generates high-quality depth maps while maintaining a compact structure. The generated depth maps provide a clear perspective with distance details of objects in the environment. These depth insights are combined with 2DOD for precise estimation of 3D bounding boxes, facilitating scene understanding and optimal route planning for mobile robots. Based on the estimated object center of the 3DBB, a Deep Reinforcement Learning (DRL)-based obstacle-avoidance strategy for MRs is developed. Experimental results demonstrate that our model achieves state-of-the-art performance across three datasets: NYU-V2, KITTI, and Cityscapes. Overall, this framework shows significant potential for adaptation in intelligent mechatronic systems, particularly in developing knowledge-driven systems for mobile robot navigation.
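The "multi-scale sub-pixel decoder" suggests pixel-shuffle upsampling, which trades transposed convolutions for a cheap channel-to-space rearrangement. Below is a generic sub-pixel upsampling block of the kind such a decoder would stack; the layer widths and activation are illustrative assumptions, not ELDENet's actual configuration.

```python
import torch.nn as nn

class SubPixelUp(nn.Module):
    """Conv to r*r times the target channels, then PixelShuffle rearranges
    those channels into an r-times larger spatial grid (sub-pixel upsampling).

    SubPixelUp(64, 32)(x): (B, 64, H, W) -> (B, 32, 2H, 2W)
    """
    def __init__(self, in_ch: int, out_ch: int, r: int = 2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * r * r, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(r)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))
```

Stacking such blocks at several scales keeps the decoder compact, since the expensive computation stays in channel space and upsampling itself is a free reshape.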
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61972298, the CAAI-Huawei MindSpore Open Fund, and the Xinjiang Bingtuan Science and Technology Program of China under Grant No. 2019BC008.
Abstract: Building on well-designed network architectures and objective functions, self-supervised monocular depth estimation has made great progress. However, lacking a specific mechanism that makes the network learn more about regions containing moving objects or occlusions, existing depth estimation methods tend to produce poor results there. We therefore propose an uncertainty quantification method that improves the performance of existing depth estimation networks without changing their architectures. Our method consists of uncertainty measurement, learning guidance by uncertainty, and adaptive final determination. First, with the Snapshot and Siam learning strategies, we measure the degree of uncertainty by calculating the variance across pre-converged epochs or twin networks during training. Second, we use the uncertainty to guide the network to strengthen learning in regions with higher uncertainty. Finally, we use the uncertainty to adaptively produce the final depth estimates, balancing accuracy and robustness. To demonstrate the effectiveness of our uncertainty quantification method, we apply it to two state-of-the-art models, Monodepth2 and Hints. Experimental results show that our method improves depth estimation performance on seven evaluation metrics compared with the two baseline models and exceeds an existing uncertainty method.
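The measurement step — variance across snapshot epochs or twin networks — reduces to a per-pixel variance over a stack of predictions. The sketch below shows that computation plus one plausible way to "strengthen learning" in uncertain regions by re-weighting a per-pixel loss; the weighting scheme is our illustrative assumption, not the paper's exact guidance rule.

```python
import torch

def snapshot_uncertainty(depth_stack: torch.Tensor) -> torch.Tensor:
    """Per-pixel uncertainty as the variance across K predictions.

    depth_stack: (K, B, 1, H, W) depth maps from pre-converged epochs
    (Snapshot strategy) or from twin networks (Siam strategy).
    """
    return depth_stack.var(dim=0)

def uncertainty_weighted_loss(per_pixel_loss: torch.Tensor,
                              uncertainty: torch.Tensor) -> torch.Tensor:
    """Up-weight the training loss where the model is uncertain."""
    w = 1.0 + uncertainty / (uncertainty.mean() + 1e-8)
    return (w.detach() * per_pixel_loss).mean()  # detach: weights guide, not train
```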
Funding: Supported in part by School Research Projects of Wuyi University (No. 5041700175).
Abstract: Monocular depth estimation is a basic task in computer vision, and its accuracy has improved tremendously over the past decade with the development of deep learning. However, blurry boundaries in the depth map remain a serious problem. Researchers have found that boundary blur is mainly caused by two factors. First, low-level features containing boundary and structure information may be lost in deep networks during the convolution process. Second, the model ignores the errors introduced by the boundary area during backpropagation, because the boundary occupies only a small portion of the whole image. Focusing on these factors, we propose two countermeasures to mitigate the boundary blur problem. First, we design a scene understanding module and a scale transform module to build a lightweight fused feature pyramid, which handles low-level feature loss effectively. Second, we propose a boundary-aware depth loss function that attends to the depth values at boundaries. Extensive experiments show that our method predicts depth maps with clearer boundaries, and its depth accuracy on NYU-Depth V2, SUN RGB-D, and iBims-1 is competitive.
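A boundary-aware loss counteracts the small pixel share of boundaries by weighting their errors more heavily. Here is a minimal sketch, assuming the boundary map comes from ground-truth depth gradients and the base error is L1 — both our assumptions, since the abstract does not give the formula.

```python
import torch
import torch.nn.functional as F

def boundary_aware_depth_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """L1 depth loss with extra weight on boundary pixels, so the few boundary
    pixels are not drowned out by the rest of the image during backpropagation."""
    dy = (gt[:, :, 1:, :] - gt[:, :, :-1, :]).abs()
    dx = (gt[:, :, :, 1:] - gt[:, :, :, :-1]).abs()
    edge = F.pad(dx, (0, 1, 0, 0)) + F.pad(dy, (0, 0, 0, 1))  # crude boundary map
    w = 1.0 + edge / (edge.mean() + 1e-8)                     # boost boundary weight
    return (w * (pred - gt).abs()).mean()
```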
Funding: This work was supported by the National Key Research and Development Plan (Project No. YS2018YFB1403703) and a research project of the Communication University of China (Project No. CUC200D058).
Abstract: Learning-based multi-task models have been widely used in various scene understanding tasks, where the tasks complement each other, e.g., prior semantic information helps us better infer depth. We boost unsupervised monocular depth estimation by using semantic segmentation as an auxiliary task. To address the lack of cross-domain datasets and the catastrophic forgetting encountered in multi-task training, we use an existing methodology to obtain redundant segmentation maps and build our cross-domain dataset, which not only provides a new way to conduct multi-task training but also lets us evaluate our results against other algorithms. In addition, to comprehensively use the features extracted by the two tasks in the early perception stage, we share weights in the network to fuse cross-domain features, and we introduce a novel multi-task loss function to further smooth the depth values. Extensive experiments on the KITTI and Cityscapes datasets show that our method achieves state-of-the-art performance in depth estimation, as well as improved semantic segmentation.
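The abstract does not define its multi-task loss; the standard way to "further smooth the depth values" in this literature is an edge-aware smoothness term that relaxes the depth-gradient penalty at image edges. The following sketch is that standard term, offered only as a reference point for what the loss likely builds on, not the paper's actual formulation.

```python
import torch

def edge_aware_smoothness(depth: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
    """Penalize depth gradients, but less so where the image itself has edges.

    depth: (B, 1, H, W); img: (B, 3, H, W).
    """
    d = depth / (depth.mean(dim=[2, 3], keepdim=True) + 1e-8)  # scale-normalize
    dx = (d[:, :, :, 1:] - d[:, :, :, :-1]).abs()
    dy = (d[:, :, 1:, :] - d[:, :, :-1, :]).abs()
    ix = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean(1, keepdim=True)
    iy = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean(1, keepdim=True)
    return (dx * torch.exp(-ix)).mean() + (dy * torch.exp(-iy)).mean()
```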
Funding: Supported by the National Natural Science Foundation of China under Grants 61872241 and 62077037, and the Shanghai Municipal Science and Technology Major Project under Grant 2021SHZDZX0102.
Abstract: Background: Monocular depth estimation aims to predict a dense depth map from a single RGB image and has important applications in 3D reconstruction, autonomous driving, and augmented reality. However, existing methods feed the original RGB image directly into the model to extract depth features, without avoiding the interference of depth-irrelevant information with depth-estimation accuracy, which leads to inferior performance. Methods: To remove the influence of depth-irrelevant information and improve depth-prediction accuracy, we propose RADepthNet, a novel reflectance-guided network that fuses boundary features. Specifically, our method predicts depth maps in three steps. (1) Intrinsic image decomposition: we propose a reflectance extraction module, consisting of an encoder-decoder structure, to extract the depth-related reflectance. Through an ablation study, we demonstrate that the module reduces the influence of illumination on depth estimation. (2) Boundary detection: a boundary extraction module, consisting of an encoder, a refinement block, and an upsample block, better predicts depth at object boundaries using gradient constraints. (3) Depth prediction: we use an encoder different from that in (2) to obtain depth features from the reflectance map and fuse boundary features to predict depth. In addition, we propose FIFADataset, a depth-estimation dataset for soccer scenarios. Results: Extensive experiments on a public dataset and our proposed FIFADataset show that our method achieves state-of-the-art performance.
Abstract: For the inaccurate depth prediction caused by image blur, low contrast, and color distortion in complex weather scenes, previous studies have used depth maps of standard scenes as prior information to estimate depth for such scenes. However, this approach suffers from low-precision priors. We therefore propose TalentDepth, a monocular depth estimation model based on a multi-scale attention mechanism, to handle complex weather scenes. First, a multi-scale attention mechanism is fused into the encoder, preserving per-channel information while reducing computational cost and improving the efficiency and capability of feature extraction. Second, to address unclear image depth, a Depth Region Refinement (DSR) module based on geometric consistency is proposed to filter out inaccurate pixels and improve the reliability of the depth information. Finally, complex samples generated by an image translation model are fed in, and the standard losses are computed on the corresponding original images to guide the model's self-supervised training. On the NuScenes, KITTI, and KITTI-C datasets, the proposed model improves both error and accuracy metrics over the baseline model.
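The DSR module's geometric-consistency filtering of "inaccurate pixels" is not detailed in the abstract; a common realization compares the predicted depth at each pixel against the depth of the same point warped from an adjacent view and drops pixels that disagree. A hedged sketch of that idea follows — the threshold and normalization are our placeholders.

```python
import torch

def geometry_consistency_mask(d_ref: torch.Tensor,
                              d_src_warped: torch.Tensor,
                              thresh: float = 0.15) -> torch.Tensor:
    """Keep pixels whose depths agree across views; filter out the rest.

    d_ref:        (B, 1, H, W) depth predicted in the reference view.
    d_src_warped: (B, 1, H, W) source-view depth warped into the reference view.
    """
    diff = (d_ref - d_src_warped).abs() / (d_ref + d_src_warped + 1e-8)
    return (diff < thresh).float()  # 1 = reliable pixel, 0 = filtered as inaccurate
```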
Abstract: Self-supervised monocular depth estimation has attracted wide attention from researchers at home and abroad. Existing deep-learning-based self-supervised methods mainly adopt an encoder-decoder structure. However, these methods downsample the input image during encoding, losing part of the image information, especially boundary information, which degrades the accuracy of the depth map. To address this problem, we propose a self-supervised monocular depth estimation method based on the Laplacian pyramid (LpDepth). Its core ideas are as follows. First, Laplacian residual maps enrich the encoded features, compensating for the feature information lost during downsampling. Second, max-pooling layers are used during downsampling to highlight and amplify feature information, making it easier for the encoder to extract the features the training model needs. Finally, residual modules alleviate overfitting and improve the decoder's use of features. The proposed method is tested on the KITTI and Make3D datasets and compared with existing classic methods, and the experimental results demonstrate its effectiveness.
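The first idea — enriching encoder features with Laplacian residuals — amounts to re-injecting the detail each downsampling step discards: residual_i = level_i − upsample(downsample(level_i)). A minimal sketch of building those residual maps follows; the pooling choice and level count are our assumptions.

```python
import torch
import torch.nn.functional as F

def laplacian_residuals(img: torch.Tensor, levels: int = 4):
    """Return the Laplacian residual map at each pyramid level, i.e., the
    high-frequency (boundary) detail lost when the image is downsampled."""
    residuals, cur = [], img
    for _ in range(levels):
        down = F.avg_pool2d(cur, kernel_size=2)
        up = F.interpolate(down, size=cur.shape[-2:], mode="bilinear",
                           align_corners=False)
        residuals.append(cur - up)  # detail that `down` no longer carries
        cur = down
    return residuals
```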
Funding: National Science Foundation (1808436, 1918074, 2306708, 2237142-CAREER) and U.S. Department of Energy (234402).
Abstract: Imaging of surface-enhanced Raman scattering (SERS) nanoparticles (NPs) has been intensively studied for cancer detection due to its high sensitivity, tolerance of low signal-to-noise ratios, and multiplexed detection capability. Furthermore, conjugating SERS NPs with various biomarkers is straightforward, which has led to numerous successful studies on cancer detection and diagnosis. However, Raman spectroscopy only provides spectral data from an imaging area, without co-registered anatomic context.
Funding: Partially supported by the Key Technological Innovation Projects of Hubei Province (2018AAA062), the National Natural Science Foundation of China (61972298), and the Wuhan University-Huawei GeoInformatics Innovation Lab.
Abstract: Self-supervised monocular depth estimation has been widely investigated and applied in previous works. However, existing methods suffer from texture copy, depth drift, and incomplete structure. It is difficult for ordinary CNNs to fully understand the relationship between an object and its surrounding environment, and it is hard to design a depth smoothness loss that balances smoothness and sharpness. To address these issues, we propose a coarse-to-fine method with a normalized convolutional block attention module (NCBAM). In the coarse estimation stage, we incorporate the NCBAM into the depth and pose networks to overcome the texture-copy and depth-drift problems. In the refinement stage, a new network refines the coarse depth under the guidance of the color image and produces a structure-preserving depth result. Our method produces results competitive with state-of-the-art methods, and comprehensive experiments prove the effectiveness of our two-stage method using the NCBAM.
Abstract: Objective: Estimating depth from a single image has become a research hotspot in computer vision. Existing methods often regress depth by increasing network complexity, which raises the training cost and time complexity. We therefore propose a multi-level perceptual conditional random field model for monocular depth estimation. Methods: An adaptive hybrid pyramid feature-fusion strategy captures short- and long-range dependencies between different positions in the image, effectively aggregating global and local context and transferring information efficiently. A conditional random field decoding mechanism finely captures spatial dependencies between pixels. A dynamically scaled attention mechanism enhances the perception of dependencies between image regions, and a bias-learning unit module keeps the network from falling into extreme values, ensuring model stability. For interactions between feature modalities, a hierarchy-aware adapter expands the feature-map dimensions to enhance spatial and channel interaction and improve the model's feature learning. Results: Ablation experiments on the NYU Depth v2 (New York University depth dataset version 2) dataset show that our network significantly improves the performance metrics: compared with state-of-the-art methods, the absolute relative error (Abs Rel) falls below 0.1, a 7.4% reduction, and the root mean square error (RMSE) drops by 5.4%. To verify practicality in real road environments, comparative experiments on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago) dataset outperform mainstream methods on all of the above metrics, with RMSE reduced by 3.1% and threshold accuracies (δ<1.25^2, δ<1.25^3) approaching 100%; generalization is further verified on the MatterPort3D dataset. Visualization results show that, in complex environments, our method better estimates depth in difficult regions. Conclusion: The multi-level feature extractor and hybrid pyramid feature-fusion strategy optimize information transfer between the encoder and decoder, and fully connected decoding yields pixel-level output, effectively improving monocular depth estimation accuracy.