Journal Articles
3 articles found
1. Point-PC: Point cloud completion guided by prior knowledge via causal inference
Authors: Xuesong Gao, Chuanqi Jiao, Ruidong Chen, Weijie Wang, Weizhi Nie. CAAI Transactions on Intelligence Technology, 2025, No. 4, pp. 1007-1018 (12 pages).
The goal of point cloud completion is to reconstruct raw scanned point clouds acquired from incomplete observations due to occlusion and restricted viewpoints. Numerous methods use a partial-to-complete framework, directly predicting missing components via global characteristics extracted from incomplete inputs. However, this makes detail recovery challenging, as global characteristics fail to provide complete missing component specifics. A new point cloud completion method named Point-PC is proposed. A memory network and a causal inference model are separately designed to introduce shape priors and select absent shape information as supplementary geometric factors for aiding completion. Concretely, a memory mechanism is proposed to store complete shape features and their associated shapes in a key-value format. The authors design a pre-training strategy that uses contrastive learning to map incomplete shape features into the complete shape feature domain, enabling retrieval of analogous shapes from incomplete inputs. In addition, the authors employ backdoor adjustment to eliminate confounders, which are shape prior components sharing identical semantic structures with incomplete inputs. Experiments conducted on three datasets show that our method achieves superior performance compared to state-of-the-art approaches. The code for Point-PC can be accessed at https://github.com/bizbard/Point-PC.git.
Keywords: causal inference; contrastive alignment; memory network; point cloud completion
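As a rough illustration of the key-value shape-prior retrieval described in the abstract, the sketch below stores complete-shape features as keys and their associated shapes as values, then retrieves the top-k most similar priors for an incomplete-shape feature. The class name `ShapeMemory`, the dimensions, and the use of cosine similarity are assumptions for illustration, not taken from the Point-PC code.

```python
# Hypothetical sketch of key-value shape-prior retrieval: complete shape features
# act as keys, the complete point clouds as values, and an incomplete-shape feature
# (assumed to be aligned with the key space by contrastive pre-training) retrieves
# the top-k nearest priors. Dimensions are illustrative only.
import torch
import torch.nn.functional as F

class ShapeMemory:
    def __init__(self, keys: torch.Tensor, values: torch.Tensor):
        # keys:   (M, D) features of complete shapes
        # values: (M, N, 3) the complete point clouds themselves
        self.keys = F.normalize(keys, dim=-1)
        self.values = values

    def retrieve(self, query: torch.Tensor, k: int = 3) -> torch.Tensor:
        # query: (B, D) features extracted from incomplete inputs
        query = F.normalize(query, dim=-1)
        sim = query @ self.keys.t()            # (B, M) cosine similarity
        topk = sim.topk(k, dim=-1).indices     # (B, k) indices of prior shapes
        return self.values[topk]               # (B, k, N, 3) retrieved shape priors

# Usage (random tensors stand in for learned features and stored shapes):
memory = ShapeMemory(torch.randn(1000, 256), torch.randn(1000, 2048, 3))
priors = memory.retrieve(torch.randn(4, 256), k=3)     # (4, 3, 2048, 3)
```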
2. GeeNet: robust and fast point cloud completion for ground elevation estimation towards autonomous vehicles (Cited by: 1)
Authors: Liwen LIU, Weidong YANG, Ben FEI. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, No. 7, pp. 938-950 (13 pages).
Ground elevation estimation is vital for numerous applications in autonomous vehicles and intelligent robotics, including three-dimensional object detection, navigable space detection, point cloud matching for localization, and registration for mapping. However, most works regard the ground as a plane without height information, which causes inaccurate manipulation in these applications. In this work, we propose GeeNet, a novel end-to-end, lightweight method that completes the ground in nearly real time and simultaneously estimates the ground elevation in a grid-based representation. GeeNet leverages the mixing of two- and three-dimensional convolutions to preserve a lightweight architecture to regress ground elevation information for each cell of the grid. For the first time, GeeNet has fulfilled ground elevation estimation from semantic scene completion. We use the SemanticKITTI and SemanticPOSS datasets to validate the proposed GeeNet, demonstrating the qualitative and quantitative performance of GeeNet on ground elevation estimation and semantic scene completion of the point cloud. Moreover, the cross-dataset generalization capability of GeeNet is experimentally proven. GeeNet achieves state-of-the-art performance in terms of point cloud completion and ground elevation estimation, with a runtime of 0.88 ms.
Keywords: point cloud completion; ground elevation estimation; real-time; autonomous vehicles
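A minimal sketch of the two-/three-dimensional convolution mixing the abstract mentions, assuming a voxelized occupancy grid as input: 3D convolutions extract volumetric features, the vertical axis is folded into channels, and 2D convolutions regress one elevation value per bird's-eye-view cell. The module name and layer sizes are illustrative assumptions, not GeeNet's actual architecture.

```python
# Illustrative 2D/3D convolution mixing for per-cell ground elevation regression.
# Layer widths and the occupancy-grid input are assumptions, not the paper's design.
import torch
import torch.nn as nn

class GroundElevationHead(nn.Module):
    def __init__(self, in_ch: int = 1, depth: int = 32):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(in_ch, 8, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(8, 8, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # After folding the vertical (depth) axis into channels, 2D convs predict
        # a single elevation value for each bird's-eye-view grid cell.
        self.conv2d = nn.Sequential(
            nn.Conv2d(8 * depth, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (B, 1, D, H, W) occupancy grid built from the input point cloud
        feat = self.conv3d(voxels)                 # (B, 8, D, H, W)
        b, c, d, h, w = feat.shape
        feat = feat.reshape(b, c * d, h, w)        # fold height into channels
        return self.conv2d(feat).squeeze(1)        # (B, H, W) elevation map

elev = GroundElevationHead()(torch.zeros(2, 1, 32, 128, 128))  # (2, 128, 128)
```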
3. HDR-Net-Fusion: Real-time 3D dynamic scene reconstruction with a hierarchical deep reinforcement network (Cited by: 1)
Authors: Hao-Xuan Song, Jiahui Huang, Yan-Pei Cao, Tai-Jiang Mu. Computational Visual Media (EI, CSCD), 2021, No. 4, pp. 419-435 (17 pages).
Reconstructing dynamic scenes with commodity depth cameras has many applications in computer graphics, computer vision, and robotics. However, due to the presence of noise and erroneous observations from data capturing devices and the inherently ill-posed nature of non-rigid registration with insufficient information, traditional approaches often produce low-quality geometry with holes, bumps, and misalignments. We propose a novel 3D dynamic reconstruction system, named HDR-Net-Fusion, which learns to simultaneously reconstruct and refine the geometry on the fly with a sparse embedded deformation graph of surfels, using a hierarchical deep reinforcement (HDR) network. The latter comprises two parts: a global HDR-Net which rapidly detects local regions with large geometric errors, and a local HDR-Net serving as a local patch refinement operator to promptly complete and enhance such regions. Training the global HDR-Net is formulated as a novel reinforcement learning problem to implicitly learn the region selection strategy with the goal of improving the overall reconstruction quality. The applicability and efficiency of our approach are demonstrated using a large-scale dynamic reconstruction dataset. Our method can reconstruct geometry with higher quality than traditional methods.
Keywords: dynamic 3D scene reconstruction; deep reinforcement learning; point cloud completion; deep neural networks
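The sketch below illustrates, under stated assumptions, the two-level idea in the abstract: a global policy scores candidate surfel regions and samples one with large geometric error, a local operator refines that patch, and the drop in reconstruction error acts as the reinforcement-learning reward. `GlobalRegionPolicy`, `local_refiner`, and `error_fn` are hypothetical stand-ins, not the authors' HDR-Net implementation.

```python
# Hypothetical policy-gradient view of region selection plus local refinement.
# error_fn is assumed to return a scalar tensor measuring geometric error.
import torch
import torch.nn as nn

class GlobalRegionPolicy(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, region_feats: torch.Tensor) -> torch.distributions.Categorical:
        # region_feats: (R, feat_dim) features of candidate regions
        logits = self.score(region_feats).squeeze(-1)      # (R,)
        return torch.distributions.Categorical(logits=logits)

def refinement_step(policy, local_refiner, region_feats, patches, error_fn):
    """One illustrative step: pick a region, refine it, reward = error reduction."""
    dist = policy(region_feats)
    idx = dist.sample()                                    # region chosen for refinement
    before = error_fn(patches[idx])
    refined = local_refiner(patches[idx])                  # local patch refinement operator
    reward = before - error_fn(refined)                    # quality improvement as reward
    loss = -dist.log_prob(idx) * reward.detach()           # REINFORCE-style objective
    return loss, refined
```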