Journal Articles
3 articles found
1. Weakly-Supervised Single-view Dense 3D Point Cloud Reconstruction via Differentiable Renderer (Cited by: 3)
Authors: Peng Jin, Shaoli Liu, Jianhua Liu, Hao Huang, Linlin Yang, Michael Weinmann, Reinhard Klein
Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 5, pp. 195-205 (11 pages)
In recent years, addressing ill-posed problems by leveraging prior knowledge contained in databases through learning techniques has gained much attention. In this paper, we focus on complete three-dimensional (3D) point cloud reconstruction based on a single red-green-blue (RGB) image, a task that cannot be approached using classical reconstruction techniques. For this purpose, we used an encoder-decoder framework to encode the RGB information in latent space and to predict the 3D structure of the considered object from different viewpoints. The individual predictions are combined to yield a common representation that is used in a module combining camera pose estimation and rendering, thereby achieving differentiability with respect to the imaging process and the camera pose, and enabling optimization of the two-dimensional prediction error for novel viewpoints. Thus, our method allows end-to-end training and does not require supervision based on additional ground-truth (GT) mask annotations or ground-truth camera pose annotations. Our evaluation on synthetic and real-world data demonstrates the robustness of our approach to appearance changes and self-occlusions, outperforming current state-of-the-art methods in terms of accuracy, density, and model completeness.
Keywords: Point cloud reconstruction; Differentiable renderer; Neural networks; Single-view configuration
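The abstract above rests on one core idea: projecting the predicted 3D points through a camera model differentiably, so a 2D prediction error at novel viewpoints can be optimized. A minimal sketch of that reprojection loss (shapes, names, and the toy pinhole camera are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def project_points(points, R, t, f=1.0):
    """Pinhole projection of an (N, 3) point cloud under camera pose (R, t)."""
    cam = points @ R.T + t               # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide to 2D

def reprojection_loss(points, R, t, target_2d):
    """Mean squared 2D prediction error for a (novel) viewpoint."""
    pred = project_points(points, R, t)
    return float(np.mean((pred - target_2d) ** 2))

# Toy example: identity pose; points on the z = 2 plane project to (x/2, y/2).
pts = np.array([[0.0, 0.0, 2.0], [2.0, 2.0, 2.0]])
R, t = np.eye(3), np.zeros(3)
proj = project_points(pts, R, t)
```

Because every step is a smooth tensor operation, gradients of the loss flow back to both the point coordinates and the camera pose, which is what makes the end-to-end training described above possible.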
2. Hair-GAN: Recovering 3D hair structure from a single image using generative adversarial networks (Cited by: 2)
Authors: Meng Zhang, Youyi Zheng
Visual Informatics (EI), 2019, Issue 2, pp. 102-112 (11 pages)
We introduce Hair-GAN, an architecture of generative adversarial networks, to recover the 3D hair structure from a single image. The goal of our networks is to build a parametric transformation from 2D hair maps to 3D hair structure. The 3D hair structure is represented as a 3D volumetric field which encodes both the occupancy and the orientation information of the hair strands. Given a single hair image, we first align it with a bust model and extract a set of 2D maps encoding the hair orientation information in 2D, along with the bust depth map, to feed into our Hair-GAN. With our generator network, we compute the 3D volumetric field as the structure guidance for the final hair synthesis. The modeling results not only resemble the hair in the input image but also possess many vivid details in other views. The efficacy of our method is demonstrated on a variety of hairstyles and through comparisons with prior art.
Keywords: Single-view hair modeling; 3D volumetric structure; Deep learning; Generative adversarial networks
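The volumetric representation described above stores, per voxel, an occupancy value plus a hair-strand orientation vector. A minimal sketch of such a field as a 4-channel grid (the layout and helper names are assumptions for illustration, not Hair-GAN's actual data format):

```python
import numpy as np

def make_hair_field(res=8):
    """Volumetric hair field: per voxel, channel 0 = occupancy in [0, 1],
    channels 1-3 = a unit 3D strand orientation (zero where unoccupied)."""
    return np.zeros((res, res, res, 4))

def set_strand_voxel(field, idx, direction):
    """Mark one voxel as occupied with a normalized strand direction."""
    d = np.asarray(direction, dtype=float)
    field[idx + (0,)] = 1.0                 # occupancy channel
    field[idx][1:] = d / np.linalg.norm(d)  # unit orientation channels
    return field

# Toy usage: one occupied voxel whose strand points along +z.
field = make_hair_field()
set_strand_voxel(field, (1, 2, 3), [0.0, 0.0, 2.0])
```

A generator network would predict such a grid directly; downstream strand synthesis can then grow hair strands by following the orientation channels through occupied voxels.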
3. MDISN: Learning multiscale deformed implicit fields from single images
Authors: Yujie Wang, Yixin Zhuang, Yunzhe Liu, Baoquan Chen
Visual Informatics (EI), 2022, Issue 2, pp. 41-49 (9 pages)
We present a multiscale deformed implicit surface network (MDISN) to reconstruct 3D objects from single images by adapting the implicit surface of the target object to the input image, from coarse to fine. The basic idea is to optimize the implicit surface according to the change of consecutive feature maps from the input image. With multi-resolution feature maps, the implicit field is refined progressively, such that lower resolutions outline the main object components and higher resolutions reveal fine-grained geometric details. To better explore the changes in feature maps, we devise a simple field deformation module that receives two consecutive feature maps to refine the implicit field with finer geometric details. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method compared to state-of-the-art methods.
Keywords: Single-view 3D reconstruction; Implicit neural representation; Multiscale deformation
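The coarse-to-fine refinement described above can be pictured as an implicit field whose values are corrected additively at each scale. A toy sketch, assuming a signed-distance field and hand-supplied per-scale corrections in place of the learned deformation module:

```python
import numpy as np

def coarse_sdf(p):
    """Coarse implicit field: signed distance of (N, 3) points to a unit sphere."""
    return np.linalg.norm(p, axis=-1) - 1.0

def refine(sdf_vals, deltas):
    """Progressively deform the field: each scale contributes a correction
    (in MDISN, predicted from that scale's feature map; hard-coded here)."""
    out = sdf_vals.copy()
    for d in deltas:  # applied coarse -> fine
        out = out + d
    return out

# Toy usage: one outside point, one inside point, refined over two scales.
pts = np.array([[2.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
base = coarse_sdf(pts)
refined = refine(base, [np.array([-0.1, 0.1]), np.array([0.05, 0.0])])
```

The coarse field fixes the overall shape; later scales only nudge values near the surface, which is why lower resolutions capture the main components and higher resolutions the fine details.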