Journal Articles
2 articles found
1. NeuS-PIR: Learning relightable neural surface using pre-integrated rendering
Authors: Shi Mao, Chenming Wu, Zhelun Shen, Yifan Wang, Dayan Wu, Liangjun Zhang. Computational Visual Media, 2025, No. 4, pp. 727-744 (18 pages).
In this paper, we propose NeuS-PIR, a novel approach for learning relightable neural surfaces using pre-integrated rendering from multi-view image observations. Unlike traditional methods based on NeRFs or discrete mesh representations, our approach employs an implicit neural surface representation to reconstruct high-quality geometry. This representation enables the factorization of the radiance field into two components: a spatially varying material field and an all-frequency lighting model. By jointly optimizing this factorization with a differentiable pre-integrated rendering framework and material-encoding regularization, our method effectively addresses the ambiguity in geometry reconstruction, leading to improved disentanglement and refinement of scene properties. Furthermore, we introduce a technique to distill indirect illumination fields, capturing complex lighting effects such as inter-reflections. As a result, NeuS-PIR enables advanced applications like relighting, which can be seamlessly integrated into modern graphics engines. Extensive qualitative and quantitative experiments on both synthetic and real datasets demonstrate that NeuS-PIR outperforms existing methods across various tasks. Source code is available at https://github.com/Sheldonmao/NeuSPIR.
Keywords: inverse rendering, pre-integrated rendering, neural implicit representation
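For readers unfamiliar with pre-integrated (split-sum) rendering, the factorization described in the abstract, a spatially varying material field shaded against pre-convolved lighting, roughly corresponds to the sketch below. The function name, the material parameterization (albedo, roughness, metallic), and the `prefiltered_env` / `env_brdf_lut` lookups are illustrative assumptions, not code from the paper or its repository.

```python
import numpy as np

def pre_integrated_shading(normal, view_dir, albedo, roughness, metallic,
                           prefiltered_env, env_brdf_lut):
    """Split-sum (pre-integrated) shading sketch.

    prefiltered_env(direction, roughness) and env_brdf_lut(n_dot_v, roughness)
    are assumed callables standing in for a pre-convolved environment map and
    a 2D BRDF integration table; they are hypothetical, not the paper's API.
    """
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    n_dot_v = np.clip(n @ v, 1e-4, 1.0)

    # Diffuse term: irradiance lookup (fully rough prefilter) times diffuse albedo.
    k_d = albedo * (1.0 - metallic)
    diffuse = k_d * prefiltered_env(n, 1.0)

    # Specular term: prefiltered radiance along the reflection direction,
    # scaled by the pre-integrated BRDF response (scale/bias applied to F0).
    r = 2.0 * n_dot_v * n - v
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic
    scale, bias = env_brdf_lut(n_dot_v, roughness)
    specular = prefiltered_env(r, roughness) * (f0 * scale + bias)

    return diffuse + specular

# Purely illustrative call with constant stand-in lookups:
# rgb = pre_integrated_shading(np.array([0., 0., 1.]), np.array([0., 0., 1.]),
#                              albedo=np.array([0.8, 0.6, 0.5]), roughness=0.4,
#                              metallic=0.0,
#                              prefiltered_env=lambda d, r: np.ones(3),
#                              env_brdf_lut=lambda nv, r: (1.0, 0.0))
```

In an inverse-rendering setting like the one the abstract describes, the material inputs would come from a spatial network queried at surface points, while the pre-integrated lighting terms are optimized jointly; the sketch only shows the forward shading step.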
2. Statistical learning based facial animation
Authors: Shibiao XU, Guanghui MA, Weiliang MENG, Xiaopeng ZHANG. Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2013, No. 7, pp. 542-550 (9 pages).
To synthesize real-time and realistic facial animation, we present an effective algorithm which combines image-based and geometry-based methods for facial animation simulation. Considering the numerous motion units in the expression coding system, we present a novel simplified motion unit based on basic facial expressions, and construct the corresponding basic action for a head model. As image features are difficult to obtain with the performance-driven method, we develop an automatic image feature recognition method based on statistical learning, and an expression image semi-automatic labeling method with rotation-invariant face detection, which can improve the accuracy and efficiency of expression feature identification and training. After facial animation redirection, each basic action weight needs to be computed and mapped automatically. We apply the blend shape method to construct and train the corresponding expression database according to each basic action, and adopt the least squares method to compute the corresponding control parameters for facial animation. Moreover, we pre-integrate the diffuse and specular light distributions using a physically based method, to improve the plausibility and efficiency of facial rendering. Our work simplifies the facial motion unit, optimizes the statistical training and recognition processes for facial animation, solves for the expression parameters, and simulates the subsurface scattering effect in real time. Experimental results indicate that our method is effective and efficient, and suitable for computer animation and interactive applications.
Keywords: facial animation, motion unit, statistical learning, realistic rendering, pre-integration
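The least-squares step mentioned in the abstract, computing blend-shape control parameters from tracked features, can be set up as an ordinary linear least-squares problem. The sketch below is an illustration under assumed array shapes, naming, and weight clipping, not the authors' implementation.

```python
import numpy as np

def solve_blendshape_weights(neutral, blendshapes, target, clip=True):
    """Least-squares fit of blend-shape weights (a sketch, not the paper's code).

    neutral:      (V, 3) neutral-face vertex or feature positions
    blendshapes:  (K, V, 3) per-shape displacement deltas from the neutral face
    target:       (V, 3) observed (performance-driven) positions
    Returns the K weights minimizing ||sum_k w_k * B_k - (target - neutral)||^2.
    """
    K = blendshapes.shape[0]
    B = blendshapes.reshape(K, -1).T          # (3V, K) basis matrix
    d = (target - neutral).reshape(-1)        # (3V,) displacement to explain
    w, *_ = np.linalg.lstsq(B, d, rcond=None) # unconstrained least-squares solve
    if clip:
        w = np.clip(w, 0.0, 1.0)              # keep weights in a plausible range
    return w
```

A constrained solver (e.g., non-negative least squares) is often used in practice to keep weights physically meaningful; the clipping above is only a simple stand-in for that choice.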