Abstract: Face reenactment is a key research direction in controllable face generation. Its goal is to animate a source face image with a given driving face image or video frame, synthesizing the facial expression and pose of the driving face in an accurate and controllable way. The generated result must preserve the identity of the source face while closely matching the expression and pose of the driving face. Because single-sample face reenactment relies on only a single-view 2D face image, the available facial information is incomplete, and existing methods struggle to maintain consistent identity, expression, and pose when generating faces under large pose changes. To address this problem, a 3D Explainable Neural Rendering based Single-sample face reenactment (3D-ENS) method is proposed. The method explicitly models a fixed 3D face structure and its texture inside the neural network and reuses them throughout the reenactment video generation stage, ensuring identity consistency and stable expression and pose changes in the reenacted results. On this basis, a neural texture completion network is built to reconstruct high-quality facial texture through multi-scale feature learning, and a background motion estimation network is proposed to predict the background of the driven face image and fuse it with the completed facial Neural Texture Rendering (NTR) result. A keypoint detection model provides facial consistency constraints to further improve appearance consistency. Experiments on mainstream benchmark datasets and real-world data show that the proposed method achieves good identity preservation, handles complex scenarios with large facial pose changes effectively, and offers a new solution to the face reenactment task.
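To make the pipeline concrete, the following is a minimal sketch of the texture-complete / render / background-blend flow the abstract describes. All module names (TextureCompletionNet, BackgroundMotionNet, reenact) and the stand-in renderer are hypothetical simplifications, not the paper's implementation; the actual 3D face model, neural-texture renderer, keypoint constraints, and training losses are omitted.

```python
# Hedged sketch of a 3D-ENS-style generation step (hypothetical modules).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureCompletionNet(nn.Module):
    """Multi-scale texture completion: coarse pass at half resolution,
    then refinement at full resolution (greatly simplified)."""
    def __init__(self, ch=32):
        super().__init__()
        self.coarse = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(ch, 3, 3, padding=1))
        self.fine = nn.Sequential(nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, partial_tex):
        low = F.interpolate(partial_tex, scale_factor=0.5, mode="bilinear",
                            align_corners=False)
        coarse = F.interpolate(self.coarse(low), size=partial_tex.shape[-2:],
                               mode="bilinear", align_corners=False)
        return self.fine(torch.cat([partial_tex, coarse], dim=1))


class BackgroundMotionNet(nn.Module):
    """Predicts the driven frame's background plus a soft face mask used
    to composite it with the rendered face (simplified)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(ch, 4, 3, padding=1))

    def forward(self, source_img):
        out = self.net(source_img)
        return out[:, :3], torch.sigmoid(out[:, 3:4])  # background, mask


def reenact(source_img, partial_tex, render_face):
    """One reenacted frame: complete the texture, render the fixed 3D face
    under the driving pose (render_face is a stand-in for the neural-texture
    renderer), then blend with the predicted background."""
    completed_tex = TextureCompletionNet()(partial_tex)
    face = render_face(completed_tex)            # NTR result (placeholder)
    background, mask = BackgroundMotionNet()(source_img)
    return mask * face + (1.0 - mask) * background


if __name__ == "__main__":
    src = torch.rand(1, 3, 256, 256)
    tex = torch.rand(1, 3, 256, 256)
    # Identity renderer so the sketch runs end to end.
    frame = reenact(src, tex, render_face=lambda t: t)
    print(frame.shape)  # torch.Size([1, 3, 256, 256])
```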
Funding: Supported by the Open Research Fund of the Key Laboratory of Space Utilization, Chinese Academy of Sciences (No. LSU-YKZX-2017-02).
Abstract: This paper presents a new solution to haptic-based teleoperation for controlling a large-sized slave robot for space exploration. The solution includes two specially designed haptic joysticks, a hybrid master-slave motion mapping method, and a haptic feedback model that renders both the operating resistance and the interactive feedback on the slave side. Two devices, based on the 3R and DELTA mechanisms respectively, are developed so that a user can control the position and orientation of the large-sized slave robot with both hands. The hybrid motion mapping method combines rate control with variable-scaled position mapping to realize accurate and efficient master-slave control. Haptic feedback for these two mapping modes is designed with an emphasis on ergonomics to improve the immersion of haptic-based teleoperation. A stiffness estimation method calculates the contact stiffness on the slave side and stably renders the contact force, computed with a traditional spring-damping model, to the user on the master side. Experiments using virtual environments to simulate the slave side validate the effectiveness and efficiency of the proposed solution.
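As an illustration of the two control ideas in the abstract, the following is a minimal sketch of a hybrid motion mapping (position mapping inside the joystick workspace, rate control near its boundary) and a spring-damping contact force. The gains, workspace radius, damping value, and stiffness input are assumptions for demonstration, not values or code from the paper.

```python
# Hedged sketch of hybrid master-slave motion mapping and spring-damping
# force feedback (all numeric parameters are illustrative assumptions).
import numpy as np


def hybrid_motion_mapping(master_pos, master_vel, workspace_radius=0.1,
                          position_scale=5.0, rate_gain=0.5):
    """Map joystick motion to a slave command.

    Inside the joystick workspace, use scaled position mapping; when the
    stick is pushed past the boundary, switch to rate control so the slave
    keeps moving while the stick is held against the edge.
    """
    r = np.linalg.norm(master_pos)
    if r < workspace_radius:
        # Position mode: slave displacement proportional to master motion.
        return {"mode": "position", "slave_delta": position_scale * master_vel}
    # Rate mode: slave velocity proportional to boundary penetration,
    # directed along the push direction.
    direction = master_pos / (r + 1e-9)
    return {"mode": "rate",
            "slave_velocity": rate_gain * (r - workspace_radius) * direction}


def contact_force(penetration, penetration_rate, stiffness, damping=2.0):
    """Spring-damping contact model F = k*x + b*x_dot, with the stiffness k
    supplied by a slave-side stiffness estimator."""
    if penetration <= 0.0:
        return 0.0
    return stiffness * penetration + damping * penetration_rate


if __name__ == "__main__":
    cmd = hybrid_motion_mapping(np.array([0.12, 0.0, 0.0]),
                                np.array([0.01, 0.0, 0.0]))
    print(cmd["mode"])                      # rate
    print(contact_force(0.002, 0.01, 800))  # 1.62 (k*x + b*x_dot)
```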