Journal Articles
2 articles found
1. Learning monocular face reconstruction from in the wild images using rotation cycle consistency
Authors: Xinrong HU, Kaifan YANG, Ruiqi LUO, Tao PENG, Junping LIU. Virtual Reality & Intelligent Hardware (虚拟现实与智能硬件), 2025, No. 4, pp. 379-392 (14 pages)
With the popularity of digital humans, monocular three-dimensional (3D) face reconstruction is widely used in fields such as animation and face recognition. Although current methods trained on single-view image sets perform well on monocular 3D face reconstruction tasks, they tend to rely on the constraints of a prior model or on the appearance conditions of the input images, fundamentally because no effective method has been proposed to reduce the effects of two-dimensional (2D) ambiguity. To solve this problem, we developed an unsupervised training framework for monocular 3D face reconstruction using rotation cycle consistency. Specifically, to learn more accurate facial information, we first used an autoencoder to factor the input images and applied these factors to generate normalized frontal views. We then passed the results through a differentiable renderer, using rotation consistency to continuously refine the estimates. Our method provides implicit multi-view consistency constraints on the pose and depth estimates of the input face, and its performance is accurate and robust under large variations in expression and pose. In benchmark tests, our method performed more stably and realistically than other methods for 3D face reconstruction from monocular 2D images.
Keywords: 3D face reconstruction; view synthesis; rotation cycle consistency
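The abstract's core idea is that rotating an estimated face to a normalized frontal view and back should reproduce the input, which penalizes wrong pose and depth estimates. A minimal numpy sketch of this rotation-cycle-consistency idea, using a plain yaw rotation of 3D points as a stand-in for the paper's differentiable renderer (all function names here are illustrative, not the authors' code):

```python
import numpy as np

def rotate_yaw(points, angle):
    """Rotate an (N, 3) array of 3D points about the y-axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])
    return points @ R.T

def cycle_loss(frontal, observed, est_angle):
    """Map the observed view back to frontal using the estimated pose.

    If the pose estimate matches the true rotation, the cycle closes and
    the residual is ~0; a wrong estimate leaves a measurable mismatch.
    """
    back = rotate_yaw(observed, -est_angle)
    return float(np.mean(np.abs(back - frontal)))
```

In the paper this comparison would run through a differentiable renderer on images rather than raw point sets, so the loss can be backpropagated to the autoencoder's pose and depth factors; the geometric principle is the same.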
2. Proxy-Based Embedding Alignment for RGB-Infrared Person Re-Identification
Authors: Zhaopeng Dou, Yifan Sun, Yali Li, Shengjin Wang. Tsinghua Science and Technology, 2025, No. 3, pp. 1112-1124 (13 pages)
RGB-Infrared person re-identification (re-ID) aims to match RGB and infrared (IR) images of the same person. However, the modality discrepancy between RGB and IR images poses a significant challenge for re-ID. To address this issue, this paper proposes a Proxy-based Embedding Alignment (PEA) method that aligns the RGB and IR modalities in the embedding space. PEA introduces modality-specific identity proxies and leverages sample-to-proxy relations to learn the model. Specifically, PEA performs three types of alignment: intra-modality alignment, inter-modality alignment, and cycle alignment. Intra-modality alignment aligns sample features with proxies of the same identity within a modality. Inter-modality alignment aligns sample features with proxies of the same identity across modalities. Cycle alignment requires that a proxy be aligned with itself after tracing it along a cross-modality cycle (e.g., IR→RGB→IR). By integrating these alignments into training, PEA effectively mitigates the impact of modality discrepancy and learns discriminative features across modalities. Extensive experiments on several RGB-IR re-ID datasets show that PEA outperforms current state-of-the-art methods. Notably, on the SYSU-MM01 dataset, PEA achieves 71.0% mAP under the multi-shot setting of the indoor-search protocol, surpassing the best-performing method by 7.2%.
Keywords: cross-modality person re-identification; feature alignment; cycle consistency; metric learning
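The three alignments in the abstract can be sketched with cosine similarities between sample features and per-identity, per-modality proxy vectors. This is a hedged illustration of the idea only; the names, loss forms, and the nearest-proxy formulation of cycle alignment are assumptions, not the authors' implementation:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def intra_alignment(feat, own_proxy):
    """Pull a sample toward its identity's proxy in the SAME modality."""
    return 1.0 - cosine(feat, own_proxy)

def inter_alignment(feat, cross_proxy):
    """Pull a sample toward its identity's proxy in the OTHER modality."""
    return 1.0 - cosine(feat, cross_proxy)

def cycle_alignment(ir_proxies, rgb_proxies):
    """Fraction of proxies that fail the IR -> RGB -> IR cycle.

    An IR proxy mapped to its nearest RGB proxy, then back to the nearest
    IR proxy, should return to itself when the modalities are aligned.
    """
    def nearest(x, bank):
        sims = bank @ x / (np.linalg.norm(bank, axis=1) * np.linalg.norm(x))
        return int(np.argmax(sims))
    violations = 0
    for i, p in enumerate(ir_proxies):
        j = nearest(p, rgb_proxies)
        violations += int(nearest(rgb_proxies[j], ir_proxies) != i)
    return violations / len(ir_proxies)
```

In training, the intra- and inter-modality terms would be minimized per batch while the proxies are learned jointly with the encoder; the cycle check goes to zero once same-identity proxies are each other's mutual nearest neighbors across modalities.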