Abstract: Lip synchronization is a core technology for enabling natural interaction with digital virtual humans. However, it faces challenges such as insufficient dynamic correspondence between speech and lip movements and inadequate modeling of image details. To address these limitations, this study proposes a comprehensively optimized lip synchronization framework that extends the Wav2Lip architecture. First, building on the Wav2Lip model, a facial region extraction strategy based on facial keypoints was designed, which effectively improves the robustness of facial alignment during lip synchronization for digital virtual humans. Second, a cross-modal attention fusion module between visual and speech features was introduced to strengthen cross-modal information fusion, and a dynamic receptive field convolution module was added to the generation branch to improve modeling of the lip region. Finally, experiments were conducted on the VFHQ dataset: the proposed method was compared with the Wav2Lip, VideoReTalking, and DINet models, with performance evaluated on three metrics (LSE-C, CSIM, and FID). Experimental results show that the proposed method achieves significant improvements in both synchronization accuracy and image fidelity, providing an efficient and practical solution for lip synthesis in digital virtual humans.
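To make the keypoint-based face region extraction concrete, the sketch below shows one minimal way such a strategy could work: 2D landmarks from any facial keypoint detector define a margin-padded bounding box that is then cropped from the frame. The function name, margin value, and landmark source are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical keypoint-driven face crop: given 2D landmarks (N, 2) from
# any detector, compute a margin-padded bounding box and crop the frame.
import numpy as np

def keypoint_crop(frame: np.ndarray, landmarks: np.ndarray, margin: float = 0.25):
    """Crop the face region from `frame` using 2D landmarks; margin is assumed."""
    x0, y0 = landmarks.min(axis=0)          # tightest box around the keypoints
    x1, y1 = landmarks.max(axis=0)
    pad_x, pad_y = (x1 - x0) * margin, (y1 - y0) * margin
    h, w = frame.shape[:2]
    left, top = max(int(x0 - pad_x), 0), max(int(y0 - pad_y), 0)
    right, bottom = min(int(x1 + pad_x), w), min(int(y1 + pad_y), h)
    return frame[top:bottom, left:right]

# Minimal usage check with dummy data:
frame = np.zeros((256, 256, 3), dtype=np.uint8)
pts = np.array([[80.0, 90.0], [170.0, 95.0], [125.0, 180.0]])
print(keypoint_crop(frame, pts).shape)  # (135, 135, 3)
```

Anchoring the crop to detected keypoints rather than a fixed box is what makes the alignment robust to head pose changes across frames.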
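The cross-modal attention fusion module can likewise be sketched. The PyTorch version below is a generic attention-based fusion, assuming visual tokens attend to audio tokens (queries from the visual stream, keys/values from the speech stream) with a residual connection; the class name, feature dimension, and head count are assumptions for exposition, and the paper's actual module may differ.

```python
# Illustrative cross-modal attention fusion between visual and speech features.
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Fuses a visual feature map with an audio embedding via attention."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual: (B, C, H, W) feature map; audio: (B, T, C) embedding sequence.
        b, c, h, w = visual.shape
        v_tokens = visual.flatten(2).transpose(1, 2)           # (B, H*W, C)
        attended, _ = self.attn(query=v_tokens, key=audio, value=audio)
        fused = self.norm(v_tokens + attended)                 # residual fusion
        return fused.transpose(1, 2).reshape(b, c, h, w)       # back to (B, C, H, W)

# Minimal usage check with random tensors:
fusion = CrossModalAttentionFusion(dim=256, num_heads=4)
out = fusion(torch.randn(2, 256, 12, 12), torch.randn(2, 16, 256))
print(out.shape)  # torch.Size([2, 256, 12, 12])
```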
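One plausible reading of the dynamic receptive field convolution is a selective-kernel-style layer: parallel dilated convolutions with different receptive fields whose outputs are mixed by input-dependent softmax gates. The sketch below follows that reading under assumed channel counts and dilation rates; it is not the authors' code.

```python
# Illustrative "dynamic receptive field" convolution: parallel dilated
# branches mixed by per-branch gates predicted from the input itself.
import torch
import torch.nn as nn

class DynamicReceptiveFieldConv(nn.Module):
    def __init__(self, channels: int = 256, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        # Tiny gating head: global pooling -> one logit per branch.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        weights = torch.softmax(self.gate(x), dim=1)               # (B, K, 1, 1)
        weights = weights.unsqueeze(2)                             # (B, K, 1, 1, 1)
        return (feats * weights).sum(dim=1)                        # (B, C, H, W)

# Minimal usage check:
drf = DynamicReceptiveFieldConv(channels=64)
print(drf(torch.randn(2, 64, 24, 24)).shape)  # torch.Size([2, 64, 24, 24])
```

Letting the gates depend on the input means the effective receptive field can adapt per frame, which matches the abstract's stated goal of better modeling the lip region.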