Abstract
To address the problem that traditional audio-visual lip-motion analysis models tend to ignore inter-frame temporal variation of lip motion, which degrades consistency-judgment results, a consistency-decision method based on a shift-invariant learned dictionary is proposed. The method introduces shift-invariant sparse representation into speech/lip-motion consistency analysis: a spatio-temporally shift-invariant audio-visual dictionary is trained with a joint audio-visual dictionary-learning algorithm, and the sparse-coding stage of the learning algorithm is improved with a new data-mapping scheme. The joint audio-visual atoms in the dictionary serve as templates describing the synchronized variation of audio and lip shape when different syllables or words are pronounced, and a speech/lip-motion consistency scoring criterion is then formulated from these templates. Experiments on four types of audio-visually inconsistent data show that, compared with traditional statistical methods, the overall equal error rate (EER) drops on average from 23.6% to 11.3% for few-syllable corpora, and from 22.1% to 15.9% for multi-syllable sentences.
To address the tendency of traditional audio-visual speech synchrony models to ignore frame-to-frame dynamic lip-motion information, a novel method based on a shift-invariant learned dictionary is presented. In this method, sparse representation with a shift-invariant dictionary is introduced to analyze the bimodal structure of articulation. First, a dictionary is learned with an audio-visual coherence dictionary-learning algorithm, whose sparse-coding stage is modified with a new data-projection step. Second, the dynamic correlation between voice and lip motion for each syllable or word is represented as a pattern by an audio-visual coherence atom. Finally, an audio-visual synchronization score-measuring scheme is proposed based on these utterance patterns. Experiments on four kinds of inconsistent data demonstrate the good performance of the method: for few-syllable corpora, the equal error rate (EER) is reduced from 23.6% to 11.3%, and for multi-syllable sentences, from 22.1% to 15.9%.
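The core idea of the shift-invariant matching described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the feature streams, the joint atom, and the scoring rule below are all hypothetical stand-ins (real systems would use MFCC and lip-shape features and a learned dictionary); the sketch only shows how max-over-shifts matching makes an atom's response independent of temporal position, and how joint activation offsets can feed a consistency score.

```python
# Illustrative sketch (NOT the paper's implementation): scoring audio-visual
# consistency with one shift-invariant joint atom. Streams and atom parts are
# plain 1-D lists of floats standing in for per-frame feature magnitudes.

def correlate_at(signal, atom, t):
    """Inner product of `atom` with `signal` starting at offset t."""
    return sum(a * s for a, s in zip(atom, signal[t:]))

def best_shift(signal, atom):
    """Return (offset, response) maximizing the atom's response.
    Taking the max over all temporal shifts is what makes the
    matching shift-invariant."""
    shifts = range(len(signal) - len(atom) + 1)
    return max(((t, correlate_at(signal, atom, t)) for t in shifts),
               key=lambda p: p[1])

def sync_score(audio, video, av_atom):
    """A joint audio-visual atom is a pair (audio part, video part).
    Consistent streams should activate both parts at the SAME shift,
    so misaligned activations are penalized (hypothetical rule)."""
    a_atom, v_atom = av_atom
    ta, ra = best_shift(audio, a_atom)
    tv, rv = best_shift(video, v_atom)
    return (ra + rv) / (1.0 + abs(ta - tv))
```

For example, with the toy atom `([1, 1], [1, 1])`, an audio stream `[0, 0, 1, 1, 0]` paired with a synchronized video stream `[0, 0, 1, 1, 0]` scores higher than the same audio paired with a shifted video `[1, 1, 0, 0, 0]`, since the atom's two parts activate at different offsets in the second case.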
Source
《华中科技大学学报(自然科学版)》
EI
CAS
CSCD
Peking University Core Journals (北大核心)
2015, No. 10, pp. 69-74 (6 pages)
Journal of Huazhong University of Science and Technology(Natural Science Edition)
Funding
National Natural Science Foundation of China (61401161, 61571192)
Fundamental Research Funds for the Central Universities (D2154950)
Keywords
audio-visual processing
sparse representation
shift-invariance
consistency analysis
dictionary learning