Abstract
Static expression recognition extracts features from, and is trained on, a single image; compared with dynamic expression recognition it therefore lacks the temporal dynamics of expression change. To exploit the temporal characteristics of an expression, a dynamic expression recognition method based on deep convolutional networks and a temporal fusion strategy is proposed. First, the "calm-to-apex" evolution of a dynamic expression is analyzed: the active appearance model (AAM) is used to define a peak distance (PD) and a neighboring distance (ND), and these two parameters are used to filter the expression sequence, removing neutral frames and frames whose expression is not obvious. Then, four consecutive frames of the sequence are taken as input and trained with three VGGFace deep convolutional networks; a classification loss function and a ranking loss function link the three networks so that the temporal information of the expression sequence is fused. Finally, experiments on the CK+ database show that the proposed method combines the feature-extraction capability of deep convolutional networks with the advantage of fusing expression-change information, and offers a clear advantage in recognition rate over recent expression recognition algorithms.
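The exact PD and ND formulas are not given in this abstract, so the following is only a minimal sketch of the frame-filtering idea. It assumes that PD is the Euclidean distance between a frame's AAM shape-parameter vector and that of the apex (last) frame, that ND is the distance between consecutive frames, and that the thresholds `pd_thresh` and `nd_thresh` are hypothetical values chosen for illustration.

```python
import numpy as np

def filter_expression_frames(aam_params, pd_thresh=0.4, nd_thresh=0.05):
    """Keep the informative frames of a "calm-to-apex" expression sequence.

    aam_params : (T, D) array with one AAM shape-parameter vector per frame.
    PD (peak distance)       : distance from each frame to the apex (last) frame.
    ND (neighboring distance): distance between consecutive frames.

    A frame is kept when it is close enough to the apex (small PD, i.e. not
    neutral) and still changes visibly from its predecessor (ND above a
    threshold). Both thresholds are illustrative, not the paper's values.
    """
    apex = aam_params[-1]                                   # apex / peak frame
    pd = np.linalg.norm(aam_params - apex, axis=1)          # peak distance per frame
    nd = np.r_[np.inf, np.linalg.norm(np.diff(aam_params, axis=0), axis=1)]  # neighboring distance
    keep = (pd <= pd_thresh * pd.max()) & (nd >= nd_thresh)
    return np.where(keep)[0]                                # indices of retained frames
```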
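Likewise, the sketch below shows one plausible way a classification loss and a ranking loss could link three time-ordered VGGFace branches; the class name `TemporalFusionLoss`, the margin, and the weight `lam` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalFusionLoss(nn.Module):
    """Joint classification + ranking loss over three time-ordered branches.

    logits_list holds the outputs of the three VGGFace branches, ordered from
    the earliest frame to the frame closest to the apex. Each branch is
    supervised with cross-entropy; the ranking term asks the score of the true
    class to grow (by at least `margin`) as the expression intensifies.
    `margin` and `lam` are illustrative hyper-parameters.
    """
    def __init__(self, margin=0.05, lam=1.0):
        super().__init__()
        self.margin = margin
        self.lam = lam

    def forward(self, logits_list, target):
        # Classification loss: average cross-entropy over the branches.
        cls_loss = sum(F.cross_entropy(lg, target) for lg in logits_list) / len(logits_list)

        # Ranking loss: probability of the true class should not decrease
        # from an earlier branch to a later one.
        probs = [F.softmax(lg, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
                 for lg in logits_list]
        rank_loss = 0.0
        for earlier, later in zip(probs[:-1], probs[1:]):
            rank_loss = rank_loss + F.relu(earlier - later + self.margin).mean()

        return cls_loss + self.lam * rank_loss
```

A call such as `TemporalFusionLoss()([logits_t0, logits_t1, logits_t2], labels)` would then back-propagate through all three branches at once, which is one way the temporal fusion described above could be realized.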
Authors
Xu Chunhe; Song Lingyun (College of Electrical Engineering, Suihua College, Suihua 152000, China)
Source
Foreign Electronic Measurement Technology (《国外电子测量技术》), Peking University core journal, 2021, No. 10, pp. 151-157 (7 pages)
Funding
Supported by the 2019 Basic Scientific Research Fund of the Heilongjiang Provincial Department of Education (KYYWF10236190111).
Keywords
dynamic expression recognition
active appearance model
deep convolutional neural networks
loss function