Funding: Supported by the National Natural Science Foundation of China (Nos. 60903126, 60872145), the Doctoral Fund of the Ministry of Education of China (No. 20090203120011), and the Basic Science Research Fund in Xidian University (No. 72105470).
Abstract: Pose manifold and tensor decomposition are used to represent the nonlinear appearance changes of multi-view faces for pose estimation, which cannot be handled well by principal component analysis or multilinear analysis methods. A pose manifold generation method is introduced to describe the nonlinearity of the pose subspace, and a nonlinear kernel-based method is used to build a smooth mapping from the low-dimensional pose subspace to the high-dimensional face image space. Tensor decomposition is then applied to the nonlinear mapping coefficients to build an accurate multi-pose face model for pose estimation. More importantly, this paper gives a proper distance measure on the pose manifold for the nonlinear mapping and pose estimation. Experiments on face images of unseen identities show that the proposed method increases the pose estimation rate by 13.8% and 10.9% over principal component analysis and multilinear analysis based methods, respectively. Thus, the proposed method can be used to estimate a wide range of head poses.
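As a rough illustration of the pipeline this abstract describes, the sketch below fits an RBF kernel regression from a one-dimensional pose coordinate to image space and applies an HOSVD-style decomposition to the stacked mapping coefficients. It is a minimal sketch on toy data, not the authors' implementation; the kernel choice, manifold dimension, and variable names are assumptions.

```python
# Minimal sketch (not the authors' code): RBF kernel regression from a
# low-dimensional pose parameter to image space, followed by an HOSVD-style
# decomposition of the per-identity mapping coefficients.
import numpy as np

def rbf_kernel(a, b, sigma=0.3):
    """Gaussian kernel matrix between pose parameters a (n,) and b (m,)."""
    d = a[:, None] - b[None, :]
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

# Toy data: 5 identities x 7 poses, each face image flattened to 64 dims.
rng = np.random.default_rng(0)
poses = np.linspace(-0.9, 0.9, 7)             # 1D pose manifold coordinates
images = rng.normal(size=(5, 7, 64))          # placeholder face images

# Learn mapping coefficients W_i such that K(poses, poses) @ W_i ~= images_i.
K = rbf_kernel(poses, poses) + 1e-6 * np.eye(len(poses))
coeffs = np.stack([np.linalg.solve(K, images[i]) for i in range(5)])  # (5, 7, 64)

# HOSVD-style decomposition: unfold the coefficient tensor along each mode
# and keep the leading singular vectors (identity / pose / pixel factors).
def mode_unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

factors = [np.linalg.svd(mode_unfold(coeffs, m), full_matrices=False)[0]
           for m in range(3)]
core = coeffs
for m, U in enumerate(factors):
    core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)

# A face at an unseen pose: evaluate the kernel map at that pose coordinate.
synth = rbf_kernel(np.array([0.25]), poses) @ coeffs[0]   # (1, 64)
print(core.shape, synth.shape)
```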
Funding: Project supported by the National Key Scientific Instrument and Equipment Development Project of China (No. 2013YQ49087903), the National Natural Science Foundation of China (No. 61402307), and the Educational Commission of Sichuan Province, China (No. 15ZA0007).
Abstract: Head pose estimation has been considered an important and challenging task in computer vision. In this paper we propose a novel method to estimate head pose based on a deep convolutional neural network (DCNN) for 2D face images. We design a simple and effective method to roughly crop the face from the input image while maintaining the relative proportions of each individual's facial features, and it can be applied across a wide range of poses. Two convolutional neural networks are then set up to train the head pose classifier and compared with each other. The simpler one has six layers; it performs well on seven yaw poses but is somewhat unsatisfactory when two pitch poses are mixed in. The other has eight layers and a larger input layer, and performs better on more poses and more training samples. Before training the networks, two augmentation strategies, shifting and zooming, are applied to prepare the training samples. Finally, the feature extraction filters are optimized jointly with the weights of the classification component through training, to minimize the classification error. Our method has been evaluated on the CAS-PEAL-R1, CMU PIE, and CUBIC FacePix databases and performs better than state-of-the-art methods for head pose estimation.
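For orientation, the sketch below shows the general pattern the abstract describes: a small convolutional pose classifier trained with shift and zoom augmentation, with filters and classifier weights optimized jointly against the classification error. The layer sizes, input resolution, and augmentation ranges are placeholders, not the paper's DCNN.

```python
# Minimal sketch (hypothetical layer sizes, not the paper's exact DCNN):
# a small CNN pose classifier with shift/zoom augmentation of the face crops.
import torch
import torch.nn as nn
from torchvision import transforms

NUM_POSES = 9  # e.g., 7 yaw poses + 2 pitch poses

# Applied to each cropped training image (PIL) when building the dataset.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1),   # shift
                            scale=(0.9, 1.1)),                  # zoom
    transforms.ToTensor(),
])

class PoseCNN(nn.Module):
    def __init__(self, num_classes=NUM_POSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # for 64x64 input

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Filters and classifier weights are optimized jointly by minimizing the
# classification (cross-entropy) error.
model = PoseCNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)          # a batch of cropped grey face images
y = torch.randint(0, NUM_POSES, (8,))  # pose class labels
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```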
Funding: Supported by the National Key Technologies R&D Program of China (No. 2016YFC0800501) and the National Natural Science Foundation of China (No. 61672481).
Abstract: This paper presents a joint head pose and facial landmark regression method that takes depth images as input and runs in real time. Our main contributions are: first, a joint optimization method to estimate the head pose and facial landmarks, i.e., the pose regression result provides supervised initialization for cascaded facial landmark regression, while the landmark regression result in turn helps to further refine the head pose at each stage; second, we divide the head pose space into 9 sub-spaces and use a cascaded random forest with a global shape constraint to train facial landmark regressors in each specific sub-space, a classification-guided strategy that effectively handles large pose changes and occlusion; last, we have built a 3D face database containing 73 subjects, each with 14 expressions in various head poses. Experiments on challenging databases show that our method achieves state-of-the-art performance on both head pose estimation and facial landmark regression.
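The alternating structure described above can be sketched as follows. The regressors here are stand-ins that apply dummy updates, and the pose binning, stage count, and landmark count are assumptions rather than the authors' settings.

```python
# Minimal sketch (hypothetical structure, not the authors' implementation) of
# the alternating cascade: the current pose selects a pose-specific landmark
# regressor, and the updated landmarks are used to refine the pose.
import numpy as np

N_STAGES = 4
N_POSE_BINS = 9
N_LANDMARKS = 68

def classify_pose_bin(pose):
    """Map a yaw angle (degrees) to one of 9 coarse pose sub-spaces."""
    return int(np.clip((pose[0] + 90) // 20, 0, N_POSE_BINS - 1))

def landmark_regressor(bin_id, depth_patch, shape):
    """Stand-in for the cascaded random forest of sub-space `bin_id`."""
    return shape + 0.1 * np.random.randn(*shape.shape)    # dummy shape update

def pose_regressor(depth_patch, shape, pose):
    """Stand-in for the pose refinement step driven by current landmarks."""
    return pose + 0.1 * np.random.randn(3)                # dummy pose update

def joint_cascade(depth_patch, init_pose, mean_shape):
    pose = init_pose.copy()                # yaw, pitch, roll (degrees)
    shape = mean_shape.copy()              # (N_LANDMARKS, 3) initial shape
    for _ in range(N_STAGES):
        bin_id = classify_pose_bin(pose)   # pose supervises the landmark stage
        shape = landmark_regressor(bin_id, depth_patch, shape)
        pose = pose_regressor(depth_patch, shape, pose)   # landmarks refine pose
    return pose, shape

pose, shape = joint_cascade(np.zeros((96, 96)), np.array([15.0, 0.0, 0.0]),
                            np.zeros((N_LANDMARKS, 3)))
print(pose.shape, shape.shape)
```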
Funding: Supported by the National Key Scientific Instrument and Equipment Development Project of China (No. 2013YQ49087903) and the National Natural Science Foundation of China (No. 61202160).
Abstract: Accurate head poses are useful for many face-related tasks such as face recognition, gaze estimation, and emotion analysis. Most existing methods estimate only head poses that are included in the training data (i.e., previously seen head poses). To predict head poses that are not seen in the training data, some regression-based methods have been proposed; however, they focus on estimating continuous head pose angles and thus do not systematically evaluate performance on predicting unseen head poses. In this paper, we use a dense multivariate label distribution (MLD) to represent the pose angles of a face image. By incorporating both seen and unseen pose angles into the MLD, the head pose predictor can estimate unseen head poses with an accuracy comparable to that of estimating seen head poses. On the Pointing'04 database, the mean absolute errors for yaw and pitch are 4.01° and 2.13°, respectively. In addition, experiments on the CAS-PEAL and CMU Multi-PIE databases show that the proposed dense MLD-based head pose estimation method achieves state-of-the-art performance compared with existing methods.
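A minimal sketch of the idea behind a dense multivariate label distribution is given below: a 2D Gaussian over discretized yaw/pitch bins that also covers angles absent from the training set, with the predicted pose taken as the distribution's expectation. The bin layout and bandwidths are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (hypothetical bin layout) of a dense multivariate label
# distribution: a 2D Gaussian over discretized yaw/pitch angles, including
# bins for poses that never occur in the training set.
import numpy as np

yaw_bins = np.arange(-90, 91, 15)      # includes unseen intermediate angles
pitch_bins = np.arange(-60, 61, 15)

def pose_label_distribution(yaw, pitch, sigma_yaw=15.0, sigma_pitch=15.0):
    """Return a normalized (len(pitch_bins), len(yaw_bins)) distribution
    centred on the ground-truth (yaw, pitch) of a face image."""
    Y, P = np.meshgrid(yaw_bins, pitch_bins)
    d = ((Y - yaw) ** 2) / (2 * sigma_yaw ** 2) + \
        ((P - pitch) ** 2) / (2 * sigma_pitch ** 2)
    dist = np.exp(-d)
    return dist / dist.sum()

def expected_pose(dist):
    """Predicted pose = expectation over the estimated distribution."""
    Y, P = np.meshgrid(yaw_bins, pitch_bins)
    return float((dist * Y).sum()), float((dist * P).sum())

mld = pose_label_distribution(yaw=22.5, pitch=-10.0)   # an "unseen" pose
print(mld.shape, expected_pose(mld))
```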
Abstract: Much progress has been made recently on 2D human pose tracking with tracking-by-detection approaches. However, several challenges remain, owing to self-occlusion and confusion between the left and right limbs during tracking. In this work, a head orientation detection step is introduced into the tracking framework as a complementary tool to assist human pose estimation. Once the face orientation is determined, the system can decide whether the left or right side of the body is fully visible and infer the state of its symmetric counterpart. By granting higher priority to the fully visible side, the system largely avoids double counting when inferring body poses. The proposed framework is evaluated on the HumanEva dataset; the results show that it greatly reduces the occurrence of double counting and distinguishes the left and right sides consistently.
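A toy sketch of the side-priority rule is given below; the sign convention, threshold, and weights are assumptions, and the actual framework embeds this decision in a full tracking-by-detection pipeline.

```python
# Minimal sketch (hypothetical sign convention and threshold) of using the
# detected head yaw to decide which body side is treated as fully visible and
# to give that side's limb hypotheses higher priority during inference.
def visible_side(head_yaw_deg, threshold=20.0):
    """Return 'left', 'right', or 'both' for the side assumed fully visible."""
    if head_yaw_deg > threshold:
        return 'left'
    if head_yaw_deg < -threshold:
        return 'right'
    return 'both'

def limb_weight(limb_side, head_yaw_deg):
    """Down-weight hypotheses on the (presumed) occluded side; its state is
    later inferred from the symmetric, visible counterpart."""
    vis = visible_side(head_yaw_deg)
    return 1.0 if vis in ('both', limb_side) else 0.5

print(visible_side(35.0), limb_weight('right', 35.0))
```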
Abstract: Objective: Existing facial expression recognition methods focus on improving overall recognition accuracy, and their robustness to head pose has not been studied adequately. In practical applications head poses vary widely and degrade expression recognition, so it is important to study the influence of head pose on expression recognition and to improve model robustness in this respect. Building on an in-depth analysis of this influence, we propose a semi-supervised expression recognition method that improves head-pose robustness using unlabeled non-frontal expression data. Method: First, the typical expression recognition dataset AffectNet is re-partitioned by head pose to construct the AffectNet-Yaw dataset, which supports accuracy testing at different angles and makes model comparison fairer. Second, a dual-consistency semi-supervised learning method for facial expression recognition (DCSSL) is proposed: a spatial consistency module constrains the consistency of class activations between a face image and its flipped version, so that training focuses on the key regions of facial expressions, while a semantic consistency module continuously selects high-quality non-frontal samples for model optimization through asymmetric data augmentation and self-training. Without any manual annotation of non-frontal expression data, the method learns directly from labeled frontal data and unlabeled non-frontal data. Finally, the cross-entropy loss, the spatial consistency loss, and the semantic consistency loss are jointly optimized to balance supervised and semi-supervised learning. Results: Experiments show that head pose significantly affects in-the-wild expression recognition; the proposed AffectNet-Yaw has a more balanced head pose distribution and effectively supports a comprehensive evaluation of this influence; and DCSSL, by combining spatial and semantic consistency constraints, makes full use of unlabeled non-frontal expression data and significantly improves robustness under head pose variation, raising the average expression recognition accuracy by 5.40% and 17.01% over the fully supervised methods MA-NET (multi-scale and local attention network) and EfficientFace, respectively. Conclusion: The proposed dual-consistency semi-supervised method makes full use of both frontal and non-frontal data and significantly improves expression recognition accuracy under head pose variation, and the new dataset effectively supports a comprehensive evaluation of the influence of head pose on expression recognition.
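The joint objective described above can be sketched roughly as follows. This is not the DCSSL release: the toy network, the CAM hook, the strong augmentation, and the confidence threshold are all illustrative assumptions; only the overall structure (cross-entropy plus spatial and semantic consistency terms) follows the abstract.

```python
# Minimal sketch (illustrative names/thresholds) of a dual-consistency loss:
# cross-entropy on labeled frontal faces, a spatial term between class
# activations of an image and its flip, and a semantic (self-training) term
# on confidently pseudo-labeled non-frontal faces.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFERNet(nn.Module):
    """Toy backbone with a CAM-style head (for illustration only)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)
        self.head = nn.Conv2d(16, n_classes, 1)           # per-class activation maps

    def cam(self, x):
        return self.head(F.relu(self.backbone(x)))        # (B, C, H, W)

    def forward(self, x):
        return self.cam(x).mean(dim=(2, 3))               # global-average logits

def strong_augment(x):
    """Illustrative strong augmentation: flip plus additive noise."""
    return torch.flip(x, dims=[3]) + 0.1 * torch.randn_like(x)

def dual_consistency_loss(model, x_frontal, y_frontal, x_profile,
                          lam_spatial=1.0, lam_semantic=1.0, conf_thresh=0.9):
    # Supervised cross-entropy on labeled frontal faces.
    loss_ce = F.cross_entropy(model(x_frontal), y_frontal)

    # Spatial consistency: class activation maps of an image and its
    # horizontal flip should agree after un-flipping.
    cam = model.cam(x_frontal)
    cam_flip = torch.flip(model.cam(torch.flip(x_frontal, dims=[3])), dims=[3])
    loss_spatial = F.mse_loss(cam, cam_flip)

    # Semantic consistency: pseudo-label confident unlabeled non-frontal faces
    # and train the strongly augmented view to match the pseudo-labels.
    with torch.no_grad():
        conf, pseudo = F.softmax(model(x_profile), dim=1).max(dim=1)
        mask = (conf > conf_thresh).float()
    per_sample = F.cross_entropy(model(strong_augment(x_profile)), pseudo,
                                 reduction='none')
    loss_semantic = (per_sample * mask).mean()

    return loss_ce + lam_spatial * loss_spatial + lam_semantic * loss_semantic

model = TinyFERNet()
loss = dual_consistency_loss(model, torch.randn(4, 3, 64, 64),
                             torch.randint(0, 7, (4,)),
                             torch.randn(4, 3, 64, 64))
loss.backward()
```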
Abstract: Facial expression recognition (FER) has numerous applications in computer security, neuroscience, psychology, and engineering. Owing to its non-intrusiveness, it is considered a useful technology for combating crime. However, FER is plagued by several challenges, the most serious of which is poor prediction accuracy under severe head poses. The aim of this study, therefore, is to improve recognition accuracy under severe head poses by proposing a robust 3D head-tracking algorithm based on an ellipsoidal model, an advanced ensemble of AdaBoost, and a saturated vector machine (SVM). The FER features are tracked from one frame to the next using the ellipsoidal tracking model, and the visible expressive facial key points are extracted using Gabor filters. The ensemble algorithm (Ada-AdaSVM) is then used for feature selection and classification. The proposed technique is evaluated on the Bosphorus, BU-3DFE, MMI, CK+, and BP4D-Spontaneous facial expression databases, and the overall performance is outstanding.
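As a small illustration of the Gabor-based key point features mentioned above, the sketch below builds a bank of Gabor kernels with OpenCV and samples their responses at visible key points. The filter parameters and key point coordinates are placeholders, and the tracking and Ada-AdaSVM stages are not reproduced.

```python
# Minimal sketch (illustrative parameters) of extracting Gabor responses at
# facial key points, the kind of per-landmark feature the tracking stage
# would feed to the feature-selection / classification ensemble.
import numpy as np
import cv2

def gabor_bank(ksize=21, sigmas=(4.0,),
               thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
               lambd=10.0, gamma=0.5):
    """Build a small bank of real Gabor kernels (cv2.getGaborKernel)."""
    return [cv2.getGaborKernel((ksize, ksize), s, t, lambd, gamma)
            for s in sigmas for t in thetas]

def keypoint_features(gray, keypoints, bank):
    """Stack the filter responses sampled at each visible key point."""
    responses = [cv2.filter2D(gray, cv2.CV_32F, k) for k in bank]
    feats = [[r[y, x] for r in responses] for (x, y) in keypoints]
    return np.asarray(feats, dtype=np.float32)   # (n_keypoints, n_filters)

gray = np.random.rand(128, 128).astype(np.float32)   # placeholder frame
keypoints = [(40, 60), (88, 60), (64, 96)]            # (x, y) of visible points
feats = keypoint_features(gray, keypoints, gabor_bank())
print(feats.shape)   # (3, 4)
```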