Journal Articles
91 articles found
1. Leveraging CNN to Analyse Facial Expressions for Academic Engagement Monitoring with Insights from the Multi-Source Academic Affective Engagement Dataset
Authors: Noora C T, Tamil Selvan P. Journal of Harbin Institute of Technology (New Series), 2025, Issue 2, pp. 65-79.
The dynamics of student engagement and emotional states significantly influence learning outcomes. Positive emotions resulting from successful task completion stand in contrast to negative affective states that arise from learning struggles or failures. Effective transitions to engagement occur upon problem resolution, while unresolved issues lead to frustration and subsequent boredom. This study proposes a Convolutional Neural Network (CNN)-based approach utilizing the Multi-source Academic Affective Engagement Dataset (MAAED) to categorize facial expressions into boredom, confusion, frustration, and yawning. This method provides an efficient and objective way to assess student engagement by extracting features from facial images. Recognizing and addressing negative affective states, such as confusion and boredom, is fundamental in creating supportive learning environments. Through automated frame extraction and model comparison, this study demonstrates reduced loss values with improving accuracy, showcasing the effectiveness of this method in objectively evaluating student engagement. Monitoring facial engagement with CNN using the MAAED dataset is essential for gaining insights into human behaviour and improving educational experiences.
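The pipeline summarized above (convolutional feature extraction from face images, then classification into the four MAAED categories) can be sketched in miniature. The following is an illustrative numpy toy, not the paper's model: the filters and weights are random and untrained, and the 48x48 array merely stands in for a preprocessed face crop.

```python
import numpy as np

LABELS = ["boredom", "confusion", "frustration", "yawning"]  # the four MAAED classes

def conv2d(img, kernel):
    """Valid-mode 2D convolution for a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(face, kernels, W, b):
    # one conv layer -> ReLU -> global average pool -> dense softmax head
    feats = np.array([np.maximum(conv2d(face, k), 0).mean() for k in kernels])
    return LABELS[int(np.argmax(softmax(W @ feats + b)))]

rng = np.random.default_rng(0)
face = rng.random((48, 48))               # stand-in for a preprocessed face crop
kernels = rng.standard_normal((8, 3, 3))  # 8 untrained 3x3 filters
W, b = rng.standard_normal((4, 8)), np.zeros(4)
print(classify(face, kernels, W, b))      # one of the four labels (untrained, so arbitrary)
```

A real model would of course learn the filters and weights from the MAAED frames rather than draw them at random.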
Keywords: emotion recognition; student engagement; facial expressions; academic affective engagement; MAAED
2. A Modified CNN Network for Automatic Pain Identification Using Facial Expressions (Cited: 1)
Authors: Ioannis Karamitsos, Ilham Seladji, Sanjay Modak. Journal of Software Engineering and Applications, 2021, Issue 8, pp. 400-417.
Pain is a strong symptom of diseases. Being an involuntary unpleasant feeling, it can be considered a reliable indicator of health issues. Pain has always been expressed verbally, but in some cases, traditional patient self-reporting is not efficient. On one side, there are patients who have neurological disorders and cannot express themselves accurately, as well as patients who suddenly lose consciousness due to an abrupt faintness. On another side, medical staff working in crowded hospitals need to focus on emergencies and would opt for the automation of the task of looking after hospitalized patients during their entire stay, in order to notice any pain-related emergency. These issues can be tackled with deep learning. Knowing that pain is generally followed by spontaneous facial behaviors, facial expressions can be used as a substitute for verbal reporting to express pain. In this paper, a convolutional neural network (CNN) model was built and trained to detect pain through patients' facial expressions, using the UNBC-McMaster Shoulder Pain dataset. First, faces were detected from images using the Haar cascade frontal face detector provided by OpenCV, and preprocessed through grayscaling, histogram equalization, face detection, image cropping, mean filtering, and normalization. Next, preprocessed images were fed into a CNN model built on a modified version of the VGG16 architecture. The model was finally evaluated and fine-tuned continuously based on its accuracy, which reached 92.5%.
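The preprocessing chain described (grayscale, histogram equalization, cropping, mean filtering, normalization) is standard image conditioning. A minimal numpy sketch of two of those steps, histogram equalization and mean filtering, might look like this; the random array stands in for a cropped grayscale face, whereas the paper works with OpenCV on real frames.

```python
import numpy as np

def equalize_hist(gray):
    """Histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())  # stretch CDF to [0, 255]
    return cdf[gray].astype(np.uint8)

def mean_filter(img, k=3):
    """k x k mean filter via edge padding and window averaging."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def preprocess(gray_face):
    eq = equalize_hist(gray_face)
    smoothed = mean_filter(eq)
    return smoothed / 255.0  # normalize intensities to [0, 1]

rng = np.random.default_rng(1)
face = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a cropped face
x = preprocess(face)
print(x.shape)
```

The normalized array is what would then be fed to the CNN.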
Keywords: CNN; computer vision; facial expressions; image processing; pain assessment
3. On the Importance of Bodily Gestures, Facial Expressions, and Intonations to Thinking Expression and Interpretation
Authors: 石桐, 蒋翃遐. 《海外英语》 (Overseas English), 2021, Issue 17, pp. 288-289.
Bodily gestures, facial expressions, and intonations are argued to be notably important features of spoken language, as opposed to written language. Bodily gestures, with or without spoken words, can influence the clarity and density of expression and the involvement of listeners. Facial expressions, whether or not they correspond with exact thought, can be "decoded" to influence the extent of intelligibility of expression. Intonation can always reflect the mutual beliefs concerning the propositional content and states of consciousness relating to the expression and interpretation. Therefore, these can considerably improve or abate the accuracy of expression and interpretation of thought.
Keywords: bodily gestures; facial expressions; intonations; thought
4. EEG Mapping of Cortical Activation Related to Emotional Stroop with Facial Expressions: A TREFACE Study
Authors: Edward Prada, Maria C. H. Tavares, Ana Garcia, Corina Satler, Lia Martinez, Cândida H. L. Alves, Carlos Tomaz. Journal of Behavioral and Brain Science, CAS, 2022, Issue 10, pp. 514-532.
TREFACE (Test for Recognition of Facial Expressions with Emotional Conflict) is a computerized model for investigating the emotional factor in executive functions based on the Stroop paradigm, for the recognition of emotional expressions in human faces. To investigate the influence of the emotional component at the cortical level, the electroencephalographic (EEG) recording technique was used to measure the involvement of cortical areas during the execution of certain tasks. Thirty Brazilian native Portuguese-speaking graduate students were evaluated on their anxiety and depression levels and on their well-being at the time of the session. The EEG recording was performed in 19 channels during the execution of the TREFACE test in the 3 stages established by the model (guided training, reading, and recognition), each with a congruent condition, when the image corresponds to the word shown, and an incongruent condition, when there is no correspondence. The results showed better performance in the reading stage and in congruent conditions, and greater intensity of cortical activation in the recognition stage and in incongruent conditions. In a complementary way, specific frontal activations were observed: intense theta-frequency activation in the left extension, representing frontal recruitment of posterior regions in information processing; activation in alpha frequency in the right frontotemporal line, illustrating executive processing in the control of attention, in addition to the dorsal manifestation of the prefrontal side for emotional performance. Activations in beta and gamma frequencies were displayed in a more intensely distributed way in the recognition stage. The results of this mapping of cortical activity can help to understand how words and images of faces can be regulated in everyday life and in clinical contexts, suggesting an integrated model that includes the neural bases of the regulation strategy.
Keywords: EEG; emotion; facial expressions; executive functions; Stroop; TREFACE
5. Brain pathways of pain empathy activated by pained facial expressions: a meta-analysis of fMRI using the activation likelihood estimation method (Cited: 2)
Authors: Ruo-Chu Xiong, Xin Fu, Li-Zhen Wu, Cheng-Han Zhang, Hong-Xiang Wu, Yu Shi, Wen Wu. Neural Regeneration Research, SCIE CAS CSCD, 2019, Issue 1, pp. 172-178.
OBJECTIVE: To summarize and analyze the brain signal patterns of empathy for pain caused by facial expressions of pain, utilizing activation likelihood estimation (ALE), a meta-analysis method. DATA SOURCES: Studies concerning the brain mechanism were searched from the Science Citation Index, Science Direct, PubMed, DeepDyve, Cochrane Library, SinoMed, Wanfang, VIP, China National Knowledge Infrastructure, and other databases such as SpringerLink, AMA, Science Online, and Wiley Online. A time limitation of up to 13 December 2016 was applied. DATA SELECTION: Studies meeting all of the following criteria were considered for inclusion: use of functional magnetic resonance imaging; neutral and pained facial expression stimuli; involvement of healthy adult human participants over 18 years of age whose empathy ability showed no difference from that of healthy adults; a painless basic state; results presented in Talairach or Montreal Neurological Institute coordinates; multiple studies by the same team were allowed as long as they used different raw data. OUTCOME MEASURES: Activation likelihood estimation was used to calculate the combined main activated brain regions under the stimulation of pained facial expressions. RESULTS: Eight studies were included, containing 178 subjects. Meta-analysis results suggested that the anterior cingulate cortex (BA32), anterior central gyrus (BA44), fusiform gyrus, and insula (BA13) were positively activated as the major brain areas under the stimulation of pained facial expressions. CONCLUSION: Pained facial expressions alone, without viewing of painful stimuli, activated brain regions related to pain empathy, further contributing to revealing the brain's mechanisms of pain empathy.
Keywords: nerve regeneration; facial expression; pain empathy; functional magnetic resonance imaging; GingerALE; activation likelihood estimation; brain function imaging; anterior cingulate cortex; anterior central gyrus; fusiform gyrus; insula; neural regeneration
6. Probing the processing of facial expressions in monkeys via time perception and eye tracking (Cited: 1)
Authors: Xin-He Liu, Lu Gan, Zhi-Ting Zhang, Pan-Ke Yu, Ji Dai. Zoological Research, SCIE CSCD, 2023, Issue 5, pp. 882-893.
Accurately recognizing facial expressions is essential for effective social interactions. Non-human primates (NHPs) are widely used in the study of the neural mechanisms underpinning facial expression processing, yet it remains unclear how well monkeys can recognize the facial expressions of other species such as humans. In this study, we systematically investigated how monkeys process the facial expressions of conspecifics and humans using eye-tracking technology and sophisticated behavioral tasks, namely the temporal discrimination task (TDT) and face scan task (FST). We found that monkeys showed prolonged subjective time perception in response to negative facial expressions in monkeys, while showing longer reaction times to negative facial expressions in humans. Monkey faces also reliably induced divergent pupil contraction in response to different expressions, while human faces and scrambled monkey faces did not. Furthermore, viewing patterns in the FST indicated that monkeys only showed bias toward emotional expressions upon observing monkey faces. Finally, masking the eye region marginally decreased the viewing duration for monkey faces but not for human faces. By probing facial expression processing in monkeys, our study demonstrates that monkeys are more sensitive to the facial expressions of conspecifics than to those of humans, thus shedding new light on inter-species communication through facial expressions between NHPs and humans.
Keywords: monkey; facial expression; time perception; eye tracking; pupil size
7. AI-assisted flexible electronics in humanoid robot heads for natural and authentic facial expressions (Cited: 1)
Authors: Nian Dai, Kaijun Zhang, Fan Zhang, Junfeng Li, Junwen Zhong, YongAn Huang, Han Ding. The Innovation, 2025, Issue 2, pp. 13-15.
The realization of natural and authentic facial expressions in humanoid robots poses a challenging and prominent research domain, encompassing interdisciplinary facets including mechanical design, sensing and actuation control, psychology, cognitive science, flexible electronics, artificial intelligence (AI), etc. We have traced the recent developments of humanoid robot heads for facial expressions, discussed major challenges in embodied AI and flexible electronics for facial expression recognition and generation, and highlighted future trends in this field. Developing humanoid robot heads with natural and authentic facial expressions demands collaboration in interdisciplinary fields such as multi-modal sensing, emotional computing, and human-robot interactions (HRIs) to advance the emotional anthropomorphism of humanoid robots, bridging the gap between humanoid robots and human beings and enabling seamless HRIs.
Keywords: facial expressions; embodied AI; humanoid robot heads; flexible electronics; humanoid robots
8. Generation of Performance-Driven Facial Expressions
Authors: GOU Ye, WANG Xiao-kan. Computer Aided Drafting, Design and Manufacturing, 2009, Issue 2, pp. 56-62.
Coordinates of the key facial feature points can be captured by the motion capture system OPTOTRAK in real time and with high accuracy. The facial model is considered as an undirected weighted graph. By iteratively subdividing the related triangle edges, the geodesic distance between points on the model surface is obtained. An RBF (radial basis function) interpolation technique based on geodesic distance is applied to generate the deformation of the facial mesh model. Experimental results demonstrate that the geodesic distance can capture the complex topology of human face models well and that the method can generate realistic facial expressions.
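RBF interpolation of mesh deformation from a few tracked feature points can be sketched as follows. This toy uses Euclidean distance and a multiquadric basis for brevity, whereas the paper computes geodesic distances on the face graph; the control points and displacements below are made up for the example.

```python
import numpy as np

def phi(r, c=0.5):
    """Multiquadric radial basis function."""
    return np.sqrt(r * r + c * c)

def pairwise(A, B):
    """Euclidean distances; the paper substitutes geodesic distance on the mesh."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

# control (feature) points on the neutral face and their captured displacements
ctrl = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.2]])
disp = np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0], [0.1, 0.0, 0.0], [0.0, 0.0, 0.1]])

# solve for RBF weights: one weight column per coordinate axis
W = np.linalg.solve(phi(pairwise(ctrl, ctrl)), disp)

# deform arbitrary mesh vertices by evaluating the interpolant
verts = np.array([[0.5, 0.5, 0.0], [0.2, 0.8, 0.1]])
deformed = verts + phi(pairwise(verts, ctrl)) @ W
print(deformed.shape)  # (2, 3)
```

By construction the interpolant reproduces the captured displacements exactly at the control points, which is the property that makes RBFs attractive for performance-driven deformation.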
Keywords: facial expression; performance-driven; RBF; geodesic distance
9. A Deep Learning-Based Automated Approach of Schizophrenia Detection from Facial Micro-Expressions
Authors: Anum Saher, Ghulam Gilanie, Sana Cheema, Akkasha Latif, Syeda Naila Batool, Hafeez Ullah. Intelligent Automation & Soft Computing, 2024, Issue 6, pp. 1053-1071.
Schizophrenia is a severe mental illness responsible for many of the world's disabilities. It significantly impacts human society; thus, rapid and efficient identification is required. This research aims to diagnose schizophrenia directly from a high-resolution camera, which can capture the subtle facial micro-expressions that are difficult to spot with the naked eye. In a clinical study by a team of experts at Bahawal Victoria Hospital (BVH), Bahawalpur, Pakistan, there were 300 people with schizophrenia and 299 healthy subjects. Videos of these participants were captured and converted into frames using the OpenFace tool. Additionally, pose, gaze, Action Unit (AU), and landmark features were extracted into Comma-Separated Values (CSV) files. Aligned faces were used to detect schizophrenia with the proposed and pre-trained Convolutional Neural Network (CNN) models, i.e., VGG16, MobileNet, EfficientNet, GoogLeNet, and ResNet50. Moreover, the Vision Transformer, Swin Transformer, big transformer, and vision transformer without attention were also trained on the customized dataset. The CSV files were used to train models with logistic regression, decision tree, random forest, gradient boosting, and support vector machine classifiers. The parameters of the proposed CNN architecture were optimized using the Particle Swarm Optimization algorithm. The experimental results showed a validation accuracy of 99.6% for the proposed CNN model, demonstrating that the reported method is superior to previous methodologies. The model can be deployed in a real-time environment.
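One branch of the pipeline above, training a classical classifier on the extracted CSV features, can be illustrated with a plain logistic regression in numpy. The clustered random features and the binary labels are toy stand-ins, not BVH data, and the feature dimensionality is invented for the sketch.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Binary logistic regression fitted by batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        g = p - y                                # gradient of log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(2)
# stand-in for pose/gaze/AU features: two well-separated clusters
X = np.vstack([rng.normal(-1, 0.3, (50, 4)), rng.normal(1, 0.3, (50, 4))])
y = np.array([0] * 50 + [1] * 50)  # toy labels: 0 = healthy, 1 = patient
w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("train accuracy:", (pred == y).mean())
```

On real OpenFace feature tables one would of course hold out a test split rather than report training accuracy.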
Keywords: schizophrenia; deep learning; machine learning; facial expressions; transformers; particle swarm optimization (PSO) algorithm
10. Real-Time Facial Expression Recognition on Res-MobileNetV3
Authors: Li Beibei, Zhu Jiansheng, Li Suwen, Dai Linlin, Yan Zhiyuan, Ma Liangde. China Communications, 2025, Issue 3, pp. 54-64.
Artificial intelligence, such as deep learning technology, has advanced the study of facial expression recognition, since facial expression carries rich emotional information and is significant for many naturalistic situations. To pursue high facial expression recognition accuracy, deep learning network models are generally designed to be very deep, while the model's real-time performance is typically constrained and limited. Starting from MobileNetV3, a lightweight model with good accuracy, a further study is conducted by adding a basic ResNet module to each of its existing modules and an SSH (Single Stage Headless Face Detector) context module to expand the model's perceptual field. The enhanced model, named Res-MobileNetV3, alleviates the subpar real-time performance and compresses the size of large network models; it can process information at a rate of up to 33 frames per second. Although the improved model is slightly inferior to the current state-of-the-art methods in accuracy on publicly available facial expression datasets, it achieves a good balance among accuracy, real-time performance, model size, and model complexity in practical applications.
Keywords: artificial intelligence; facial expression recognition; MobileNetV3; ResNet; SSH
11. SG-TE: Spatial Guidance and Temporal Enhancement Network for Facial-Bodily Emotion Recognition
Authors: Zhong Huang, Danni Zhang, Fuji Ren, Min Hu, Juan Liu, Haitao Yu. CAAI Transactions on Intelligence Technology, 2025, Issue 3, pp. 871-890.
To overcome the deficiencies of single-modal emotion recognition based on facial expression or bodily posture in natural scenes, a spatial guidance and temporal enhancement (SG-TE) network is proposed for facial-bodily emotion recognition. First, ResNet50, DNN, and spatial transformer models are used to capture facial texture vectors, bodily skeleton vectors, and whole-body geometric vectors, and an intraframe correlation attention guidance (S-CAG) mechanism, which guides the facial texture vector and the bodily skeleton vector by the whole-body geometric vector, is designed to exploit the potential spatial emotional correlation between face and posture. Second, an interframe significant segment enhancement (T-SSE) structure is embedded into a temporal transformer to enhance high-emotional-intensity frame information and avoid emotional asynchrony. Finally, an adaptive weight assignment (M-AWA) strategy is constructed to realise facial-bodily fusion. The experimental results on the BabyRobot Emotion Dataset (BRED) and the Context-Aware Emotion Recognition (CAER) dataset indicate that the proposed network reaches accuracies of 81.61% and 89.39%, which are 9.61% and 9.46% higher than those of the baseline network, respectively. Compared with the state-of-the-art methods, the proposed method achieves 7.73% and 20.57% higher accuracy than single-modal methods based on facial expression or bodily posture, respectively, and 2.16% higher accuracy than dual-modal methods based on facial-bodily fusion. Therefore, the proposed method, which adaptively fuses the complementary information of face and posture, improves the quality of emotion recognition in real-world scenarios.
Keywords: bodily posture; facial expression; intraframe spatial guidance; interframe temporal enhancement; multimodal feature fusion
12. The use of facial expressions in measuring students' interaction with distance learning environments during the COVID-19 crisis
Authors: Waleed Maqableh, Faisal Y. Alzyoud, Jamal Zraqou. Visual Informatics, EI, 2023, Issue 1, pp. 1-17.
Digital learning became increasingly important during the COVID-19 crisis and is widespread in most countries. The proliferation of smart devices and 5G telecommunications systems is contributing to the development of digital learning systems as an alternative to traditional learning systems. Digital learning includes blended learning, online learning, and personalized learning, which mainly depend on the use of new technologies and strategies, so digital learning is widely developed to improve education and combat emerging disasters such as the COVID-19 disease. Despite the tremendous benefits of digital learning, there are many obstacles related to the lack of digitized curricula and of collaboration between teachers and students. Therefore, many attempts have been made to improve learning outcomes through the following strategies: collaboration, teacher convenience, personalized learning, cost and time savings through professional development, and modeling. In this study, facial expressions and heart rates are used to measure the effectiveness of digital learning systems and the level of learners' engagement in learning environments. The results showed that the proposed approach outperformed related works in terms of learning effectiveness. The results of this research can be used to develop digital learning environments.
Keywords: e-learning; COVID-19; face-to-face learning; facial expressions; heart pulse
13. Using Kinect for real-time emotion recognition via facial expressions (Cited: 4)
Authors: Qi-rong MAO, Xin-yu PAN, Yong-zhao ZHAN, Xiang-jun SHEN. Frontiers of Information Technology & Electronic Engineering, SCIE EI CSCD, 2015, Issue 4, pp. 272-282.
Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their performance is usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions from these real-time facial expression features. Experiments on both an emotion dataset and real-time video show the superior performance of our method.
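The maximum-confidence part of the fusion can be illustrated simply: keep the modality (AU-based or FPP-based) whose top class is more confident. This is a simplified stand-in for the paper's IEP-based algorithm, and the emotion list and probability vectors below are invented for the example.

```python
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise", "neutral"]  # illustrative set

def fuse_max_confidence(p_au, p_fpp):
    """Select the per-modality class distribution whose peak is higher."""
    return p_au if p_au.max() >= p_fpp.max() else p_fpp

p_au = np.array([0.1, 0.6, 0.1, 0.1, 0.1])   # scores from animation units
p_fpp = np.array([0.2, 0.3, 0.2, 0.2, 0.1])  # scores from feature point positions
fused = fuse_max_confidence(p_au, p_fpp)
print(EMOTIONS[int(np.argmax(fused))])       # → "happiness"
```

Here the AU channel peaks at 0.6 versus 0.3 for FPPs, so its decision wins; the actual method weights the modalities through the improved emotional profiles rather than a bare max.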
Keywords: Kinect; emotion recognition; facial expression; real-time classification; fusion algorithm; support vector machine (SVM)
14. How Do Facial Expressions of Recipients Influence Online Prosocial Behaviors? Evidence from Big Data Analysis on the Tencent Gongyi Platform
Authors: Lihan He, Tianguang Meng. Journal of Social Computing, EI, 2023, Issue 4, pp. 337-356.
Cyberspace has significantly influenced people's perceptions of social interactions and communication. As a result, the conventional theories of kin selection and reciprocal altruism fall short of completely elucidating online prosocial behavior. Based on the social information processing model, we propose an analytical framework to explain donation behaviors on online platforms. By collecting textual and visual data from the Tencent Gongyi platform pertaining to disease relief projects, and employing techniques encompassing text analysis, image analysis, and propensity score matching, we investigate the impact of both internal emotional cues and external contextual cues on donation behaviors. We find that positive emotions tend to attract a larger number of donations, while negative emotions tend to result in higher per capita donation amounts. Furthermore, these effects manifest differently under distinct external contextual conditions.
Keywords: online prosocial behavior; donation behavior; facial expression; big data; image analysis
15. Global-local combined features to detect pain intensity from facial expression images with attention mechanism
Authors: Jiang Wu, Yi Shi, Shun Yan, Hong-Mei Yan. Journal of Electronic Science and Technology, EI CAS CSCD, 2024, Issue 3, pp. 80-93.
The estimation of pain intensity is critical for the medical diagnosis and treatment of patients. With the development of image monitoring technology and artificial intelligence, automatic pain assessment based on facial expression and behavioral analysis shows potential value in clinical applications. This paper reports a framework of a convolutional neural network with a global and local attention mechanism (GLA-CNN) for the effective detection of pain intensity at four-level thresholds using facial expression images. GLA-CNN includes two modules, namely the global attention network (GANet) and the local attention network (LANet). LANet is responsible for extracting representative local patch features of faces, while GANet extracts whole facial features to compensate for the ignored correlative features between patches. In the end, the global correlational and local subtle features are fused for the final estimation of pain intensity. Experiments on the UNBC-McMaster Shoulder Pain database demonstrate that GLA-CNN outperforms other state-of-the-art methods. Additionally, a visualization analysis is conducted to present the feature maps of GLA-CNN, intuitively showing that it can extract not only local pain features but also global correlative facial ones. Our study demonstrates that pain assessment based on facial expression is a non-invasive and feasible method and can be employed as an auxiliary pain assessment tool in clinical practice.
Keywords: attention; convolutional neural network; facial expression; pain intensity
16. Facial expression recognition based on fuzzy-LDA/CCA (Cited: 1)
Authors: 周晓彦, 郑文明, 邹采荣, 赵力. Journal of Southeast University (English Edition), EI CAS, 2008, Issue 4, pp. 428-432.
A novel fuzzy linear discriminant analysis method based on canonical correlation analysis (fuzzy-LDA/CCA) is presented and applied to facial expression recognition. The fuzzy method is used to evaluate the degree of class membership to which each training sample belongs. CCA is then used to establish the relationship between each facial image and the corresponding class membership vector, and the class membership vector of a test image is estimated using this relationship. Moreover, the fuzzy-LDA/CCA method is also generalized to deal with nonlinear discriminant analysis problems via the kernel method. The performance of the proposed method is demonstrated using real data.
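The "degree of class membership" for each training sample can be computed in several ways; a common choice is a fuzzy k-NN style soft label, sketched below in numpy. The paper's exact membership scheme may differ, and the 2D points here are toy data for illustration.

```python
import numpy as np

def fuzzy_membership(x, X_train, y_train, n_classes, k=3, m=2.0):
    """Soft class-membership vector for sample x from its k nearest
    training neighbours, weighted by inverse distance (fuzzifier m)."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + 1e-9)  # closer neighbours weigh more
    mu = np.zeros(n_classes)
    for i, wi in zip(idx, w):
        mu[y_train[i]] += wi
    return mu / mu.sum()  # normalize to a membership distribution

X_train = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y_train = np.array([0, 0, 1, 1])
mu = fuzzy_membership(np.array([0.05, 0.0]), X_train, y_train, n_classes=2)
print(mu)  # heavily weighted toward class 0
```

A membership vector like `mu` is exactly the kind of soft target that CCA can then relate to the image features.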
Keywords: fuzzy linear discriminant analysis; canonical correlation analysis; facial expression recognition
17. Robust Face Recognition Against Expressions and Partial Occlusions (Cited: 5)
Authors: Fadhlan Kamaru Zaman, Amir Akramin Shafie, Yasir Mohd Mustafah. International Journal of Automation and Computing, EI CSCD, 2016, Issue 4, pp. 319-337.
Facial features under variant expressions and partial occlusions can degrade overall face recognition performance. As a solution, we suggest that the contribution of these features to the final classification should be determined. In order to represent facial features' contributions according to their variations, we propose a feature selection process that describes facial features as local independent component analysis (ICA) features. These local features are acquired using a locally lateral subspace (LLS) strategy. Then, through linear discriminant analysis (LDA), we investigate the intraclass and interclass representation of each local ICA feature and express each feature's contribution via a weighting process. Using these weights, we define the contribution of each feature at the local classifier level. In order to recognize faces under the single-sample constraint, we implement the LLS strategy on locally linear embedding (LLE) along with the proposed feature selection. Additionally, we highlight the efficiency of the LLS strategy's implementation. The overall accuracy achieved by our approach on datasets with different facial expressions and partial occlusions, such as AR, JAFFE, FERET, and CK+, is 90.70%. We also present survey results on face recognition performance and on physiological feature selection performed by human subjects.
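Weighting each feature by its interclass-to-intraclass variation, as the abstract describes, is in spirit a Fisher-score computation. A minimal numpy version, on made-up features where feature 0 is discriminative and feature 1 is pure noise, might look like this (this is a generic sketch, not the paper's exact weighting).

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature ratio of between-class to within-class variation."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
                  for c in classes)
    within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                 for c in classes)
    return between / (within + 1e-12)

rng = np.random.default_rng(3)
# feature 0 separates the two classes; feature 1 is noise
X = np.column_stack([np.r_[rng.normal(-2, 0.5, 40), rng.normal(2, 0.5, 40)],
                     rng.normal(0, 1.0, 80)])
y = np.array([0] * 40 + [1] * 40)
scores = fisher_scores(X, y)
print(scores)  # score for feature 0 dwarfs that of feature 1
```

Normalizing such scores yields exactly the kind of per-feature weights a bank of local classifiers can be combined with.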
Keywords: face recognition; facial expressions; dimensionality reduction; single sample; feature selection
18. Processing Environmental Stimuli in Paranoid Schizophrenia: Recognizing Facial Emotions and Performing Executive Functions (Cited: 4)
Authors: YU Shao Hua, ZHU Jun Peng, XU You, ZHENG Lei Lei, CHAI Hao, HE Wei, LIU Wei Bo, LI Hui Chun, WANG Wei. Biomedical and Environmental Sciences, SCIE CAS CSCD, 2012, Issue 6, pp. 697-705.
Objective: To study the contribution of executive function to the abnormal recognition of facial expressions of emotion in schizophrenia patients. Methods: Abnormal recognition of facial expressions of emotion was assayed according to the Japanese and Caucasian Facial Expressions of Emotion (JACFEE), the Wisconsin Card Sorting Test (WCST), the Positive and Negative Symptom Scale, and the Hamilton Anxiety and Depression Scales, respectively, in 88 paranoid schizophrenia patients and 75 healthy volunteers. Results: Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, displayed lower JACFEE recognition accuracies, and showed poorer WCST performance. In patients, the JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom scale score, the recognition accuracy for fear was positively correlated with the positive symptom scale score, and the recognition accuracy for surprise was negatively correlated with the general psychopathology score. Moreover, the WCST could predict the JACFEE recognition accuracy for contempt, disgust, and sadness in patients, and perseverative errors negatively predicted the recognition accuracy for sadness in healthy volunteers. The JACFEE recognition accuracy for sadness could predict the WCST categories in paranoid schizophrenia patients. Conclusion: Recognition accuracy for social/moral emotions, such as contempt, disgust, and sadness, is related to executive function in paranoid schizophrenia patients, especially regarding sadness.
Keywords: executive function; Japanese and Caucasian facial expressions of emotion; paranoid schizophrenia; Wisconsin card sorting test
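The correlational analyses summarized in this abstract can be illustrated with a toy sketch. The numbers below are hypothetical, invented purely for illustration; they are not data from the study, and the code is not the authors' analysis pipeline:

```python
import numpy as np

# Hypothetical scores: emotion-recognition accuracy vs. a symptom-scale score
# (illustrative numbers only -- not data from the study)
accuracy = np.array([0.80, 0.72, 0.65, 0.60, 0.55, 0.50])
symptoms = np.array([10.0, 14.0, 18.0, 22.0, 25.0, 30.0])

# Pearson correlation coefficient between the two variables
r = np.corrcoef(accuracy, symptoms)[0, 1]
print(r < 0)  # a negative r mirrors the reported accuracy-symptom association
```

A negative coefficient here corresponds to the kind of inverse accuracy-symptom relationship the abstract reports; the study itself additionally used regression to test whether WCST scores predicted recognition accuracy.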
Robust facial expression recognition system in higher poses
19
Authors: Ebenezer Owusu, Justice Kwame Appati, Percy Okae. 《Visual Computing for Industry, Biomedicine, and Art》 EI, 2022, No. 1, pp. 159-173 (15 pages)
Facial expression recognition (FER) has numerous applications in computer security, neuroscience, psychology, and engineering. Owing to its non-intrusiveness, it is considered a useful technology for combating crime. However, FER is plagued by several challenges, the most serious of which is its poor prediction accuracy in severe head poses. The aim of this study, therefore, is to improve recognition accuracy in severe head poses by proposing a robust 3D head-tracking algorithm based on an ellipsoidal model, an advanced ensemble of AdaBoost, and a support vector machine (SVM). The FER features are tracked from one frame to the next using the ellipsoidal tracking model, and the visible expressive facial key points are extracted using Gabor filters. The ensemble algorithm (Ada-AdaSVM) is then used for feature selection and classification. The proposed technique is evaluated using the Bosphorus, BU-3DFE, MMI, CK+, and BP4D-Spontaneous facial expression databases. The overall performance is outstanding.
Keywords: facial expressions; three-dimensional head pose; ellipsoidal model; Gabor filters; Ada-AdaSVM
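The Gabor-filter feature extraction step mentioned in the abstract can be sketched roughly as follows. This is an illustrative NumPy implementation with assumed kernel parameters (size, sigma, wavelength), not the authors' Ada-AdaSVM pipeline:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by the filter orientation theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

def gabor_features(patch, orientations=4):
    """Mean absolute filter response per orientation -- a tiny feature vector."""
    feats = []
    for k in range(orientations):
        kern = gabor_kernel(theta=k * np.pi / orientations)
        # 'valid'-mode correlation via sliding windows (no SciPy dependency)
        h, w = kern.shape
        windows = np.lib.stride_tricks.sliding_window_view(patch, (h, w))
        resp = np.einsum('ijkl,kl->ij', windows, kern)
        feats.append(np.abs(resp).mean())
    return np.array(feats)

patch = np.random.default_rng(0).random((32, 32))  # stand-in for a face region
print(gabor_features(patch).shape)  # one feature per filter orientation
```

In a full system, responses at several scales and orientations around detected facial key points would form the feature vector handed to the ensemble classifier.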
Functional near-infrared spectroscopy can detect low-frequency hemodynamic oscillations in the prefrontal cortex during steady-state visual evoked potential-inducing periodic facial expression stimuli presentation
20
Authors: Meng-Yun Wang, Anzhe Yuan, and 2 more: Juan Zhang, Yutao Xiang, Zhen Yuan. 《Visual Computing for Industry, Biomedicine, and Art》 2020, No. 1, pp. 321-328 (8 pages)
Brain oscillations are vital to cognitive functions, while disrupted oscillatory activity is linked to various brain disorders. Although high-frequency neural oscillations (>1 Hz) have been extensively studied in cognition, the neural mechanisms underlying low-frequency hemodynamic oscillations (LFHO) <1 Hz have not yet been fully explored. One way to examine oscillatory neural dynamics is to use a facial expression (FE) paradigm to induce steady-state visual evoked potentials (SSVEPs), which has been used in electroencephalography studies of high-frequency brain oscillation activity. In this study, LFHO during SSVEP-inducing periodic flickering stimuli presentation were inspected using functional near-infrared spectroscopy (fNIRS), in which hemodynamic responses in the prefrontal cortex were recorded while participants passively viewed dynamic FEs flickering at 0.2 Hz. The fast Fourier analysis results demonstrated that the power exhibited monochronic peaks at 0.2 Hz across all channels, indicating that the periodic events successfully elicited LFHO in the prefrontal cortex. More importantly, measurement of LFHO can effectively distinguish brain activation differences between cognitive conditions, with happy FE presentation showing greater LFHO power than neutral FE presentation. These results demonstrate that stimuli flashing at a given frequency can induce LFHO in the prefrontal cortex, which provides new insights into the cognitive mechanisms involved in slow oscillations.
Keywords: steady-state visual evoked potentials; dynamic facial expressions; functional near-infrared spectroscopy; brain oscillation
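The fast Fourier analysis used to locate the stimulation-frequency peak can be illustrated on a synthetic trace. The sampling rate and noise level below are assumptions chosen for the demo; this is not the study's data or analysis code:

```python
import numpy as np

fs = 10.0          # sampling rate in Hz (assumed; many fNIRS systems sample ~10 Hz)
duration = 200.0   # seconds of simulated recording
t = np.arange(0, duration, 1 / fs)

rng = np.random.default_rng(42)
# Synthetic "hemodynamic" trace: a 0.2 Hz oscillation buried in Gaussian noise
signal = 0.5 * np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 0.3, t.size)

# Power spectrum over non-negative frequencies
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

peak_freq = freqs[np.argmax(power[1:]) + 1]  # skip the DC bin
print(round(peak_freq, 2))  # expect a peak near the 0.2 Hz stimulation frequency
```

A dominant spectral peak at the flicker frequency, as in the abstract's fast Fourier analysis, indicates that the periodic stimulus entrained a low-frequency hemodynamic oscillation.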