Funding: Research and Development Plan of Key Areas of Hunan Science and Technology Department (2022SK2044); Clinical Research Center for Depressive Disorder in Hunan Province (2021SK4022).
Abstract: Objective To determine the correlation between traditional Chinese medicine (TCM) inspection of spirit classification and the severity grade of depression based on facial features, offering insights for intelligent integrated TCM and Western medicine diagnosis of depression. Methods Using the Audio-Visual Emotion Challenge and Workshop (AVEC 2014) public dataset on depression, which includes 150 interview videos, the samples were classified according to the TCM inspection of spirit classification: Deshen (得神, presence of spirit), Shaoshen (少神, insufficiency of spirit), and Shenluan (神乱, confusion of spirit). Meanwhile, based on the Beck Depression Inventory-II (BDI-II) score for the severity grade of depression, the samples were divided into minimal (0-13, Q1), mild (14-19, Q2), moderate (20-28, Q3), and severe (29-63, Q4). Sixty-eight facial landmarks were extracted with a ResNet-50 network, and the feature extraction mode was standardized. Random forest and support vector machine (SVM) classifiers were used to predict the TCM inspection of spirit classification and the severity grade of depression, respectively. A Chi-square test and Apriori association rule mining were then applied to quantify and explore the relationships. Results The analysis revealed a statistically significant and moderately strong association between TCM spirit classification and the severity grade of depression, as confirmed by a Chi-square test (χ² = 14.04, P = 0.029) with a Cramér's V effect size of 0.243. Further exploration using association rule mining identified the most compelling rule: "moderate depression (Q3) → Shenluan". This rule demonstrated a support level of 5%, indicating that this specific co-occurrence was present in 5% of the cohort. Crucially, it achieved a high confidence of 86%, meaning that among patients diagnosed with Q3, 86% exhibited the Shenluan pattern according to TCM assessment. The substantial lift of 2.37 signifies that the observed likelihood of Shenluan manifesting in Q3 patients is 2.37 times higher than would be expected by chance if these states were independent, which is compelling evidence of a highly non-random association. Consequently, Shenluan emerges as a distinct and core TCM diagnostic manifestation strongly linked to Q3, forming a clinically significant phenotype within this patient subgroup.
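The support, confidence, lift, and Cramér's V statistics reported above can be made concrete with a short sketch. This is not the study's code, and every count used below is an invented placeholder; only the formulas are standard.

```python
# Illustrative sketch: association-rule metrics and Cramér's V effect size.
# All counts passed to these functions here are hypothetical placeholders.
import math

def rule_metrics(n_total, n_a, n_b, n_ab):
    """Support, confidence, and lift of rule A -> B.

    n_total: all samples; n_a: samples with A; n_b: with B; n_ab: with both."""
    support = n_ab / n_total              # P(A and B)
    confidence = n_ab / n_a               # P(B | A)
    lift = confidence / (n_b / n_total)   # P(B | A) / P(B)
    return support, confidence, lift

def cramers_v(chi2, n, n_rows, n_cols):
    """Cramér's V for an r x c contingency table."""
    return math.sqrt(chi2 / (n * min(n_rows - 1, n_cols - 1)))

# Invented counts: 100 samples, 14 in Q3, 36 Shenluan, 12 with both.
s, c, l = rule_metrics(100, 14, 36, 12)
```

With these invented counts the rule has support 0.12, confidence 12/14 ≈ 0.86, and lift ≈ 2.38, mirroring the shape (not the data) of the reported result.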
Funding: supported by the National Natural Science Foundation of China (No. 81471506, 81401219, 81501276, and 81771638); the National Key Research and Development Program of China (No. 2016YFC0905103); the Shanghai Municipal Commission of Science and Technology Program, China (No. 15411966700, 15411964000, and 17411972900); the Municipal Human Resources Development Program for Outstanding Young Talents in Medical and Health Sciences in Shanghai, China (No. 2018YQ39); the Shanghai Municipal Commission of Health and Family Planning Program, China (No. 20154Y0039 and 15GWZK0701); the Shanghai Jiao Tong University Program, China (No. YG2017MS39); and the Innovation Foundation of Translational Medicine of Shanghai Jiao Tong University School of Medicine, Shanghai SJTUSM Biobank, China (No. 15ZH4011).
Abstract: Kabuki syndrome (KS) is a rare congenital mental retardation condition characterized by facial dysmorphia, visceral and skeletal malformations, and developmental delay. Integrated phenotype- and genotype-based prioritization is critical for the diagnosis of genetic diseases. In this study, a Chinese woman presenting with characteristic facial features of KS came for pre-pregnancy consultation. We aimed to clarify the diagnosis and provide pre-pregnancy genetic counseling. Facial dysmorphology analysis and a next-generation sequencing-based multigene panel approach were used to identify candidate syndromes and causative variants, respectively. The candidate variant was verified by Sanger sequencing. We identified a novel de novo KDM6A pathogenic variant (c.3521G>A) in the woman, which was in line with the Face2Gene analysis result. A peripheral blood RNA assay showed that the variant transcript underwent nonsense-mediated mRNA decay, leading to haploinsufficiency of KDM6A. Our study provides a genetic diagnosis method for KS type 2 and identifies the first KDM6A point variant in a Chinese patient.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. U2336213 and 62176248).
Abstract: Facial action analysis (FAA) focuses on detecting facial movements, particularly facial action units (AUs). FAA tasks, including AU detection, AU intensity estimation, and pain estimation, are crucial for understanding emotions and physical conditions. A major challenge in FAA is the insufficiency of labeled data, which hinders the performance and generalization of FAA models.
Funding: supported by the National Key Research & Development Plan of China (No. 2017YFB1002804); the National Natural Science Foundation of China (Nos. 61425017, 61773379, 61332017, 61603390, and 61771472); and the Major Program for the National Social Science Fund of China (No. 13&ZD189).
Abstract: Facial emotion recognition is an essential aspect of the field of human-machine interaction. Past research on facial emotion recognition has focused on the laboratory environment. However, it faces many challenges in real-world conditions, e.g., illumination changes, large pose variations, and partial or full occlusions. Those challenges lead to different face areas with different degrees of sharpness and completeness. Inspired by this fact, we focus on the authenticity of predictions generated by different <emotion, region> pairs. For example, if only the mouth area is available and the emotion classifier predicts happiness, how should the authenticity of that prediction be judged? This problem can be converted into the contribution of different face areas to different emotions. In this paper, we divide the whole face into six areas: nose, mouth, eyes, nose-to-mouth, nose-to-eyes, and mouth-to-eyes areas. To obtain more convincing results, our experiments are conducted on three different databases: facial expression recognition+ (FER+), the real-world affective faces database (RAF-DB), and the expression in-the-wild (ExpW) dataset. Through analysis of the classification accuracy, the confusion matrix, and the class activation map (CAM), we establish convincing results. To sum up, the contributions of this paper lie in two areas: 1) we visualize the areas of human faces attended to in emotion recognition; 2) we analyze the contribution of different face areas to different emotions in real-world conditions through experimental analysis. Our findings can be combined with findings in psychology to promote the understanding of emotional expressions.
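The "contribution of different face areas to different emotions" can be sketched as a per-(region, emotion) accuracy table. This is an assumed, simplified formulation rather than the paper's implementation; the region name and all predictions below are invented.

```python
# Sketch (assumption, not the paper's code): per-region, per-emotion
# accuracy as a proxy for how much each face area contributes to
# recognizing each emotion.
from collections import defaultdict

def contribution_table(true_labels, region_preds):
    """For each region-specific classifier, the fraction of samples of
    each true emotion it predicts correctly."""
    table = {}
    for region, preds in region_preds.items():
        correct, total = defaultdict(int), defaultdict(int)
        for t, p in zip(true_labels, preds):
            total[t] += 1
            if t == p:
                correct[t] += 1
        table[region] = {emo: correct[emo] / total[emo] for emo in total}
    return table

# Toy example with invented predictions from a mouth-only classifier.
truth = ["happy", "sad", "happy", "sad"]
preds = {"mouth": ["happy", "happy", "happy", "sad"]}
```

On the toy data, the mouth area recognizes "happy" perfectly but "sad" only half the time, which is the kind of per-region pattern the paper analyzes.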
Funding: supported by the Software and System Engineering Research Center of Smart Car, Anhui Institute of Information Technology (AIIT) (No. 23kjcxpt001).
Abstract: Facial expression recognition has been an active research field over the past few decades, with typical methods including principal component analysis based on eigenfaces and independent component analysis. With the development of deep learning technology, convolutional neural networks have also played an important role in facial expression recognition. Although these methods perform well, there is still significant room for improvement. This paper uses the Xinghuo Large Model for teachers' facial emotion analysis. First, classroom recorded videos are split into frames to extract key facial expression regions of teachers; then a network model with a two-stream architecture is constructed for feature adjustment and extraction; next, the teachers' facial data are fed into the network, and emotion prediction is performed using a deep learning-based method and a Xinghuo Large Model-based method, respectively; finally, the prediction results are fused to obtain the final teachers' facial emotion analysis results.
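The final fusion step described above can be sketched as a weighted average of the two methods' class-probability outputs followed by an argmax. The weight, class names, and probability vectors below are illustrative assumptions, not values from the paper.

```python
# Sketch of late fusion (an assumption about the fusion rule, not the
# paper's exact procedure): weighted average of two probability vectors
# over the same emotion classes, then argmax.
def fuse_predict(probs_a, probs_b, classes, weight_a=0.5):
    fused = [weight_a * a + (1.0 - weight_a) * b
             for a, b in zip(probs_a, probs_b)]
    best = max(range(len(fused)), key=fused.__getitem__)
    return classes[best], fused

# Invented outputs: a deep model vs. a large-model-based predictor.
label, fused = fuse_predict([0.7, 0.2, 0.1], [0.4, 0.5, 0.1],
                            ["neutral", "happy", "surprised"])
```

Equal weighting is the simplest choice; in practice the weight would be tuned on validation data.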
Funding: supported in part by the Natural Science Foundation of Heilongjiang Province (No. LH2020F049).
Abstract: Facial measurement and analysis is an important part of anthropometry, which provides data support for the design of facial protective equipment. To overcome the inconvenience, low efficiency, and poor accuracy of measuring and analyzing facial size parameters by manual contact, a method for the automatic measurement and analysis of face size parameters is proposed. First, an automatic face-landmarking method based on deep learning improves the efficiency of measuring facial parameters. Then, facial parameters, including nose middle width, nose width, face width, and eye width, can be measured. Finally, the dataset of face size parameters is classified and counted based on fuzzy clustering analysis. Sixty-five groups of Han youth facial data were collected for measurement and analysis; compared with existing algorithms, the facial morphology analysis system presented in this paper has higher measurement accuracy.
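The fuzzy clustering step can be illustrated with a minimal fuzzy c-means routine on one-dimensional size data. This is a textbook sketch under assumed defaults (c = 3 clusters, fuzzifier m = 2), not the paper's system.

```python
# Minimal fuzzy c-means sketch for 1-D facial-size data (illustrative;
# cluster count c and fuzzifier m are assumed defaults, and the data
# passed in would be measured widths in millimeters).
def fuzzy_c_means(xs, c=3, m=2.0, iters=50):
    xs = sorted(xs)
    # deterministic init: spread the initial centers across the sorted data
    centers = [xs[(len(xs) - 1) * i // (c - 1)] for i in range(c)]
    u = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - ck) or 1e-12 for ck in centers]
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # center update: weighted mean of the data with weights u^m
        centers = [sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, u
```

On well-separated data the centers converge near the per-cluster means, and each membership row sums to 1.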
Abstract: Background: Techniques of facial age progression help in predicting the evolution of facial features over a period of time while maintaining individual identity. Aims and Objectives: This study aims to investigate a 3-year age progression in individuals across the 1-10, 11-20, and 21-30 years age groups, utilizing pixel-based analysis to identify changes in ten facial regions: forehead, periorbital (left and right), perinasal, maxillary (left and right), maxillary angle (left and right), and zygomatic arch (left and right). Materials and Methods: Digital anthropometric measurements were calculated from images selected at 3-year intervals. Pixel-based analyses were performed on each region, and interobserver error analysis was conducted to assess measurement reliability. Results: Significant region-specific changes (P < 0.0001 in most regions) were observed, with the exception of the forehead, which remained stable in certain age groups. The periorbital and maxillary regions exhibited the most pronounced transformations. Distinctive gender- and age-dependent patterns were also noted, and interobserver error analysis confirmed the robustness of the pixel-based measurements despite minor manual interpretation variations. Conclusion: The validation of these findings demonstrates the efficiency of pixel-based analysis for determining facial age progression. The observed 3-year interval changes underscore the importance of region-specific, standardized measurement procedures and provide a strong foundation for refining digital age progression methodologies in anthropological and forensic applications.
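A hedged sketch of what a pixel-based region comparison might look like: a mean absolute difference score over the same rectangular region of two aligned grayscale images taken years apart. The region coordinates are assumed to come from a prior region-marking step, and this is an illustrative metric, not the study's exact procedure.

```python
# Illustrative pixel-based change score (assumption, not the study's
# exact metric): mean absolute grayscale difference over one facial
# region of two aligned, equally sized images.
def region_change(img_a, img_b, top, left, height, width):
    """img_a, img_b: 2-D lists of pixel intensities; the region is the
    rectangle [top, top+height) x [left, left+width)."""
    total = 0
    for r in range(top, top + height):
        for c in range(left, left + width):
            total += abs(img_a[r][c] - img_b[r][c])
    return total / (height * width)

# Toy 2x4 images: only the right half (a stand-in region) has changed.
before = [[10, 10, 10, 10], [10, 10, 10, 10]]
after = [[10, 10, 30, 30], [10, 10, 30, 30]]
```

Computed per region and per interval, such scores give the kind of region-specific change profile the study reports.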