Journal Articles
16 articles found
Ensemble of Deep Learning with Crested Porcupine Optimizer Based Autism Spectrum Disorder Detection Using Facial Images
1
Authors: Jagadesh Balasubramani, Surendran Rajendran, Mohammad Zakariah, Abeer Alnuaim. Computers, Materials & Continua, 2025, Issue 5, pp. 2793-2807 (15 pages)
Autism spectrum disorder (ASD) is a multifaceted neurodevelopmental condition that manifests in several ways. Nearly all autistic children remain undiagnosed before the age of three. Developmental problems affecting facial features are often associated with underlying brain disorders, and the facial development of newborns with ASD is quite different from that of typically developing children. Early recognition is important to help families and parents move past superstition and denial, and distinguishing facial features from those of typically developing children is an evident way to identify children with ASD. At present, artificial intelligence (AI) contributes significantly to the emerging computer-aided diagnosis (CAD) of autism and to evolving interactive methods that support the treatment and reintegration of autistic patients. This study introduces an ensemble of deep learning models for autism spectrum disorder detection in facial images (EDLM-ASDDFI). The overarching goal of the EDLM-ASDDFI model is to distinguish facial images of individuals with ASD from those of normal controls. In the EDLM-ASDDFI method, data pre-processing is first performed with Gabor filtering (GF). The technique then applies the MobileNetV2 model to learn complex features from the pre-processed data. For ASD detection, the method uses an ensemble classification procedure that encompasses long short-term memory (LSTM), a deep belief network (DBN), and a hybrid kernel extreme learning machine (HKELM). Finally, hyperparameter selection for the three deep learning (DL) models is carried out with the crested porcupine optimizer (CPO). An extensive experiment was conducted to demonstrate the improved ASD detection performance of the EDLM-ASDDFI method. The simulation outcomes indicated that the EDLM-ASDDFI technique outperformed other existing models on numerous performance measures.
Keywords: autism spectrum disorder; ensemble learning; crested porcupine optimizer; facial images; computer-aided diagnosis
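For readers who want a concrete picture of the front end of the EDLM-ASDDFI pipeline summarised above, the following is a minimal Python sketch of Gabor-filter pre-processing followed by MobileNetV2 feature extraction. It assumes OpenCV and TensorFlow/Keras; the kernel parameters, image size, and helper names are illustrative, and the ensemble stage (LSTM, DBN, HKELM) and the crested porcupine optimizer are not reproduced here.

```python
# Minimal sketch: Gabor pre-processing + frozen MobileNetV2 feature extractor.
# Parameter values are illustrative assumptions, not the paper's settings.
import cv2
import numpy as np
import tensorflow as tf

def gabor_preprocess(img_bgr, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    """Apply a small bank of Gabor filters and average the responses."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    filtered = np.mean(responses, axis=0)
    filtered = cv2.normalize(filtered, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(filtered, cv2.COLOR_GRAY2BGR)

# Frozen MobileNetV2 backbone producing a 1280-d feature vector per face image.
backbone = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                             pooling="avg", input_shape=(224, 224, 3))

def extract_features(img_bgr):
    x = cv2.resize(gabor_preprocess(img_bgr), (224, 224)).astype(np.float32)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis])
    return backbone.predict(x, verbose=0)   # shape (1, 1280), fed to the ensemble classifiers
```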
Facial Image-Based Autism Detection:A Comparative Study of Deep Neural Network Classifiers
2
Authors: Tayyaba Farhat, Sheeraz Akram, Hatoon S. AlSagri, Zulfiqar Ali, Awais Ahmad, Arfan Jaffar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 105-126 (22 pages)
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources such as Pakistan. This study conducts an extensive comparative analysis of machine learning classifiers for ASD detection using facial images, to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and its automated image processing capabilities are utilized. The findings establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16 with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, further testing was conducted on a dataset sourced from autism centers in Pakistan, resulting in an accuracy of 85%, reaffirming the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance and highlights the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Keywords: autism; Autism Spectrum Disorder (ASD); disease segmentation; features optimization; deep learning models; facial images; classification
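The best-performing configuration reported above (VGG16, batch size 2, Adam, 100 epochs, 5-fold cross-validation) can be approximated with standard Keras and scikit-learn tools. A minimal sketch follows; `images` and `labels` are assumed pre-loaded NumPy arrays, and the head layers are illustrative rather than the paper's exact architecture.

```python
# Minimal sketch: VGG16 binary classifier with 5-fold cross-validation.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

def build_vgg16_classifier():
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
    x = tf.keras.layers.Dense(128, activation="relu")(base.output)   # illustrative head
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)          # ASD vs. typically developing
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def cross_validate(images, labels, epochs=100, batch_size=2):
    scores = []
    for train_idx, val_idx in StratifiedKFold(n_splits=5, shuffle=True).split(images, labels):
        model = build_vgg16_classifier()
        model.fit(images[train_idx], labels[train_idx],
                  epochs=epochs, batch_size=batch_size, verbose=0)
        _, acc = model.evaluate(images[val_idx], labels[val_idx], verbose=0)
        scores.append(acc)
    return np.mean(scores)   # mean validation accuracy over the 5 folds
```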
Artificially Generated Facial Images for Gender Classification Using Deep Learning
3
Authors: Valliappan Raman, Khaled ELKarazle, Patrick Then. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 2, pp. 1341-1355 (15 pages)
Given the current expansion of the computer vision field, applications that rely on extracting biometric information such as facial gender for access control, security, or marketing purposes are becoming more common. A typical gender classifier requires many training samples to learn as many distinguishable features as possible. However, collecting facial images from individuals is usually a sensitive task and might violate an individual's privacy or a specific data privacy law. To bridge the gap between privacy and the need for many facial images for deep learning training, an artificially generated dataset of facial images is proposed. We acquire a pre-trained Style-Generative Adversarial Networks (StyleGAN) generator and use it to create a dataset of facial images. We label the images according to the observed gender using a set of criteria that differentiate male and female facial features. We use this manually labelled dataset to train three facial gender classifiers: a custom-designed network and two pre-trained networks based on the Visual Geometry Group designs, VGG16 and VGG19. We cross-validate these three classifiers on two separate datasets containing labelled images of actual subjects, using the UTKFace and Kaggle gender datasets for testing. Our experimental results suggest that training on artificial images produces performance comparable to existing state-of-the-art methods that use actual images of individuals. The average classification accuracy of each classifier is between 94% and 95%, similar to existing methods.
Keywords: facial recognition; data collection; facial images; generative adversarial networks; facial gender estimation
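A rough outline of the data-generation step described above is sketched below. `stylegan_generator` and `annotate_fn` are hypothetical placeholders: the first stands in for a pre-trained StyleGAN generator mapping latent vectors to face images, the second for the manual gender-labelling criteria; neither reflects a specific checkpoint or API from the paper.

```python
# Illustrative sketch of building a synthetic, manually labelled gender dataset.
import numpy as np

def sample_synthetic_faces(stylegan_generator, n_images, latent_dim=512, seed=0):
    """Draw latent codes and synthesise face images with a pre-trained generator (assumed callable)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_images, latent_dim)).astype(np.float32)
    return stylegan_generator(z)          # assumed to return an array of RGB face images

def label_by_observed_gender(images, annotate_fn):
    """Manual labelling against a fixed set of facial-feature criteria (hypothetical helper)."""
    return np.array([annotate_fn(img) for img in images])   # e.g., 0 = female, 1 = male

# The resulting (images, labels) pairs would then train the custom CNN, VGG16 and VGG19
# classifiers, with UTKFace and the Kaggle gender dataset used for cross-validation and testing.
```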
A Deep Learning-Based Ocular Structure Segmentation for Assisted Myasthenia Gravis Diagnosis from Facial Images
4
Authors: Linna Zhao, Jianqiang Li, Xi Xu, Chujie Zhu, Wenxiu Cheng, Suqin Liu, Mingming Zhao, Lei Zhang, Jing Zhang, Jian Yin, Jijiang Yang. Tsinghua Science and Technology, 2025, Issue 6, pp. 2592-2605 (14 pages)
Myasthenia Gravis (MG) is an autoimmune neuromuscular disease. Given that extraocular muscle manifestations are the initial and primary symptoms in most patients, ocular muscle assessment is regarded as necessary for early screening. To overcome the limitations of the manual clinical method, an intuitive idea is to collect data via imaging devices and then analyze it with Deep Learning (DL) techniques, particularly image segmentation approaches, to enable automatic MG evaluation. Unfortunately, clinical applications of these techniques in this field have not been thoroughly explored. To bridge this gap, our study prospectively establishes a new DL-based system to support the diagnosis of MG, with a complete workflow including facial data acquisition, eye region localization, and ocular structure segmentation. Experimental results demonstrate that the proposed system achieves superior segmentation performance on ocular structures and markedly improves the diagnostic accuracy of doctors. In the future, this work can offer promising MG monitoring tools for healthcare professionals, patients, and regions with limited medical resources.
Keywords: ocular structure segmentation; Deep Learning (DL); Myasthenia Gravis (MG) diagnosis; facial images
A survey on facial image deblurring
5
Authors: Bingnan Wang, Fanjiang Xu, Quan Zheng. Computational Visual Media (SCIE, EI, CSCD), 2024, Issue 1, pp. 3-25 (23 pages)
When a facial image is blurred, high-level vision tasks such as face recognition are significantly affected. The purpose of facial image deblurring is to recover a clear image from a blurry input, which can improve recognition accuracy, among other benefits. However, general deblurring methods do not perform well on facial images, so face deblurring methods have been proposed that improve performance by adding semantic or structural information as specific priors according to the characteristics of facial images. In this paper, we survey and summarize recently published methods for facial image deblurring, most of which are based on deep learning. First, we provide a brief introduction to the modeling of image blurring. Next, we divide face deblurring methods into two categories: model-based methods and deep learning-based methods. Furthermore, we summarize the datasets, loss functions, and performance evaluation metrics commonly used in training neural networks. We report the performance of classical methods on these datasets and metrics and briefly discuss the differences between model-based and learning-based methods. Finally, we discuss current challenges and possible future research directions.
Keywords: facial image deblurring; model-based; deep learning-based; semantic or structural prior
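The survey above begins with the modeling of image blurring. For orientation, the degradation model commonly assumed in deblurring work (the survey's own notation may differ) is:

```latex
% Common uniform-blur degradation model assumed in many deblurring papers:
y = k \otimes x + n
% y: observed blurry face image, x: latent sharp image,
% k: blur kernel (point spread function), \otimes: 2D convolution, n: additive noise.
```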
Comprehensive Review and Analysis on Facial Emotion Recognition: Performance Insights into Deep and Traditional Learning with Current Updates and Challenges
6
Authors: Amjad Rehman, Muhammad Mujahid, Alex Elyassih, Bayan AlGhofaily, Saeed Ali Omer Bahaj. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 41-72 (32 pages)
In computer vision and artificial intelligence, automatic facial expression-based emotion identification has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance films, depression therapy, patient monitoring, and anxiety assessment, have drawn attention to its significant academic and commercial importance. This study emphasizes research that has employed only facial images for facial expression recognition (FER), because facial expressions are a basic way that people communicate meaning to each other. The immense success of deep learning has led to a growing use of its many architectures to enhance efficiency. This review covers machine learning, deep learning, and hybrid methods, including their use of preprocessing, augmentation techniques, and feature extraction for the temporal properties of successive frames of data. The following section gives a brief summary of publicly available assessment criteria and then compares them with benchmark results, the most trustworthy way to assess FER-related research topics statistically. This review offers a brief synopsis of the subject matter that may benefit novices in the field of FER as well as seasoned scholars seeking fruitful avenues for further investigation, conveying fundamental knowledge and a comprehensive understanding of the most recent state-of-the-art research.
Keywords: face emotion recognition; deep learning; hybrid learning; CK+; facial images; machine learning; technological development
Analysis of Tongue and Face Image Features of Anemic Women and Construction of Risk-Screening Model
7
Authors: Hongyuan Fu, Yi Chun, Yahan Zhang, Yu Wang, Yulin Shi, Tao Jiang, Xiaojuan Hu, Liping Tu, Yongzhi Li, Jiatuo Xu. Biomedical and Environmental Sciences, 2025, Issue 8, pp. 935-951 (17 pages)
Objective: To identify the key features of facial and tongue images associated with anemia in female populations, establish anemia risk-screening models, and evaluate their performance. Methods: A total of 533 female participants (anemic and healthy) were recruited from Shuguang Hospital. Facial and tongue images were collected using the TFDA-1 tongue and face diagnosis instrument. Color and texture features from various parts of the facial and tongue images were extracted using the Face Diagnosis Analysis System (FDAS) and the Tongue Diagnosis Analysis System version 2.0 (TDAS v2.0). Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for feature selection. Ten machine learning models and one deep learning model (ResNet50V2 + Conv1D) were developed and evaluated. Results: Anemic women showed lower a-values and higher L- and b-values across all age groups. Texture feature analysis showed that anemic women aged 30-39 had higher angular second moment (ASM) and lower entropy (ENT) values in facial images, while those aged 40-49 had lower contrast (CON), ENT, and MEAN values in tongue images but higher ASM. Anemic women exhibited age-related trends similar to healthy women, with decreasing L-values and increasing a-, b-, and ASM-values. LASSO identified 19 key features out of 62. Among the classifiers, the Artificial Neural Network (ANN) model achieved the best performance (area under the curve (AUC): 0.849, accuracy: 0.781), and the ResNet50V2 model achieved comparable results (AUC: 0.846, accuracy: 0.818). Conclusion: Differences in facial and tongue images suggest that color and texture features can serve as potential TCM phenotypes and auxiliary diagnostic indicators for female anemia.
Keywords: female anemia; facial image; tongue image; machine learning; deep learning
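A minimal scikit-learn sketch of the screening pipeline summarised above is given below: LASSO selects a sparse subset of the colour/texture features, and a small artificial neural network is trained on the selected columns. The hyperparameters and layer sizes are illustrative assumptions, not the values reported in the paper.

```python
# Minimal sketch: LASSO feature selection followed by an ANN classifier.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def select_features(X, y):
    """Return indices of features with non-zero LASSO coefficients (X: n_samples x 62 features)."""
    lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
    coef = lasso.named_steps["lassocv"].coef_
    return np.flatnonzero(coef)

def train_ann(X, y, selected):
    """Train a small MLP on the LASSO-selected columns; y: 1 = anemic, 0 = healthy."""
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000))
    return clf.fit(X[:, selected], y)
```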
Genome-wide variants of Eurasian facial shape differentiation and a prospective model of DNA based face prediction (cited 11 times)
8
Authors: Lu Qiao, Yajun Yang, Pengcheng Fu, Sile Hu, Hang Zhou, Shouneng Peng, Jingze Tan, Yan Lu, Haiyi Lou, Dongsheng Lu, Sijie Wu, Jing Guo, Li Jin, Yaqun Guan, Sijia Wang, Shuhua Xu, Kun Tang. Journal of Genetics and Genomics (SCIE, CAS, CSCD), 2018, Issue 8, pp. 419-432 (14 pages)
It is a long-standing question which genes define the characteristic facial features of different ethnic groups. In this study, we use the Uyghurs, an anciently admixed population, to investigate the genetic basis of why Europeans and Han Chinese look different. Facial traits were analyzed based on high-density 3D facial images; numerous biometric spaces were examined for divergent facial features between Europeans and Han Chinese, ranging from inter-landmark distances to dense shape geometrics. Genome-wide association studies (GWAS) were conducted on a discovery panel of Uyghurs. Six significant loci were identified; four of them, rs1868752, rs118078182, and rs60159418 (at or near UBASH3B, COL23A1, and PCDH7) and rs17868256, were replicated in independent cohorts of Uyghurs or Southern Han Chinese. A prospective model was also developed to predict 3D faces based on top GWAS signals and was tested in hypothetical forensic scenarios.
Keywords: genome-wide association study; dense 3D facial image; ancestry-divergent phenotypes; face prediction; forensic scenario
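As a toy illustration of the per-variant test underlying a GWAS of a quantitative facial trait, the sketch below regresses a phenotype on each SNP genotype (coded 0/1/2) with ancestry covariates and collects the genotype p-values. Real pipelines apply many additional corrections; variable names here are illustrative.

```python
# Toy per-SNP association test for a quantitative facial-shape phenotype.
import numpy as np
import statsmodels.api as sm

def snp_association(phenotype, genotypes, covariates):
    """phenotype: (n,), genotypes: (n, n_snps) coded 0/1/2, covariates: (n, k) e.g. ancestry PCs."""
    pvals = []
    for g in genotypes.T:                          # one column per SNP
        X = sm.add_constant(np.column_stack([g, covariates]))
        fit = sm.OLS(phenotype, X).fit()
        pvals.append(fit.pvalues[1])               # p-value of the genotype coefficient
    return np.array(pvals)                         # compare against a genome-wide threshold
```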
An Automated and Real-time Approach of Depression Detection from Facial Micro-expressions (cited 4 times)
9
Authors: Ghulam Gilanie, Mahmood ul Hassan, Mutyyba Asghar, Ali Mustafa Qamar, Hafeez Ullah, Rehan Ullah Khan, Nida Aslam, Irfan Ullah Khan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 2513-2528 (16 pages)
Depression is a mental disorder that may cause physical illness or lead to death. It has a strong impact on a person's socio-economic life; therefore, effective and timely detection is essential. Besides speech and gait, facial expressions carry valuable clues to depression. This study proposes a depression detection system based on facial expression analysis. Facial features are used for depression detection with a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN). We extracted micro-expressions using the Facial Action Coding System (FACS) as Action Units (AUs) correlated with the sad, disgust, and contempt features for depression detection. A CNN-based model is also proposed to automatically classify depressed subjects from images or videos in real time. Experiments were performed on a dataset obtained from Bahawal Victoria Hospital, Bahawalpur, Pakistan, labeled according to the Patient Health Questionnaire depression scale (PHQ-8) to infer the mental condition of each patient. The experiments revealed 99.9% validation accuracy for the proposed CNN model, while the extracted features achieved 100% accuracy with SVM. Moreover, the results demonstrated the superiority of the reported approach over state-of-the-art methods.
Keywords: depression detection; facial micro-expressions; facial landmarked images
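The feature-based branch described above (FACS Action Units fed to an SVM) can be approximated with scikit-learn as sketched below. AU extraction itself is assumed to have already produced the matrix `au_features`; the kernel and split settings are illustrative.

```python
# Minimal sketch: SVM on FACS Action Unit features for depression detection.
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_depression_svm(au_features, labels):
    """au_features: (n_samples, n_AUs) AU intensities; labels: 1 = depressed (PHQ-8 based), 0 = control."""
    X_tr, X_te, y_tr, y_te = train_test_split(au_features, labels,
                                              test_size=0.2, stratify=labels)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # fitted model and held-out accuracy
```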
A Robust Method of Bipolar Mental Illness Detection from Facial Micro Expressions Using Machine Learning Methods
10
Authors: Ghulam Gilanie, Sana Cheema, Akkasha Latif, Anum Saher, Muhammad Ahsan, Hafeez Ullah, Diya Oommen. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 57-71 (15 pages)
Bipolar disorder is a serious mental condition that may be triggered by stress or emotional upset. It affects a large percentage of people globally, who fluctuate between depression and mania, or vice versa. A pleasant or unpleasant mood is more than a reflection of a state of mind. Analysis through physical examination is normally difficult because of the large patient-to-psychiatrist ratio, so automated procedures are the best option to diagnose bipolar disorder and verify its severity. In this research work, facial micro-expressions are used for bipolar disorder detection with the proposed Convolutional Neural Network (CNN)-based model. The Facial Action Coding System (FACS) is used to extract micro-expressions, called Action Units (AUs), connected with sad, happy, and angry emotions. Experiments were conducted on a dataset collected from Bahawal Victoria Hospital, Bahawalpur, Pakistan, using the Patient Health Questionnaire-15 (PHQ-15) to infer each patient's mental state. The experimental results showed a validation accuracy of 98.99% for the proposed CNN model, while classification on the extracted features using Support Vector Machines (SVM), K-Nearest Neighbour (KNN), and Decision Trees (DT) achieved 99.9%, 98.7%, and 98.9% accuracy, respectively. Overall, the outcomes demonstrated the stated method's superiority over current best practices.
Keywords: bipolar mental illness detection; facial micro-expressions; facial landmarked images
An adaptive dual-domain feature representation method for enhanced deep forgery detection
11
Authors: Ming Li, Yan Qin, Heng Zhang, Zhiguo Shi. Journal of Automation and Intelligence, 2025, Issue 4, pp. 273-281 (9 pages)
Deep forgery detection technologies are crucial for image and video recognition tasks, and their performance relies heavily on the features extracted from both real and fake images. However, most existing methods focus primarily on spatial-domain features, which limits their accuracy. To address this limitation, we propose an adaptive dual-domain feature representation method for enhanced deep forgery detection. Specifically, an adaptive region dynamic convolution module is established to efficiently extract facial features from the spatial domain. We then introduce an adaptive frequency dynamic filter to capture effective frequency-domain features. By fusing spatial- and frequency-domain features, our approach significantly improves the accuracy of classifying real and fake facial images. Experimental results on three real-world datasets validate the effectiveness of the dual-domain feature representation method, which substantially improves classification precision.
Keywords: dynamic convolution module; dynamic filter; feature representation; facial images; deep forgery detection
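As a simplified illustration of the dual-domain idea summarised above, the sketch below concatenates spatial features from a stock CNN backbone with low-frequency log-magnitude FFT features. The paper's adaptive region dynamic convolution and adaptive frequency dynamic filter are replaced by fixed stand-ins (ResNet50 and a plain FFT crop) purely for illustration.

```python
# Simplified dual-domain feature vector: CNN spatial features + FFT frequency features.
import numpy as np
import tensorflow as tf

spatial_backbone = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                                  pooling="avg", input_shape=(224, 224, 3))

def frequency_features(gray_img, keep=32):
    """Low-frequency log-magnitude coefficients of the centred 2D FFT of a grayscale face crop."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_img.astype(np.float32)))
    log_mag = np.log1p(np.abs(spectrum))
    h, w = log_mag.shape
    center = log_mag[h//2 - keep//2:h//2 + keep//2, w//2 - keep//2:w//2 + keep//2]
    return center.flatten()

def dual_domain_vector(rgb_img, gray_img):
    """rgb_img: 224x224x3 face crop; gray_img: grayscale version of the same crop."""
    x = tf.keras.applications.resnet50.preprocess_input(
        rgb_img[np.newaxis].astype(np.float32))
    spatial = spatial_backbone.predict(x, verbose=0).flatten()
    return np.concatenate([spatial, frequency_features(gray_img)])
# The fused vector would then feed a real/fake classifier.
```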
A Method for Detecting Non-Mask Wearers Based on Regression Analysis
12
Authors: Dokyung Hwang, Hyeonmin Ro, Naejoung Kwak, Jinsang Hwang, Dongju Kim. Computers, Materials & Continua (SCIE, EI), 2022, Issue 9, pp. 4411-4431 (21 pages)
A practical and universal method of mask-wearing detection is proposed to help prevent viral respiratory infections. The proposed method quickly and accurately detects mask and facial regions using a well-trained You Only Look Once (YOLO) detector and then uses the image coordinates of the detected bounding boxes (bboxes). The training data are collected under various circumstances, such as lighting disturbances, distances, time variations, and different climate conditions, and contain various mask types so that the model generalizes. Because accurate detection of the facial and mask regions is essential for determining mask-wearing status, we created our own dataset by capturing images ourselves. Furthermore, the Convolutional Neural Network (CNN) model is trained on both our own dataset and an open dataset to detect under heavy foot traffic (indoors). To make the model robust and reliable in various environments and situations, we collected sample data at different distances. Through the experiments, we found that there is a particular gradient associated with each mask-wearing status. The proposed method searches for the point where the distance between the gradient for each state and the coordinate information of the detected object is minimal, and then classifies the mask-wearing status of the detected object. We define and classify three mask-wearing states according to the mask's position (mask worn, mask worn around the chin, and no mask). The gradient for each mask-wearing status is analyzed through linear regression, based on the coordinate information of the mask-wearing status and sample data collected in a simulated environment that considers the distances between objects and the camera in the world coordinate system. The experiments also showed that linear regression analysis is more suitable than logistic regression analysis for classifying mask wearers in general-purpose environments, and the proposed method classifies far more concisely than the alternatives.
Keywords: automatic quarantine process; detection of improper mask wearers; facial image coordinates; convolutional neural network
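A much-simplified sketch of the regression-based classification idea described above: for each mask-wearing state, a line is fitted to a bounding-box geometry feature across distances, and a new detection is assigned to the state whose fitted line it lies closest to. The feature used here (relative vertical offset of the mask box versus face-box height) is an illustrative assumption, not the paper's exact coordinate formulation.

```python
# Simplified per-state linear regression on bounding-box geometry for mask-status classification.
import numpy as np

def fit_state_lines(samples):
    """samples: {state: (face_heights, relative_offsets)} -> {state: (slope, intercept)}."""
    return {state: tuple(np.polyfit(h, off, deg=1)) for state, (h, off) in samples.items()}

def classify_state(face_height, relative_offset, state_lines):
    """Assign the detection to the state whose fitted line has the smallest residual."""
    residuals = {state: abs((slope * face_height + intercept) - relative_offset)
                 for state, (slope, intercept) in state_lines.items()}
    return min(residuals, key=residuals.get)   # e.g., "mask", "chin", or "no_mask"
```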
A Fiber Tractography Study of Social-Emotional Related Fiber Tracts in Children and Adolescents with Autism Spectrum Disorder (cited 5 times)
13
Authors: Yun Li, Hui Fang, Wenming Zheng, Lu Qian, Yunhua Xiao, Qiaorong Wu, Chen Chang, Chaoyong Xiao, Kangkang Chu, Xiaoyan Ke. Neuroscience Bulletin (SCIE, CAS, CSCD), 2017, Issue 6, pp. 722-730 (9 pages)
The symptoms of autism spectrum disorder (ASD) have been hypothesized to be caused by changes in brain connectivity. From the clinical perspective, the "disconnectivity" hypothesis has been used to explain characteristic impairments in "socio-emotional" function. In this study we therefore compared facial emotional recognition (FER) and the integrity of social-emotional-related white-matter tracts between children and adolescents with high-functioning ASD (HFA) and their typically developing (TD) counterparts. The correlation between the two factors was explored to determine whether impairment of the white-matter tracts is the neural basis of social-emotional disorders. Compared with the TD group, FER was significantly impaired and the fractional anisotropy value of the right cingulate fasciculus was increased in the HFA group (P < 0.01). In conclusion, FER was impaired in children and adolescents with HFA, and the microstructure of the cingulate fasciculus showed abnormalities.
Keywords: autism spectrum disorder; facial emotional recognition; social-emotional related white matter fiber tracts; diffusion tensor imaging; tractography
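The group comparison reported above (higher fractional anisotropy of the right cingulate fasciculus in the HFA group, P < 0.01) boils down to a per-tract two-group test. A minimal sketch using Welch's t-test is shown below; the study's exact statistical procedure may differ.

```python
# Minimal sketch: two-group comparison of per-subject fractional anisotropy (FA) values.
import numpy as np
from scipy import stats

def compare_fa(fa_hfa, fa_td):
    """fa_hfa, fa_td: 1D arrays of FA values for one tract (e.g., right cingulate fasciculus)."""
    t, p = stats.ttest_ind(fa_hfa, fa_td, equal_var=False)   # Welch's t-test
    return {"mean_HFA": np.mean(fa_hfa), "mean_TD": np.mean(fa_td), "t": t, "p": p}
```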
Affective rating ranking based on face images in arousal-valence dimensional space
14
Authors: Guo-peng XU, Hai-tang LU, Fei-fei ZHANG, Qi-rong MAO. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2018, Issue 6, pp. 783-795 (13 pages)
In dimensional affect recognition, the machine learning methods used to model and predict affect are mostly classification and regression. However, the annotation in the dimensional affect space usually takes the form of a continuous real value, which has an ordinal property, and the aforementioned methods do not take advantage of this important information. Therefore, we propose an affective rating ranking framework for affect recognition based on face images in the valence and arousal dimensional space. Our approach appropriately uses the ordinal information among affective ratings, which are generated by discretizing continuous annotations. Specifically, we first train a series of basic cost-sensitive binary classifiers, each of which uses all samples relabeled according to the comparison between the corresponding ratings and the given rank of that binary classifier. We obtain the final affective ratings by aggregating the outputs of the binary classifiers. Comparing the experimental results with the baseline and with deep learning-based classification and regression methods on the benchmark database of the AVEC 2015 Challenge and a selected subset of the SEMAINE database, we find that our ordinal ranking method is effective in both the arousal and valence dimensions.
Keywords: ordinal ranking; dimensional affect recognition; valence; arousal; facial image processing
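The ranking scheme summarised above reduces ordinal rating prediction to a series of binary problems. The sketch below trains one classifier per rating threshold and aggregates the positive decisions into a final rating; the crude distance-based sample weighting only gestures at the paper's cost-sensitive formulation and is an assumption, not its exact cost design.

```python
# Minimal sketch: ordinal rating prediction via threshold-wise binary classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal_classifiers(X, ratings):
    """ratings: integer ranks 0..K-1 obtained by discretising continuous valence/arousal annotations."""
    K = int(ratings.max()) + 1
    classifiers = []
    for k in range(K - 1):
        y_bin = (ratings > k).astype(int)                    # relabel w.r.t. threshold k
        w = np.abs(ratings - (k + 0.5))                      # crude cost: distance to the threshold
        clf = LogisticRegression(max_iter=1000).fit(X, y_bin, sample_weight=w)
        classifiers.append(clf)
    return classifiers

def predict_rating(X, classifiers):
    votes = np.column_stack([clf.predict(X) for clf in classifiers])
    return votes.sum(axis=1)                                 # aggregated ordinal rating per sample
```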
De Novo Dissecting the Three-Dimensional Facial Morphology of 2379 Han Chinese Individuals (cited 4 times)
15
Authors: Hui Qiao, Jingze Tan, Shaoqing Wen, Menghan Zhang, Shuhua Xu, Li Jin. Phenomics, 2024, Issue 1, pp. 1-12 (12 pages)
Phenotypic diversity, especially that of facial morphology, has not been fully investigated in the Han Chinese, the largest ethnic group in the world. In this study, we systematically analyzed a total of 14,838 facial traits representing 15 categories in 2379 Han Chinese individuals, using both a large-scale three-dimensional (3D) manual landmarking database and computer-aided facial segmented phenotyping. Our results illustrate that both homogeneous and heterogeneous facial morphological traits exist among Han Chinese populations across three geographical regions: Zhengzhou, Taizhou, and Nanning. We identified 1560 shared features from the extracted phenotypes, which characterize well the basic facial morphology of the Han Chinese. In particular, heterogeneous phenotypes showing population structure corresponded to geographical subpopulations. The greatest facial variation among these geographical populations was in the angle of the glabella, left subalare, and right cheilion (p = 3.4 × 10^(-161)). Interestingly, we found that Han Chinese populations can be classified into northern Han, central Han, and southern Han at the phenotypic level, and that the facial morphological variation pattern of the central Han lies between the typical differentiation of the northern and southern Han. This result is highly consistent with the results revealed by genetic data. These findings provide new insights into the analysis of multidimensional phenotypes as well as a valuable resource for further facial phenotype-genotype association studies in Han Chinese and East Asian populations.
Keywords: phenotypes; three-dimensional facial imaging; facial morphology; Han Chinese
Multi-Information Fusion Method for Traditional Chinese Medicine Constitution Identification in the Elderly
16
Authors: Feng-Wei Yang, Zhu-Qing Li, Yan Tang, Yi Zhao, Dai-Qing Tan, En-Ai Lin, Zhe Liu, Ai-Qing Han, Ji Wang. World Journal of Traditional Chinese Medicine, 2025, Issue 3, pp. 405-415 (11 pages)
Objective: This study addresses the limitations of existing traditional Chinese medicine (TCM) constitution identification techniques for the elderly by proposing an intelligent identification method aimed at enhancing the accuracy, standardization, and formalization of the identification process. Materials and Methods: Leveraging tongue, face, and pulse image data, this study introduces four image classification models: EfficientNetV2, MobileViT, Vision Transformer, and Swin Transformer. A comparative experimental approach was employed to establish a baseline model. A multi-information fusion model was then constructed on this foundation, extracting integrated features from the diverse data to further improve identification accuracy. Results: The multi-information fusion model developed in this study achieved an accuracy of 71.32%, effectively enhancing the accuracy of TCM constitution identification for the elderly. Conclusions: By integrating tongue, facial, and pulse data, the multi-information fusion model considerably enhances the accuracy of TCM constitution identification. It effectively addresses certain limitations inherent in existing TCM constitution identification techniques and offers a novel and effective strategy for this domain.
Keywords: deep learning; facial image; multi-information fusion; pulse image; tongue image; traditional Chinese medicine constitution
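A schematic Keras sketch of the multi-information fusion idea described above: separate backbones encode the tongue, face, and pulse images, their embeddings are concatenated, and a shared head predicts the constitution class. The choice of EfficientNetV2-S for every branch, the layer sizes, and the assumed nine constitution classes are illustrative, not the paper's exact configuration.

```python
# Schematic three-branch fusion model for tongue, face, and pulse images.
import tensorflow as tf

NUM_CONSTITUTIONS = 9   # assumed number of TCM constitution classes

def branch(name):
    """One input branch: an image input plus an EfficientNetV2-S embedding (stand-in backbone)."""
    inp = tf.keras.Input((224, 224, 3), name=name)
    base = tf.keras.applications.EfficientNetV2S(include_top=False, weights="imagenet",
                                                 pooling="avg")
    return inp, base(inp)

def build_fusion_model():
    (t_in, t_emb), (f_in, f_emb), (p_in, p_emb) = branch("tongue"), branch("face"), branch("pulse")
    fused = tf.keras.layers.Concatenate()([t_emb, f_emb, p_emb])   # integrated feature vector
    x = tf.keras.layers.Dense(256, activation="relu")(fused)
    out = tf.keras.layers.Dense(NUM_CONSTITUTIONS, activation="softmax")(x)
    model = tf.keras.Model([t_in, f_in, p_in], out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```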