Journal Articles
2,331 articles found
1. Smelting stage recognition for converter steelmaking based on the convolutional recurrent neural network
Authors: Zhangjie Dai, Ye Sun, Wei Liu, Shufeng Yang, Jingshe Li 《International Journal of Minerals, Metallurgy and Materials》, 2025, No. 9, pp. 2152-2163 (12 pages)
The converter steelmaking process represents a pivotal aspect of steel metallurgical production, with the characteristics of the flame at the furnace mouth serving as an indirect indicator of the internal smelting stage. Effectively identifying and predicting the smelting stage poses a significant challenge within industrial production. Traditional image-based methodologies, which rely on a single static flame image as input, demonstrate low recognition accuracy and inadequately extract the dynamic changes in the smelting stage. To address this issue, the present study introduces an innovative recognition model that preprocesses flame video sequences from the furnace mouth and then employs a convolutional recurrent neural network (CRNN) to extract spatiotemporal features and derive recognition outputs. Additionally, we adopt feature-layer visualization techniques to verify the model's effectiveness and further enhance model performance by integrating the Bayesian optimization algorithm. The results indicate that ResNet18 with a convolutional block attention module (CBAM) in the convolutional layer demonstrates superior image feature extraction capabilities, achieving an accuracy of 90.70% and an area under the curve of 98.05%. The constructed Bayesian optimization-CRNN (BO-CRNN) model exhibits a significant improvement in comprehensive performance, with an accuracy of 97.01% and an area under the curve of 99.85%. Furthermore, statistics on the model's average recognition time, computational complexity, and parameter quantity (average recognition time: 5.49 ms; floating-point operations per second: 18260.21 M (1 M = 1×10^6); parameters: 11.58 M) demonstrate superior performance. Through extensive repeated experiments on real-world datasets, the proposed CRNN model is capable of rapidly and accurately identifying smelting stages, offering a novel approach for converter smelting endpoint control.
Keywords: intelligent steelmaking; flame state recognition; deep learning; convolutional recurrent neural network
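The pipeline this abstract describes — convolutional features computed per flame frame, then a recurrent network aggregating them over time — can be sketched in miniature with NumPy. All sizes here (3×3 kernels, 8 feature maps, a 16-unit GRU, four stages) are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive single-channel 'valid' 2D convolution."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def frame_features(frame, kernels):
    """CNN stand-in: one conv layer + ReLU + global average pooling per kernel."""
    return np.array([np.maximum(conv2d_valid(frame, k), 0).mean() for k in kernels])

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One gated-recurrent-unit step over a per-frame feature vector."""
    z = 1.0 / (1.0 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1.0 / (1.0 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    cand = np.tanh(Wh @ x + Uh @ (r * h))          # candidate state
    return (1.0 - z) * h + z * cand

# Toy sizes: 8 kernels -> 8-dim frame features, 16-dim GRU state, 4 smelting stages.
D, H, C = 8, 16, 4
kernels = rng.normal(size=(D, 3, 3))
Wz, Wr, Wh = rng.normal(scale=0.1, size=(3, H, D))
Uz, Ur, Uh = rng.normal(scale=0.1, size=(3, H, H))
W_out = rng.normal(scale=0.1, size=(C, H))

video = rng.normal(size=(5, 32, 32))   # 5 frames of a 32x32 flame clip
h = np.zeros(H)
for frame in video:                    # CNN per frame, GRU across frames
    h = gru_step(frame_features(frame, kernels), h, Wz, Uz, Wr, Ur, Wh, Uh)

logits = W_out @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax over the candidate stages
print(probs.shape)
```

With trained weights, the argmax of `probs` would be the predicted smelting stage for the clip.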
2. Expression Recognition Method Based on Convolutional Neural Network and Capsule Neural Network (Cited: 1)
Authors: Zhanfeng Wang, Lisha Yao 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 4, pp. 1659-1677 (19 pages)
Convolutional neural networks struggle to accurately handle changes in angles and twists in the direction of images, which affects their ability to recognize patterns based on internal feature levels. In contrast, CapsNet overcomes these limitations by vectorizing information through increased directionality and magnitude, ensuring that spatial information is not overlooked. Therefore, this study proposes a novel expression recognition technique called CAPSULE-VGG, which combines the strengths of CapsNet and convolutional neural networks. By refining and integrating features extracted by a convolutional neural network before introducing them into CapsNet, our model enhances facial recognition capabilities. Compared to traditional neural network models, our approach offers a faster training pace, improved convergence speed, and higher accuracy rates approaching stability. Experimental results demonstrate that our method achieves recognition rates of 74.14% for the FER2013 expression dataset and 99.85% for the CK+ expression dataset. By contrasting these findings with those obtained using conventional expression recognition techniques and incorporating CapsNet's advantages, we effectively address issues associated with convolutional neural networks while increasing expression identification accuracy.
Keywords: expression recognition; capsule neural network; convolutional neural network
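The "vectorizing information through increased directionality and magnitude" that the abstract attributes to CapsNet is realized by the standard capsule squashing nonlinearity, which preserves a vector's direction while compressing its length into [0, 1) so length can act as an existence probability. This is generic CapsNet math, not code from the CAPSULE-VGG paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """CapsNet squashing nonlinearity: v = (|s|^2 / (1 + |s|^2)) * s / |s|."""
    sq_norm = np.sum(s * s, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

caps = np.array([[3.0, 4.0],    # a long capsule output (norm 5)
                 [0.1, 0.0]])   # a short capsule output (norm 0.1)
v = squash(caps)
lengths = np.linalg.norm(v, axis=-1)
print(lengths)  # long input -> length near 1, short input -> near 0
```

The direction of each capsule is unchanged; only the magnitude is rescaled, which is what lets downstream routing treat length as confidence.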
3. Audiovisual speech recognition based on a deep convolutional neural network (Cited: 1)
Authors: Shashidhar Rudregowda, Sudarshan Patilkulkarni, Vinayakumar Ravi, Gururaj H.L., Moez Krichen 《Data Science and Management》, 2024, No. 1, pp. 25-34 (10 pages)
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English linguistics and categorized it into three main categories: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using the mel-frequency cepstral coefficient, and classification was performed using a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was classified using a long short-term memory (LSTM) recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with training and testing accuracies of 93.67% and 91.53%, respectively, at 200 epochs. The training accuracy for visual speech recognition using the Indian English dataset was 77.48% and the test accuracy was 76.19% using 60 epochs. After integration, the accuracies of audiovisual speech recognition using the Indian English dataset for training and testing were 94.67% and 91.75%, respectively.
Keywords: audiovisual speech recognition; custom dataset; 1D convolutional neural network (CNN); deep CNN (DCNN); long short-term memory (LSTM); lipreading; Dlib; mel-frequency cepstral coefficient (MFCC)
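The audio branch described here (MFCC features fed to a 1D CNN) starts from the mel-frequency cepstral coefficient computation: power spectrum, triangular mel filterbank, log, then a DCT. A compact single-frame version follows, with commonly used but assumed parameters (16 kHz sampling, 26 mel filters, 13 coefficients) rather than the paper's exact settings:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(signal, sr=16000, n_fft=512, n_mels=26, n_ceps=13):
    """Single-frame MFCC sketch: windowed power spectrum -> mel filterbank -> log -> DCT."""
    frame = signal[:n_fft] * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    log_e = np.log(fbank @ power + 1e-10)
    # Type-II DCT decorrelates the log filterbank energies into cepstral coefficients.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_e

t = np.linspace(0.0, 0.032, 512, endpoint=False)   # one 32 ms frame at 16 kHz
coeffs = mfcc_frame(np.sin(2 * np.pi * 440.0 * t))
print(coeffs.shape)
```

A real front end would slide this over overlapping frames and stack the per-frame vectors into the 1D CNN's input; libraries such as librosa package the same steps.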
4. Identity-aware convolutional neural networks for facial expression recognition (Cited: 14)
Authors: Chongsheng Zhang, Pengyou Wang, Ke Chen, Joni-Kristian Kamarainen 《Journal of Systems Engineering and Electronics》 (SCIE, EI, CSCD), 2017, No. 4, pp. 784-792 (9 pages)
Facial expression recognition is a hot topic in computer vision, but it remains challenging due to the feature inconsistency caused by person-specific characteristics of facial expressions. To address this challenge, and inspired by the recent success of the deep identity network (DeepID-Net) for face identification, this paper proposes a novel deep learning based framework for recognizing human expressions from facial images. Compared to existing deep learning methods, our proposed framework, which is based on multi-scale global images and local facial patches, achieves significantly better performance on facial expression recognition. Finally, we verify the effectiveness of the proposed framework through experiments on the public benchmark datasets JAFFE and extended Cohn-Kanade (CK+).
Keywords: facial expression recognition; deep learning; classification; identity-aware
5. Facial Expression Recognition Using Enhanced Convolution Neural Network with Attention Mechanism (Cited: 5)
Authors: K. Prabhu, S. SathishKumar, M. Sivachitra, S. Dineshkumar, P. Sathiyabama 《Computer Systems Science & Engineering》 (SCIE, EI), 2022, No. 4, pp. 415-426 (12 pages)
Facial Expression Recognition (FER) has been an interesting area of research in settings involving human-computer interaction. Human psychology, emotions, and behaviors can be analyzed through FER. Classifiers used in FER have performed well on normal faces but have been found to be constrained on occluded faces. Recently, deep learning techniques (DLT) have gained popularity in applications to real-world problems, including the recognition of human emotions. The human face reflects emotional states and human intentions, and an expression is the most natural and powerful way of communicating non-verbally. Systems that form communications between the two are termed human-machine interaction (HMI) systems. FER can improve HMI systems, as human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolution Neural Network with Attention Mechanism) to recognize seven types of human emotions, with satisfactory experimental results. The proposed EECNN achieved 89.8% accuracy in classifying the images.
Keywords: facial expression recognition; linear discriminant analysis; animal migration optimization; regions of interest; enhanced convolution neural network with attention mechanism
6. The deep spatiotemporal network with dual-flow fusion for video-oriented facial expression recognition
Authors: Chenquan Gan, Jinhui Yao, Shuaiying Ma, Zufan Zhang, Lianxiang Zhu 《Digital Communications and Networks》 (SCIE, CSCD), 2023, No. 6, pp. 1441-1447 (7 pages)
Video-oriented facial expression recognition has always been an important issue in emotion perception. At present, the key challenge in most existing methods is how to effectively extract robust features to characterize facial appearance and geometry changes caused by facial motions. On this basis, the video in this paper is divided into multiple segments, each of which is simultaneously described by optical flow and facial landmark trajectories. To delve deeply into the emotional information of these two representations, we propose a Deep Spatiotemporal Network with Dual-flow Fusion (DSN-DF), which highlights the region and strength of expressions through spatiotemporal appearance features and the speed of change through spatiotemporal geometry features. Finally, experiments are conducted on the CK+ and MMI datasets to demonstrate the superiority of the proposed method.
Keywords: facial expression recognition; deep spatiotemporal network; optical flow; facial landmark trajectory; dual-flow fusion
7. HMM-Based Photo-Realistic Talking Face Synthesis Using Facial Expression Parameter Mapping with Deep Neural Networks
Authors: Kazuki Sato, Takashi Nose, Akinori Ito 《Journal of Computer and Communications》, 2017, No. 10, pp. 50-65 (16 pages)
This paper proposes a technique for synthesizing pixel-based photo-realistic talking face animation using two-step synthesis with HMMs and DNNs. We introduce facial expression parameters as an intermediate representation that corresponds well with both the input contexts and the output pixel data of face images. The sequences of facial expression parameters are modeled using context-dependent HMMs with static and dynamic features. The mapping from the expression parameters to the target pixel images is trained using DNNs. We examine the required amount of training data for the HMMs and DNNs and compare the performance of the proposed technique with the conventional PCA-based technique through objective and subjective evaluation experiments.
Keywords: visual speech synthesis; talking head; hidden Markov models (HMMs); deep neural networks (DNNs); facial expression parameter
8. Hybrid Convolutional Neural Network and Long Short-Term Memory Approach for Facial Expression Recognition
Authors: M.N. Kavitha, A. RajivKannan 《Intelligent Automation & Soft Computing》 (SCIE), 2023, No. 1, pp. 689-704 (16 pages)
Facial Expression Recognition (FER) has been an important field of research for several decades. Extraction of emotional characteristics is crucial to FER but is complex to process, as the characteristics have significant intra-class variances. Facial characteristics have not been completely explored in static pictures. Previous studies used convolutional neural networks (CNNs) based on transfer learning and hyperparameter optimization for static facial emotion recognition. Particle swarm optimization (PSO) has also been used for tuning hyperparameters. However, these methods achieve only about 92 percent accuracy. The existing algorithms have issues with FER accuracy and precision; hence, overall FER performance is degraded significantly. To address this issue, this work proposes a combination of CNNs and long short-term memories (LSTMs), called the HCNN-LSTM (hybrid CNN and LSTM) approach, for FER. The work is evaluated on the benchmark dataset Facial Expression Recog Image Ver (FERC). The Viola-Jones (VJ) algorithm recognizes faces in preprocessed images, followed by HCNN-LSTM feature extraction and FER classification. Further, the success rate of deep learning techniques (DLTs) has increased with the tuning of hyperparameters such as epochs, batch sizes, initial learning rates, regularization parameters, shuffling types, and momentum. The proposed work uses improved weight-based whale optimization algorithms (IWWOAs) to select near-optimal settings for these parameters using best fitness values. The experimental findings demonstrate that the proposed HCNN-LSTM system outperforms existing methods.
Keywords: facial expression recognition; Gaussian filter; hyperparameter optimization; improved weight-based whale optimization algorithm; deep learning (DL)
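The hyperparameters listed (epochs, batch sizes, initial learning rates, momentum, and so on) define a search space over which an optimizer maximizes a fitness value. The paper's IWWOA is not reproduced here; as a hedged stand-in, a minimal random search over a hypothetical version of that space illustrates the tuning loop:

```python
import random

random.seed(7)

# Hypothetical search space mirroring the hyperparameters named in the abstract.
space = {
    "epochs": [10, 20, 50],
    "batch_size": [16, 32, 64],
    "initial_lr": [1e-2, 1e-3, 1e-4],
    "momentum": [0.8, 0.9, 0.99],
}

def fitness(cfg):
    """Stand-in objective; a real system would train the model with cfg
    and return its validation accuracy."""
    return -abs(cfg["initial_lr"] - 1e-3) - abs(cfg["momentum"] - 0.9)

# Sample 50 random configurations and keep the fittest one.
best = max(
    ({k: random.choice(v) for k, v in space.items()} for _ in range(50)),
    key=fitness,
)
print(best)
```

An IWWOA-style optimizer replaces the uniform sampling with guided position updates, but the interface — propose a configuration, score it by fitness, keep the best — is the same.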
9. Customized Convolutional Neural Network for Accurate Detection of Deep Fake Images in Video Collections (Cited: 1)
Authors: Dmitry Gura, Bo Dong, Duaa Mehiar, Nidal Al Said 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 5, pp. 1995-2014 (20 pages)
The motivation for this study is that the quality of deep fakes is constantly improving, which leads to the need to develop new methods for their detection. The proposed customized convolutional neural network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The customized CNN is a data-augmentation-based model used to generate 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. In all, 318 videos were used, 199 of which were fake and 119 real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new convolutional neural network (CNN) learning model that can accurately detect deep-fake face photos.
Keywords: deep fake detection; video analysis; convolutional neural network; machine learning; video dataset collection; facial landmark prediction; accuracy models
10. Augmented Deep-Feature-Based Ear Recognition Using Increased Discriminatory Soft Biometrics
Authors: Emad Sami Jaha 《Computer Modeling in Engineering & Sciences》, 2025, No. 9, pp. 3645-3678 (34 pages)
The human ear has been substantiated as a viable non-intrusive biometric modality for identification or verification. Among the many feasible techniques for ear biometric recognition, convolutional neural network (CNN) models have recently offered high-performance and reliable systems. However, their performance can still be further improved using the capabilities of soft biometrics, a research question yet to be investigated. This research aims to augment traditional CNN-based ear recognition performance by adding increased discriminatory ear soft-biometric traits. It proposes a novel framework for augmented ear identification/verification using a group of discriminative categorical soft biometrics and deriving new, more perceptive, comparative soft biometrics for feature-level fusion with hard-biometric deep features. It conducts several identification and verification experiments for performance evaluation, analysis, and comparison, while varying the ear image datasets, hard-biometric deep-feature extractors, soft-biometric augmentation methods, and classifiers used. The experimental work yields promising results, reaching up to 99.94% accuracy and up to 14% improvement using the AMI and AMIC datasets, along with their corresponding soft-biometric label data. The results confirm the proposed augmented approaches' superiority over their standard counterparts and emphasize the robustness of the new ear comparative soft biometrics over their categorical peers.
Keywords: ear recognition; soft biometrics; human identification; human verification; comparative labeling; ranking SVM; deep features; feature-level fusion; convolutional neural networks (CNNs); deep learning
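Feature-level fusion of deep features with soft-biometric labels, as the abstract describes, can be as simple as per-modality normalization followed by concatenation, so that neither modality dominates the classifier's input. This is a sketch under that assumption; the paper's exact fusion scheme may differ:

```python
import numpy as np

def fuse_features(deep_feat, soft_feat, eps=1e-12):
    """Feature-level fusion sketch: L2-normalize each modality separately,
    then concatenate into one vector for the downstream classifier."""
    d = deep_feat / (np.linalg.norm(deep_feat) + eps)
    s = soft_feat / (np.linalg.norm(soft_feat) + eps)
    return np.concatenate([d, s])

deep = np.random.default_rng(1).normal(size=512)  # e.g. a CNN ear embedding
soft = np.array([1.0, 0.0, 3.0, 2.0])             # e.g. encoded soft-biometric labels
fused = fuse_features(deep, soft)
print(fused.shape)
```

The fused vector would then be fed to the classifier (the paper mentions a ranking SVM among its options).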
11. DFNet: A Differential Feature-Incorporated Residual Network for Image Recognition
Authors: Pengxing Cai, Yu Zhang, Houtian He, Zhenyu Lei, Shangce Gao 《Journal of Bionic Engineering》, 2025, No. 2, pp. 931-944 (14 pages)
The residual neural network (ResNet) is a powerful neural network architecture that has proven excellent at extracting spatial and channel-wise information from images. ResNet employs a residual learning strategy that maps inputs directly to outputs, making it less difficult to optimize. In this paper, we incorporate differential information into the original residual block to improve the representative ability of the ResNet, allowing the modified network to capture more complex and metaphysical features. The proposed DFNet preserves the features after each convolutional operation in the residual block and combines the feature maps at different levels of abstraction through the differential information. To verify the effectiveness of DFNet on image recognition, we select six distinct classification datasets. The experimental results show that our proposed DFNet has better performance and generalization ability than other state-of-the-art variants of ResNet in terms of classification accuracy and other statistical analyses.
Keywords: deep learning; residual neural network; pattern recognition; residual block; differential feature
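One way to read "incorporating differential information into the residual block" is to add the difference between successive intermediate activations alongside the identity shortcut. The sketch below is that interpretation only, not the paper's architecture, with linear maps standing in for convolutions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block_with_diff(x, w1, w2):
    """Residual-block sketch that keeps both intermediate activations and
    feeds their element-wise difference (the 'differential feature') back
    in along with the identity shortcut."""
    f1 = relu(w1 @ x)            # first conv stand-in
    f2 = relu(w2 @ f1)           # second conv stand-in
    diff = f2 - f1               # differential information between levels
    return relu(x + f2 + diff)   # shortcut + features + differential term

rng = np.random.default_rng(3)
x = rng.normal(size=16)
w1, w2 = rng.normal(scale=0.1, size=(2, 16, 16))
y = residual_block_with_diff(x, w1, w2)
print(y.shape)
```

A plain ResNet block would return `relu(x + f2)`; the extra `diff` term is what distinguishes this reading of DFNet's idea.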
12. Enhancing Human Action Recognition with Adaptive Hybrid Deep Attentive Networks and Archerfish Optimization
Authors: Ahmad Yahiya Ahmad Bani Ahmad, Jafar Alzubi, Sophers James, Vincent Omollo Nyangaresi, Chanthirasekaran Kutralakani, Anguraju Krishnan 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 9, pp. 4791-4812 (22 pages)
In recent years, wearable-device-based Human Activity Recognition (HAR) models have received significant attention. Previously developed HAR models use hand-crafted features to recognize human activities, leading to the extraction of only basic features. The images captured by wearable sensors contain advanced features, allowing them to be analyzed by deep learning algorithms to enhance the detection and recognition of human actions. Poor lighting and limited sensor capabilities can impact data quality, making the recognition of human actions a challenging task. Unimodal HAR approaches are not suitable in a real-time environment. Therefore, an updated HAR model is developed using multiple types of data and an advanced deep learning approach. First, the required signals and sensor data are accumulated from standard databases. From these signals, wave features are retrieved. The extracted wave features and sensor data are then given as input for recognizing the human activity. An Adaptive Hybrid Deep Attentive Network (AHDAN) is developed by incorporating a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) for the human activity recognition process. Additionally, the Enhanced Archerfish Hunting Optimizer (EAHO) is suggested to fine-tune the network parameters to enhance the recognition process. An experimental evaluation is performed against various deep learning networks and heuristic algorithms to confirm the effectiveness of the proposed HAR model. The EAHO-based HAR model outperforms traditional deep learning networks with an accuracy of 95.36, a recall of 95.25, a specificity of 95.48, and a precision of 95.47. The results proved that the developed model is effective in recognizing human actions while taking less time. Additionally, it reduces computational complexity and the overfitting issue through the use of an optimization approach.
Keywords: human action recognition; multi-modal sensor data and signals; adaptive hybrid deep attentive network; enhanced archerfish hunting optimizer; 1D convolutional neural network; gated recurrent units
13. WiFi CSI Gesture Recognition Based on Parallel LSTM-FCN Deep Space-Time Neural Network (Cited: 5)
Authors: Zhiling Tang, Qianqian Liu, Minjie Wu, Wenjing Chen, Jingwen Huang 《China Communications》 (SCIE, CSCD), 2021, No. 3, pp. 205-215 (11 pages)
In this study, we developed a system based on deep space-time neural networks for gesture recognition. When users change or the number of gesture categories increases, the accuracy of gesture recognition decreases considerably, because most gesture recognition systems cannot accommodate both user differentiation and gesture diversity. To overcome the limitations of existing methods, we designed a one-dimensional parallel long short-term memory-fully convolutional network (LSTM-FCN) model to extract gesture features of different dimensions. LSTM can learn complex time-dynamic information, whereas FCN can predict gestures efficiently by extracting the deep, abstract features of gestures in the spatial dimension. In the experiment, 50 types of gestures from five users were collected and evaluated. The experimental results demonstrate the effectiveness of this system and its robustness to various gestures and individual changes. Statistical analysis of the recognition results indicated that an average accuracy of approximately 98.9% was achieved.
Keywords: signal and information processing; parallel LSTM-FCN neural network; deep learning; gesture recognition; wireless channel state information
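The parallel LSTM-FCN design — a temporal branch and a convolutional branch whose outputs are concatenated for classification — can be sketched as follows. A plain tanh RNN stands in for the LSTM, and all dimensions (30 CSI subcarriers, 32 hidden units, 8 kernels) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def rnn_branch(seq, Wx, Wh):
    """Temporal branch (LSTM stand-in): consume the sequence step by step
    and return the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def fcn_branch(seq, kernels):
    """FCN branch: 1D convolutions over time + ReLU + global average pooling."""
    sig = seq.mean(axis=1)  # collapse CSI subcarriers into one time series
    feats = [np.convolve(sig, k, mode="valid") for k in kernels]
    return np.array([np.maximum(f, 0).mean() for f in feats])

seq = rng.normal(size=(100, 30))               # 100 time steps, 30 CSI subcarriers
Wx = rng.normal(scale=0.1, size=(32, 30))
Wh = rng.normal(scale=0.1, size=(32, 32))
kernels = rng.normal(size=(8, 5))

# Run both branches in parallel and concatenate for the gesture classifier.
fused = np.concatenate([rnn_branch(seq, Wx, Wh), fcn_branch(seq, kernels)])
print(fused.shape)   # 32 temporal + 8 spatial features
```

The fused vector would go to a final dense softmax layer over the 50 gesture classes.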
14. Individual Dairy Cattle Recognition Based on Deep Convolutional Neural Network (Cited: 2)
Authors: ZHANG Mandun, SHAN Xinyuan, YU Jinsu, GUO Yingchun, LI Ruiwen, XU Mingquan 《Journal of Donghua University (English Edition)》 (EI, CAS), 2018, No. 2, pp. 107-112 (6 pages)
Image-based individual dairy cattle recognition has gained much attention recently. In order to further improve the accuracy of individual dairy cattle recognition, an algorithm based on a deep convolutional neural network (DCNN) is proposed in this paper, which enables automatic feature extraction and classification that outperforms traditional hand-crafted features. Through multi-group comparison experiments covering different network layers, different convolution kernel sizes, and different feature dimensions in the fully connected layer, we demonstrate that the proposed method is suitable for dairy cattle classification. The experimental results show that the accuracy is significantly higher compared to two traditional image processing algorithms: the scale-invariant feature transform (SIFT) algorithm and the bag-of-features (BOF) model.
Keywords: deep learning; deep convolutional neural network (DCNN); dairy cattle; individual recognition
15. Diffraction deep neural network based orbital angular momentum mode recognition scheme in oceanic turbulence (Cited: 1)
Authors: 詹海潮, 陈兵, 彭怡翔, 王乐, 王文鼐, 赵生妹 《Chinese Physics B》 (SCIE, EI, CAS, CSCD), 2023, No. 4, pp. 364-369 (6 pages)
Orbital angular momentum (OAM) modes are mutually orthogonal and have been applied to underwater wireless optical communication (UWOC) systems to increase channel capacity. In this work, we propose a diffractive deep neural network (DDNN) based OAM mode recognition scheme, in which the DDNN is trained to capture the features of the intensity distribution of the OAM modes and output the corresponding azimuthal and radial indices. The results show that the proposed scheme can recognize the azimuthal and radial indices of the OAM modes accurately and quickly. In addition, the proposed scheme can resist weak oceanic turbulence (OT) and exhibits excellent ability to recognize OAM modes in a strong OT environment. The DDNN-based OAM mode recognition scheme has potential applications in UWOC systems.
Keywords: orbital angular momentum; diffractive deep neural network; mode recognition; oceanic turbulence
16. Individual Minke Whale Recognition Using Deep Learning Convolutional Neural Networks (Cited: 1)
Authors: Dmitry A. Konovalov, Suzanne Hillcoat, Genevieve Williams, R. Alastair Birtles, Naomi Gardiner, Matthew I. Curnock 《Journal of Geoscience and Environment Protection》, 2018, No. 5, pp. 25-36 (12 pages)
The only known predictable aggregation of dwarf minke whales (Balaenoptera acutorostrata subsp.) occurs in the Australian offshore waters of the northern Great Barrier Reef in May-August each year. The identification of individual whales is required for research on the whales' population characteristics and for monitoring the potential impacts of tourism activities, including commercial swims with the whales. At present, it is not cost-effective for researchers to manually process and analyze the tens of thousands of underwater images collated after each observation/tourist season, and a large database of historical non-identified imagery exists. This study reports the first proof of concept for recognizing individual dwarf minke whales using deep learning convolutional neural networks (CNNs). The "off-the-shelf" ImageNet-trained VGG16 CNN was used as the feature encoder of the per-pixel semantic segmentation Automatic Minke Whale Recognizer (AMWR). The most frequently photographed whale in a sample of 76 individual whales (MW1020) was identified in 179 of the 1320 images provided. Training and image augmentation procedures were developed to compensate for the small number of available images. The trained AMWR achieved 93% prediction accuracy on the testing subset of 36 positive/MW1020 and 228 negative/not-MW1020 images, where each negative image contained at least one of the other 75 whales. Furthermore, on the test subset, AMWR achieved 74% precision, 80% recall, and a 4% false-positive rate, making the presented approach comparable to or better than other state-of-the-art individual animal recognition results.
Keywords: dwarf minke whales; photo-identification; population biology; convolutional neural networks; deep learning; image recognition
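The reported precision, recall, and false-positive rate all follow from a single confusion matrix over the 36 positive / 228 negative test images. The definitions, applied here to hypothetical counts (the paper reports only the derived percentages, not the raw counts):

```python
def classification_metrics(tp, fn, fp, tn):
    """Accuracy, precision, recall, and false-positive rate from a
    binary confusion matrix (positive = the target whale MW1020)."""
    return {
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "false_positive_rate": fp / (fp + tn),
    }

# Hypothetical confusion counts for a 36-positive / 228-negative test split.
m = classification_metrics(tp=29, fn=7, fp=10, tn=218)
print({k: round(v, 3) for k, v in m.items()})
```

Note how the heavy class imbalance makes accuracy alone uninformative: a classifier that always answers "negative" would already score 228/264 ≈ 86%, which is why the paper also reports precision, recall, and the false-positive rate.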
17. Human-Computer Interaction Using Deep Fusion Model-Based Facial Expression Recognition System
Authors: Saiyed Umer, Ranjeet Kumar Rout, Shailendra Tiwari, Ahmad Ali AlZubi, Jazem Mutared Alanazi, Kulakov Yurii 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2023, No. 5, pp. 1165-1185 (21 pages)
A deep fusion model is proposed for a facial expression-based human-computer interaction system. Initially, image preprocessing, i.e., extraction of the facial region from the input image, is performed. Thereafter, more discriminative and distinctive deep learning features are extracted from the facial regions. To prevent overfitting, in-depth features of the facial images are extracted and assigned to the proposed convolutional neural network (CNN) models. Various CNN models are then trained. Finally, the outputs of the CNN models are fused to obtain the final decision for the seven basic classes of facial expressions: fear, disgust, anger, surprise, sadness, happiness, and neutral. For experimental purposes, three benchmark datasets, SFEW, CK+, and KDEF, are utilized. The performance of the proposed system is compared with state-of-the-art methods on each dataset. Extensive performance analysis reveals that the proposed system outperforms the competing methods on various performance metrics. Finally, the proposed deep fusion model is utilized to control a music player using the recognized emotions of the users.
Keywords: deep learning; facial expression; emotion recognition; CNN
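Fusing the decisions of several trained CNNs into one of the seven expression classes can be done by averaging the per-model class probabilities and taking the argmax. This is a sketch under that assumed fusion rule; the paper may weight or combine its models differently:

```python
import numpy as np

EMOTIONS = ["fear", "disgust", "anger", "surprise", "sadness", "happiness", "neutral"]

def fuse_decisions(prob_list):
    """Late-fusion sketch: average the class-probability vectors produced
    by several independently trained CNNs, then pick the top class."""
    avg = np.mean(prob_list, axis=0)
    return EMOTIONS[int(np.argmax(avg))], avg

# Hypothetical softmax outputs from three CNN models for one face image.
p1 = np.array([0.05, 0.05, 0.10, 0.10, 0.10, 0.50, 0.10])
p2 = np.array([0.05, 0.05, 0.05, 0.30, 0.05, 0.40, 0.10])
p3 = np.array([0.10, 0.05, 0.05, 0.35, 0.05, 0.30, 0.10])

label, avg = fuse_decisions([p1, p2, p3])
print(label)  # averaging favors the class that is consistently high across models
```

The fused label could then drive the downstream application, such as the music-player control the abstract mentions.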
18. GaitDONet: Gait Recognition Using Deep Features Optimization and Neural Network
作者 Muhammad Attique Khan Awais Khan +6 位作者 Majed Alhaisoni Abdullah Alqahtani Ammar Armghan Sara A.Althubiti Fayadh Alenezi Senghour Mey Yunyoung Nam 《Computers, Materials & Continua》 SCIE EI 2023年第6期5087-5103,共17页
Human gait recognition(HGR)is the process of identifying a sub-ject(human)based on their walking pattern.Each subject is a unique walking pattern and cannot be simulated by other subjects.But,gait recognition is not e... Human gait recognition(HGR)is the process of identifying a sub-ject(human)based on their walking pattern.Each subject is a unique walking pattern and cannot be simulated by other subjects.But,gait recognition is not easy and makes the system difficult if any object is carried by a subject,such as a bag or coat.This article proposes an automated architecture based on deep features optimization for HGR.To our knowledge,it is the first architecture in which features are fused using multiset canonical correlation analysis(MCCA).In the proposed method,original video frames are processed for all 11 selected angles of the CASIA B dataset and utilized to train two fine-tuned deep learning models such as Squeezenet and Efficientnet.Deep transfer learning was used to train both fine-tuned models on selected angles,yielding two new targeted models that were later used for feature engineering.Features are extracted from the deep layer of both fine-tuned models and fused into one vector using MCCA.An improved manta ray foraging optimization algorithm is also proposed to select the best features from the fused feature matrix and classified using a narrow neural network classifier.The experimental process was conducted on all 11 angles of the large multi-view gait dataset(CASIA B)dataset and obtained improved accuracy than the state-of-the-art techniques.Moreover,a detailed confidence interval based analysis also shows the effectiveness of the proposed architecture for HGR. 展开更多
Keywords: human gait recognition; biometric; deep learning; features fusion; optimization; neural network
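The MCCA fusion step described in the abstract can be illustrated with a two-set simplification: classical canonical correlation analysis finds projections that maximally correlate two feature matrices, and the projected features are concatenated into one fused vector. The sketch below is illustrative only, not the authors' implementation; the function name, dimensions, and the `eps` regularizer are assumptions, and true MCCA generalizes this to more than two feature sets.

```python
import numpy as np

def cca_fuse(X, Y, dim=2, eps=1e-6):
    """Fuse two deep-feature matrices (n_samples x d1, n_samples x d2)
    by projecting each onto its top canonical directions and concatenating.
    Two-set simplification of the MCCA fusion the abstract describes."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance estimates
    Sxx = Xc.T @ Xc / (n - 1) + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten via Cholesky factors; SVD of the whitened cross-covariance
    # yields the canonical correlations (singular values) and directions.
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    K = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(K)
    A = np.linalg.solve(Lx.T, U[:, :dim])      # projection for X features
    B = np.linalg.solve(Ly.T, Vt.T[:, :dim])   # projection for Y features
    fused = np.hstack([Xc @ A, Yc @ B])        # fused feature vector per sample
    return fused, s[:dim]
```

In the paper's pipeline this fused matrix would then feed the manta ray foraging feature selector; here it simply shows how two deep-feature streams collapse into one correlated representation.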
An improved micro-expression recognition algorithm of 3D convolutional neural network
19
Authors: WU Jin, SHI Qianwen, XI Meng, WANG Lei, ZENG Huadie 《High Technology Letters》 EI CAS 2022, No. 1, pp. 63-71 (9 pages)
Micro-expressions last a very short time and their intensity is very subtle. Aiming at the problem of their low recognition rate, this paper proposes a new micro-expression recognition algorithm based on a three-dimensional convolutional neural network (3D-CNN), which can simultaneously extract two-dimensional features in the spatial domain and one-dimensional features in the time domain. The network structure is designed on the Keras deep learning framework, and dropout and the batch normalization (BN) algorithm are effectively combined with the three-dimensional visual geometry group block (3D-VGG-Block) to reduce the risk of overfitting while improving training speed. To address the lack of samples in the dataset, two data augmentation methods, image flipping and small-amplitude flipping, are used. Finally, the recognition rate on the dataset reaches 69.11%. Compared with the current international average micro-expression recognition rate of about 67%, the proposed algorithm has a clear advantage in recognition rate.
Keywords: micro-expression recognition; deep learning; three-dimensional convolutional neural network (3D-CNN); batch normalization (BN) algorithm; dropout
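The two regularizers this abstract combines, batch normalization and dropout, can be sketched in a few lines of NumPy. This is a minimal forward-pass illustration of the operations themselves, assuming a 2-D batch of activations; it is not the paper's Keras network, and the function signatures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization: normalize each feature to zero mean and unit
    variance across the batch, then apply a learnable rescale/shift.
    Stabilizes and speeds up training, as the abstract notes."""
    mu = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

def dropout(x, rate=0.5, training=True):
    """Inverted dropout: randomly zero activations during training and
    rescale survivors by 1/(1-rate) so the expected activation is
    unchanged; at inference the layer is the identity."""
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

In a 3D-CNN such as the paper's, the same operations simply run over 5-D tensors (batch, depth, height, width, channels) instead of this 2-D batch.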
Brain functional changes in facial expression recognition in patients with major depressive disorder before and after antidepressant treatment: a functional magnetic resonance imaging study (Cited by 3)
20
Authors: Wenyan Jiang, Zhongmin Yint, Yixin Pang, Feng Wu, Lingtao Kong, Ke Xu 《Neural Regeneration Research》 SCIE CAS CSCD 2012, No. 15, pp. 1151-1157 (7 pages)
Functional magnetic resonance imaging was used during emotion recognition to identify changes in functional brain activation in 21 first-episode, treatment-naive major depressive disorder patients before and after antidepressant treatment. Following escitalopram oxalate treatment, patients exhibited decreased activation in the bilateral precentral gyrus, bilateral middle frontal gyrus, left middle temporal gyrus, bilateral postcentral gyrus, left cingulate, and right parahippocampal gyrus, and increased activation in the right superior frontal gyrus, bilateral superior parietal lobule, and left occipital gyrus during sad facial expression recognition. After antidepressant treatment, patients also exhibited decreased activation in the bilateral middle frontal gyrus, bilateral cingulate, and right parahippocampal gyrus, and increased activation in the right inferior frontal gyrus, left fusiform gyrus, and right precuneus during happy facial expression recognition. Our experimental findings indicate that the limbic-cortical network might be a key target region for antidepressant treatment in major depressive disorder.
Keywords: major depressive disorder; functional magnetic resonance imaging; facial expression recognition; antidepressant; neural regeneration