Funding: This work is financially supported by the Natural Science Foundation of China under Grant 52107229, the Fundamental Research Funds for the Sichuan Science and Technology Program under Grant 2021YJ0063, the China Postdoctoral Science Foundation under Grant 2020M673218, the Hunan High-tech Industry Science and Technology Innovation Plan under Grant 2020GK2081, and the Fund of Robot Technology Used for Special Environment Key Laboratory of Sichuan Province under Grant 20KFKT02.
Abstract: In the long-term prediction of battery degradation, data-driven methods have great potential given the historical data recorded by the battery management system. This paper proposes an enhanced data-driven model for lithium-ion (Li-ion) battery state of health (SOH) estimation with a superior modeling procedure and optimized features. The Gaussian process regression (GPR) method is adopted to establish the data-driven estimator, which enables Li-ion battery SOH estimation with an uncertainty level. A novel kernel function, incorporating prior knowledge of Li-ion battery degradation, is then introduced to improve the modeling capability of the GPR. As for the features, a two-stage processing structure is proposed to find a suitable partial charging voltage profile with high efficiency. In the first stage, an optimal partial charging voltage is selected by grid search; in the second stage, principal component analysis is conducted to increase both estimation accuracy and computing efficiency. Advantages of the proposed method are validated on two datasets from different Li-ion batteries: compared with other methods, the proposed method can achieve the same accuracy level on the Oxford dataset, while on the Maryland dataset the mean absolute error, the root-mean-squared error, and the maximum error are improved by at least 16.36%, 32.43%, and 45.46%, respectively.
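The GPR estimator described above can be prototyped with off-the-shelf tools. The following is a minimal sketch, not the paper's exact kernel or data: a composite kernel (RBF plus a linear trend and a noise term) stands in for the degradation-informed kernel, PCA compresses placeholder partial-charging features, and predict(..., return_std=True) returns the SOH estimate together with its uncertainty level. All array contents are synthetic placeholders.

```python
# Sketch of GPR-based SOH estimation with uncertainty, using scikit-learn.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, DotProduct, WhiteKernel

rng = np.random.default_rng(0)
X_raw = rng.random((100, 20))                                        # placeholder partial-charging features
y = 1.0 - 0.002 * np.arange(100) + 0.01 * rng.standard_normal(100)   # synthetic SOH trajectory

# Second-stage feature processing: PCA on the selected voltage window.
X = PCA(n_components=5).fit_transform(X_raw)

# Composite kernel: smooth RBF term + linear (DotProduct) trend + noise term,
# one possible way to encode slow, drifting degradation as prior knowledge.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + DotProduct() + WhiteKernel(1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X[:80], y[:80])

mean, std = gpr.predict(X[80:], return_std=True)   # SOH estimate with its uncertainty level
print(mean[:3], std[:3])
```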
Abstract: In 2020, COVID-19 started spreading throughout the world. This deadly infection was identified as a virus that may affect the lungs and, in severe cases, could be the cause of death. The polymerase chain reaction (PCR) test is commonly used to detect this virus through the nasal passage or throat. However, the PCR test exposes health workers to this deadly virus. To limit human exposure while detecting COVID-19, image processing techniques using deep learning have been successfully applied. In this paper, a strategy based on deep learning is employed to classify the COVID-19 virus. To extract features, two deep learning models have been used, DenseNet201 and SqueezeNet. Transfer learning is used in feature extraction, and the models are fine-tuned. A publicly available computerized tomography (CT) scan dataset has been used in this study. The extracted features from the deep learning models are optimized using the Ant Colony Optimization algorithm. The proposed technique is validated through multiple evaluation parameters. Several classifiers have been employed to classify the optimized features. The cubic support vector machine (Cubic SVM) classifier shows superiority over other commonly used classifiers and attained an accuracy of 98.72%. The proposed technique achieves state-of-the-art accuracy, a sensitivity of 98.80%, and a specificity of 96.64%.
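As a rough illustration of the deep-feature-extraction step, the sketch below uses a pretrained DenseNet201 from torchvision (weight enums require torchvision 0.13 or later) as a frozen backbone whose classifier is replaced by an identity layer, yielding the pooled 1920-dimensional feature vector per image. The ACO selection and classifier stages are omitted, and the image list is an assumed input, not part of the paper's pipeline code.

```python
# Sketch of frozen-backbone feature extraction with a pretrained DenseNet201.
import torch
import torchvision.models as models
from torchvision import transforms

backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()   # keep the 1920-d globally pooled features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    # images: a list of PIL images (assumed input); returns an (N, 1920) array.
    batch = torch.stack([preprocess(im) for im in images])
    return backbone(batch).numpy()
```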
Funding: King Saud University, Grant/Award Number: RSP2024R157.
Abstract: Biometric characteristics have played a vital role in security over the last few years. Human gait classification in video sequences is an important biometric attribute and is used for security purposes. A new framework for human gait classification in video sequences using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO) is proposed. In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on their top-5 accuracy and smaller number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset has been employed for the experimental process. On this dataset, the authors selected six angles, 0°, 18°, 90°, 108°, 162°, and 180°, and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate a comparable improvement in accuracy and a significant reduction in computational time relative to recent state-of-the-art techniques.
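Many of the metaheuristic selectors named in these abstracts (MFO, WOA, PSO, ACO, firefly) are wrapper methods built around the same core: score a binary feature mask by classifier accuracy plus a penalty on the number of kept features. The sketch below shows only that core; random search stands in for the actual optimizer, and the 0.99/0.01 weighting and the KNN scorer are assumptions, not values taken from any of these papers.

```python
# Illustrative wrapper-style feature selection with a masked fitness function.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask, X, y, alpha=0.99):
    # Reward cross-validated accuracy, lightly penalize the number of kept features.
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / mask.size)

def select_features(X, y, iters=50, seed=0):
    # Random search stands in for MFO/WOA/PSO; a metaheuristic would update candidates here.
    rng = np.random.default_rng(seed)
    best_mask, best_fit = None, -np.inf
    for _ in range(iters):
        mask = rng.random(X.shape[1]) > 0.5
        f = fitness(mask, X, y)
        if f > best_fit:
            best_mask, best_fit = mask, f
    return best_mask
```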
Funding: Supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: In the area of medical image processing, stomach cancer is one of the most important cancers, and it needs to be diagnosed at an early stage. In this paper, an optimized deep learning method is presented for multiple stomach disease classification. The proposed method works in a few important steps: preprocessing using the fusion of filtered images along with Ant Colony Optimization (ACO), deep transfer learning-based feature extraction, optimization of the deep extracted features using nature-inspired algorithms, and finally fusion of the optimal vectors and classification using a Multi-Layered Perceptron Neural Network (MLNN). In the feature extraction step, a pretrained Inception V3 is utilized and retrained on the selected stomach infection classes through deep transfer learning. Later on, the activation function is applied to the Global Average Pool (GAP) layer for feature extraction. The extracted features are optimized through two different nature-inspired algorithms: Particle Swarm Optimization (PSO) with a dynamic fitness function and the Crow Search Algorithm (CSA). The outputs of both methods are then fused by a maximal value approach, and the fused feature vector is classified by the MLNN. Two datasets, CUI Wah Stomach Diseases and a combined dataset, are used to evaluate the proposed method, achieving an average accuracy of 99.5%. The comparison with existing techniques shows that the proposed method delivers significant performance.
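The final fusion-and-classification step described above, combining the PSO- and CSA-selected vectors by a maximal value rule and classifying with a multi-layer perceptron, might look roughly like the sketch below; the array names and the MLP hyperparameters are assumptions.

```python
# Maximal-value fusion of two optimized feature sets, then MLP classification.
import numpy as np
from sklearn.neural_network import MLPClassifier

def maximal_fusion(feats_pso, feats_csa):
    # Element-wise maximum of two same-shaped feature matrices (samples x features).
    return np.maximum(feats_pso, feats_csa)

# fused = maximal_fusion(feats_pso, feats_csa)
# clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(fused, labels)
```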
Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), a grant from the National Research Foundation of Korea (NRF-2020R1I1A1A01074256), and the Soonchunhyang University Research Fund.
Abstract: Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential organ in humans. Brain tumors cause loss of memory, vision, and naming ability. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification. However, owing to many factors, this is still a challenging task. These challenges relate to tumor size, shape, and location, as well as the selection of important features, among others. In this study, we proposed a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized in the form of high-grade glioma (HGG) and low-grade glioma (LGG) patients, and then two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models were modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for best feature selection from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experimental process was conducted on the BraTs2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. The comparison is performed using several classification methods, and it shows the significance of our proposed technique.
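For the last two steps named above, serial fusion (per-sample concatenation) of the features selected from the two backbones followed by a cubic SVM, which here is read as a degree-3 polynomial-kernel SVM, a minimal hedged sketch could be:

```python
# Serial (concatenation-based) fusion of two selected feature sets + cubic SVM.
import numpy as np
from sklearn.svm import SVC

def serial_fuse(feats_resnet, feats_densenet):
    # Serial fusion appends one selected feature vector to the other for every sample.
    return np.concatenate([feats_resnet, feats_densenet], axis=1)

# fused = serial_fuse(feats_resnet, feats_densenet)   # assumed feature arrays
# clf = SVC(kernel="poly", degree=3).fit(fused, labels)
```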
Funding: Supported by the “Human Resources Program in Energy Technology” of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: Identifying fruit disease manually is time-consuming, expert-dependent, and expensive; thus, a computer-based automated system is widely required. Fruit diseases affect not only the quality but also the quantity. As a result, it is possible to detect the disease early on and cure the fruits using computer-based techniques. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we proposed an automated framework for detecting apple fruit leaf diseases using a CNN and a hybrid optimization algorithm. Data augmentation is performed initially to balance the selected apple dataset. After that, two pre-trained deep models are fine-tuned and trained using transfer learning. Then, a fusion technique named Parallel Correlation Threshold (PCT) is proposed. The fused feature vector is optimized in the next step using a hybrid optimization algorithm. The selected features are finally classified using machine learning algorithms. Four different experiments have been carried out on the augmented Plant Village dataset and yielded a best accuracy of 99.8%. The accuracy of the proposed framework is also compared to that of several neural nets, and it outperforms them all.
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) program (IITP-2021-2020-0-01832), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation), and by the Soonchunhyang University Research Fund.
Abstract: Malaria is a critical health condition that affects both hot and cold regions worldwide, giving rise to millions of cases of disease and thousands of deaths over the years. Malaria is caused by parasites that enter the human red blood cells, grow there, and damage them over time. Therefore, it is diagnosed by a detailed examination of blood cells under the microscope. This is the most extensively used malaria diagnosis technique, but it yields limited and unreliable results due to the manual human involvement. In this work, an automated malaria blood smear classification model is proposed, which takes images of both infected and healthy cells and preprocesses them in the L*a*b* color space by employing several contrast enhancement methods. Feature extraction is performed using two pretrained deep convolutional neural networks, DarkNet-53 and DenseNet-201. The features are subsequently concatenated and optimized through a nature-inspired feature reduction method, the whale optimization algorithm. Several classifiers are applied to the reduced features, and the achieved results excel in both accuracy and time compared to previously proposed methods.
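The preprocessing idea, working in the L*a*b* color space with contrast enhancement, can be sketched as below. Adaptive histogram equalization of the lightness channel is only one of the several enhancement methods the paper combines, so treat this as an illustrative stand-in rather than the paper's exact pipeline.

```python
# Sketch: enhance contrast on the L channel of an L*a*b* image with scikit-image.
from skimage import color, exposure
from skimage.util import img_as_float

def enhance_lab(rgb_image):
    lab = color.rgb2lab(img_as_float(rgb_image))
    L = lab[..., 0] / 100.0                          # scale L channel to [0, 1] for CLAHE
    lab[..., 0] = exposure.equalize_adapthist(L) * 100.0
    return color.lab2rgb(lab)                        # back to RGB for the CNN feature extractors
```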
Abstract: Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. Over the past couple of years, advancements in the domain of machine learning, such as deep learning, have shown substantial success. However, they still face some challenges, such as similarity in disease symptoms and irrelevant feature extraction. In this article, we proposed a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, a pre-trained DarkNet19 deep model is selected and fine-tuned, and later utilized for training through transfer learning. In the next step, deep features are extracted from the global pooling layer and refined using an improved Cuckoo search algorithm. The best selected features are finally classified using machine learning classifiers such as SVM, among others, for the final classification results. The proposed architecture is tested using publicly available datasets, the Cucumber National Dataset and Plant Village, and achieved accuracies of 100.0%, 92.9%, and 99.2%, respectively. A comparison with recent techniques is also performed, revealing that the proposed method achieves improved accuracy while consuming less computational time.
Funding: Supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: Manual diagnosis of brain tumors using magnetic resonance images (MRI) is a hectic and time-consuming process. It also always requires an expert for the diagnosis. Therefore, many computer-controlled methods for diagnosing and classifying brain tumors have been introduced in the literature. This paper proposes a novel multimodal brain tumor classification framework based on two-way deep learning feature extraction and a hybrid feature optimization algorithm. NasNet-Mobile, a pre-trained deep learning model, has been fine-tuned and two-way trained on original and enhanced MRI images. The haze-convolutional neural network (haze-CNN) approach is developed and employed on the original images for contrast enhancement. Next, transfer learning (TL) is utilized for training the two-way fine-tuned models and extracting feature vectors from the global average pooling layer. Then, using a multiset canonical correlation analysis (CCA) method, the features of both deep learning models are fused into a single feature matrix; this technique aims to enrich the feature information for better classification. Although the information was increased, the computational time also jumped. This issue is resolved using a hybrid feature optimization algorithm that chooses the best classification features. The experiments were done on two publicly available datasets, BraTs2018 and BraTs2019, and yielded accuracy rates of 94.8% and 95.7%, respectively. The proposed method is compared with several recent studies and outperforms them in accuracy. In addition, we analyze the performance of each intermediate step of the proposed approach and find that the selection technique strengthens the proposed framework.
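scikit-learn only implements the two-view case of CCA, but that is enough to illustrate the fusion step above: project the two deep feature sets into a shared correlated subspace and concatenate the projections. The number of components is an assumption, not the paper's setting.

```python
# Two-view CCA fusion of features from the two fine-tuned models.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_fuse(feats_a, feats_b, n_components=64):
    # n_components must not exceed the smaller feature dimension or the sample count.
    cca = CCA(n_components=n_components, max_iter=1000)
    za, zb = cca.fit_transform(feats_a, feats_b)     # correlated projections of each view
    return np.concatenate([za, zb], axis=1)          # fused matrix passed to feature selection
```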
Funding: Researchers Supporting Project Number (RSP2022R458), King Saud University, Riyadh, Saudi Arabia.
Abstract: Automated Facial Expression Recognition (FER) serves as the backbone of patient monitoring, security, and surveillance systems. Real-time FER is a challenging task due to the uncontrolled nature of the environment and the poor quality of input frames. In this paper, a novel FER framework has been proposed for patient monitoring. Preprocessing is performed using contrast-limited adaptive enhancement, and the dataset is balanced using augmentation. Two lightweight, efficient Convolutional Neural Network (CNN) models, MobileNetV2 and Neural Architecture Search Network Mobile (NasNetMobile), are trained, and feature vectors are extracted. The Whale Optimization Algorithm (WOA) is utilized to remove irrelevant features from these vectors. Finally, the optimized features are serially fused and passed to the classifier. A comprehensive set of experiments was carried out on the real-time image datasets FER-2013, MMA, and CK+ to report performance based on various metrics. Accuracy results show that the proposed model achieves 82.5% accuracy and performs better than state-of-the-art classification techniques in terms of accuracy. We would like to highlight that the proposed technique achieves this better accuracy while using 2.8 times fewer features.
Abstract: One exciting area within computer vision is classifying human activities, which has diverse applications like medical informatics, human-computer interaction, surveillance, and task monitoring systems. In the healthcare field, understanding and classifying patients' activities is crucial for providing doctors with essential information about medication reactions and diagnosis. While some research methods already exist that utilize machine learning and soft computational algorithms to recognize human activity from videos and images, more advanced computer vision techniques are still being explored. This paper introduces a straightforward and effective automated approach that involves five key steps: preprocessing, feature extraction, feature selection, feature fusion, and finally classification. To evaluate the proposed approach, two commonly used benchmark datasets, KTH and Weizmann, are employed for training, validation, and testing of ML classifiers. The study's findings show that the first and second datasets yielded remarkable accuracy rates of 99.94% and 99.80%, respectively. When compared to existing methods, our approach stands out in terms of sensitivity, accuracy, precision, and specificity evaluation metrics. In essence, this paper demonstrates a practical method for automatically classifying human activities using an optimal feature fusion and deep learning approach, promising results that could benefit various fields, particularly healthcare.
Abstract: Face anti-spoofing has received a lot of attention because it plays a role in strengthening the security of face recognition systems. Face recognition is commonly used for authentication in surveillance applications. However, attackers try to compromise these systems by using spoofing techniques, such as presenting photos or videos of users, to gain access to services or information. Many existing face anti-spoofing methods face difficulties when dealing with new scenarios, especially when there are variations in background, lighting, and other environmental factors. Recent advancements in deep learning with multi-modality methods have shown their effectiveness in face anti-spoofing, surpassing single-modal methods. However, these approaches often generate many features, which can lead to issues with data dimensionality. In this study, we introduce a multimodal deep fusion network for face anti-spoofing that incorporates cross-axial attention and deep reinforcement learning techniques. This network operates at three patch levels and analyzes images from three modalities (RGB, IR, and depth). Initially, our design includes an axial attention network (XANet) model that extracts deeply hidden features from the multimodal images. Further, we use a bidirectional fusion technique that attends to both directions to combine features from each modality effectively. We further improve feature optimization by using the Enhanced Pity Beetle Optimization (EPBO) algorithm, which selects features to address data dimensionality problems. Moreover, our proposed model employs a hybrid federated reinforcement learning (FDDRL) approach to detect and classify face anti-spoofing, achieving a more optimal tradeoff between detection rates and false positive rates. We evaluated the proposed approach on publicly available datasets, including CASIA-SURF and GREATFASD-S, and achieved 98.985% and 97.956% classification accuracy, respectively. In addition, the proposed method outperforms other state-of-the-art methods in terms of precision, recall, and F-measures. Overall, the developed methodology boosts the effectiveness of our model in detecting various types of spoofing attempts.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project, Grant/Award Number: PNURSP2023R333.
Abstract: In computer vision applications like surveillance and remote sensing, to mention a few, deep learning has had considerable success. Medical imaging still faces a number of difficulties, including intra-class similarity, a scarcity of training data, and poor-contrast skin lesions, notably in the case of skin cancer. An optimisation-aided deep learning-based system is proposed for accurate multi-class skin lesion identification. The sequential procedures of the proposed system start with preprocessing and end with categorisation. In the preprocessing step, a hybrid contrast enhancement technique is initially proposed for identifying lesions against healthy regions. Instead of flipping and rotating data, the outputs from the middle phases of the hybrid enhancement technique are employed for data augmentation in the next step. Next, two pre-trained deep learning models, MobileNetV2 and NasNet Mobile, are trained using deep transfer learning on the upgraded, enriched dataset. Later, a dual-threshold serial approach is employed to obtain and combine the features of both models. The next step is the variance-controlled Marine Predator methodology, which the authors propose as a superior optimisation method. The top features from the fused feature vector are classified using machine learning classifiers. The experimental strategy achieved an enhanced accuracy of 94.4% on the publicly available HAM10000 dataset. Additionally, the proposed framework is evaluated against current approaches, with remarkable results.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62176217), the Program from the Sichuan Provincial Science and Technology, China (Grant No. 2018RZ0081), and the Fundamental Research Funds of China West Normal University (Grant No. 17E063).
Abstract: Graph neural networks (GNNs) have demonstrated excellent performance in graph representation learning. However, as the volume of graph data grows, issues related to cost and efficiency become increasingly prominent. Graph distillation methods address this challenge by extracting a smaller, reduced graph, ensuring that GNNs trained on both the original and reduced graphs show similar performance. Existing methods, however, primarily optimize the feature matrix of the reduced graph and rely on correlation information from GNNs, while neglecting the original graph's structure and redundant nodes. This often results in a loss of critical information within the reduced graph. To overcome this limitation, we propose a graph distillation method guided by network symmetry. Specifically, we identify symmetric nodes with equivalent neighborhood structures and merge them into “super nodes”, thereby simplifying the network structure, reducing redundant parameter optimization, and enhancing training efficiency. At the same time, instead of relying on the original node features, we employ gradient descent to match optimal features that align with the original features, thus improving downstream task performance. Theoretically, our method guarantees that the reduced graph retains the key information present in the original graph. Extensive experiments demonstrate that our approach achieves significant improvements in graph distillation, exhibiting strong generalization capability and outperforming existing graph reduction methods.
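A minimal sketch of the symmetry-based grouping step, merging nodes with identical neighborhoods into super nodes, is shown below using networkx. It illustrates only the grouping and contraction, not the feature-matching or the full distillation procedure described above.

```python
# Group structurally equivalent nodes (identical neighbor sets) and contract each group.
import networkx as nx
from collections import defaultdict

def merge_symmetric_nodes(G):
    groups = defaultdict(list)
    for v in G.nodes:
        key = frozenset(G.neighbors(v))          # equivalent neighborhoods share the same key
        groups[key].append(v)
    H = G.copy()
    for members in groups.values():
        rep, *rest = members
        for v in rest:                           # contract each group into one "super node"
            H = nx.contracted_nodes(H, rep, v, self_loops=False)
    return H

# H = merge_symmetric_nodes(nx.karate_club_graph())
```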
Abstract: At present, salient object detection (SOD) has achieved considerable progress. However, the methods that perform well still face the issue of inadequate detection accuracy; for example, there are sometimes missed and false detections. Effectively optimizing features to capture key information and better integrating different levels of features to enhance their complementarity are two significant challenges in the domain of SOD. In response to these challenges, this study proposes a novel SOD method based on multi-strategy feature optimization. We propose the multi-size feature extraction module (MSFEM), which uses the attention mechanism, multi-level feature fusion, and the residual block to obtain finer features. This module provides robust support for the subsequent accurate detection of the salient object. In addition, we use two rounds of feature fusion and a feedback mechanism to optimize the features obtained by the MSFEM to improve detection accuracy. The first round of feature fusion is applied to integrate the features extracted by the MSFEM to obtain more refined features. Subsequently, the feedback mechanism and the second round of feature fusion are applied to refine the features, thereby providing a stronger foundation for accurately detecting salient objects. To improve the fusion effect, we propose the feature enhancement module (FEM) and the feature optimization module (FOM). The FEM integrates the upper and lower features with the optimized features obtained by the FOM to enhance feature complementarity. The FOM uses different receptive fields, the attention mechanism, and the residual block to more effectively capture key information. Experimental results demonstrate that our method outperforms 10 state-of-the-art SOD methods.
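The exact module designs belong to the paper, but a block combining the named ingredients of the FOM (different receptive fields, an attention gate, and a residual connection) could plausibly look like the PyTorch sketch below; it is illustrative only and not the authors' implementation.

```python
# Illustrative feature-optimization-style block: multi-receptive-field convolutions,
# a simple channel-attention gate, and a residual connection.
import torch
import torch.nn as nn

class FeatureOptBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)   # 3x3 receptive field
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)   # 5x5 receptive field
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = self.act(self.branch3(x) + self.branch5(x))   # combine receptive fields
        return x + feats * self.attn(feats)                   # attention-gated residual output
```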
Funding: Supported by the National Natural Science Foundation of China (No. 52005442) and the Technology Project of Zhejiang Huayun Information Technology Co., Ltd. (No. HYJT/JS-2020-004).
Abstract: Accurate short-term photovoltaic (PV) output forecasting is beneficial for increasing grid stability and enhancing the capacity for photovoltaic power absorption. Commonly used photovoltaic forecasting methods struggle to handle issues such as non-uniform lengths of time series data for power generation and meteorological conditions, overlapping photovoltaic characteristics, and nonlinear correlations. In response, an improved method that utilizes spectral clustering and dynamic time warping (DTW) for selecting similar days is proposed to optimize the dataset along the temporal dimension. Furthermore, XGBoost is employed for recursive feature selection. On this basis, to address the issue that single forecasting models excel at capturing different data characteristics and tend to exhibit significant prediction errors under adverse meteorological conditions, an improved forecasting model based on Stacking and weighted fusion is proposed to reduce the independent bias and variance of individual models and enhance predictive accuracy. Finally, experimental validation is carried out using real data from a photovoltaic power station in the Xiaoshan District of Hangzhou, China, demonstrating that the proposed method can still achieve accurate and robust forecasting results even under conditions of significant meteorological fluctuations.
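The recursive feature selection step can be prototyped with scikit-learn's RFE wrapped around an XGBoost regressor, as in the hedged sketch below; the feature matrix is a synthetic placeholder, and the similar-day selection via spectral clustering and DTW is not shown here.

```python
# Recursive feature elimination driven by an XGBoost regressor.
import numpy as np
from sklearn.feature_selection import RFE
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 12))                                   # placeholder meteorological/power features
y = 0.8 * X[:, 0] + 0.2 * X[:, 3] + 0.05 * rng.standard_normal(200)   # synthetic PV output

selector = RFE(XGBRegressor(n_estimators=100), n_features_to_select=5).fit(X, y)
print(np.flatnonzero(selector.support_))                    # indices of the retained features
```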
Funding: Under the auspices of the National Natural Science Foundation of China (No. 42101393, 41901375, 52274166), the Hebei Natural Science Foundation (No. D2022209005, D2023209008), the Central Guided Local Science and Technology Development Fund Project of Hebei Province (No. 236Z3305G, 246Z4201G), and the Key Research and Development Program of the Science and Technology Plan of Tangshan, China (No. 22150221J).
Abstract: Coastal wetlands are crucial for the ‘blue carbon sink’, significantly contributing to regulating climate change. This study utilized 160 soil samples, 35 remote sensing features, and 5 geo-climatic variables to accurately estimate the soil organic carbon stocks (SOCS) in the coastal wetlands of Tianjin and Hebei, China. To reduce data redundancy, simplify model complexity, and improve model interpretability, Pearson correlation analysis (PsCA), Boruta, and recursive feature elimination (RFE) were employed to optimize features. Combined with the optimized features, soil organic carbon density (SOCD) prediction models were constructed using the multivariate adaptive regression splines (MARS), extreme gradient boosting (XGBoost), and random forest (RF) algorithms and applied to predict the spatial distribution of SOCD and estimate the SOCS of different wetland types in 2020. The results show that: 1) different feature combinations have a significant influence on model performance; better prediction performance was attained by building models with RFE-based feature combinations, and RF achieved the best prediction accuracy (R² = 0.587, RMSE = 0.798 kg/m², MAE = 0.660 kg/m²); 2) optical features are more important than radar and geo-climatic features in the MARS, XGBoost, and RF algorithms; 3) the size of SOCS is related to SOCD and the area of each wetland type; aquaculture ponds have the highest SOCS, followed by marsh, salt pan, mudflat, and sand shore.
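On the modeling side, a random forest regressor evaluated with the same three metrics (R², RMSE, MAE) can be set up as below; the split ratio and forest size are assumptions, and X and y stand for the screened features and measured SOCD.

```python
# Random forest regression for SOCD with R^2, RMSE, and MAE evaluation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split

def evaluate_rf(X, y):
    # X: screened remote-sensing + geo-climatic features; y: measured SOCD (kg/m^2).
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    pred = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr).predict(X_te)
    return (r2_score(y_te, pred),
            np.sqrt(mean_squared_error(y_te, pred)),
            mean_absolute_error(y_te, pred))
```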
Funding: Supported by the National Natural Science Foundation of China (60774096) and the National High-Tech R&D Program of China (2008BAK49B05).
Abstract: Feature optimization is important to agricultural text mining. Usually, the vector space model is used to represent text documents. However, this basic approach still suffers from two drawbacks: the curse of dimensionality and the lack of semantic information. In this paper, a novel ontology-based feature optimization method for agricultural text is proposed. First, terms of the vector space model are mapped into concepts of an agricultural ontology, whose concept frequency weights are computed statistically from the term frequency weights; second, concept similarity weights are assigned to the concept features according to the structure of the agricultural ontology. By combining feature frequency weights and feature similarity weights based on the agricultural ontology, the dimensionality of the feature space can be reduced drastically. Moreover, semantic information can be incorporated into this method. The results show that this method yields a significant improvement in agricultural text clustering through feature optimization.
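The weighting scheme can be sketched as follows: term frequencies are aggregated into concept frequencies through a term-to-concept map, and each concept weight is blended with a structure-derived similarity weight. The mapping, the similarity values, and the blend factor beta are assumptions standing in for the agricultural ontology and the paper's exact combination rule.

```python
# Ontology-based concept features: map term frequencies to concepts, blend with similarity weights.
from collections import defaultdict

def concept_features(term_freqs, term2concept, concept_sim, beta=0.5):
    concept_freq = defaultdict(float)
    for term, freq in term_freqs.items():
        concept = term2concept.get(term)          # terms are mapped onto ontology concepts
        if concept is not None:
            concept_freq[concept] += freq
    # Combine the frequency weight with a structure-based similarity weight per concept.
    return {c: (1 - beta) * f + beta * concept_sim.get(c, 0.0)
            for c, f in concept_freq.items()}

# Example with hypothetical ontology entries:
# concept_features({"wheat": 3, "rust": 2},
#                  {"wheat": "Crop", "rust": "Disease"},
#                  {"Crop": 0.9, "Disease": 0.8})
```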
Funding: Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.
Abstract: In medical imaging, computer vision researchers are faced with a variety of features for verifying the authenticity of classifiers for an accurate diagnosis. In response to the coronavirus 2019 (COVID-19) pandemic, new testing procedures, medical treatments, and vaccines are being developed rapidly. One potential diagnostic tool is the reverse-transcription polymerase chain reaction (RT-PCR). RT-PCR, typically a time-consuming process, was less sensitive to COVID-19 recognition in the disease's early stages. Here we introduce an optimized deep learning (DL) scheme to distinguish COVID-19-infected patients from normal patients according to computed tomography (CT) scans. In the proposed method, contrast enhancement is used to improve the quality of the original images. A pretrained DenseNet-201 DL model is then trained using transfer learning. Two fully connected layers and an average pool are used for feature extraction. The extracted deep features are then optimized with a Firefly algorithm to select the most optimal learning features. Fusing the selected features is important to improving the accuracy of the approach; however, it directly affects the computational cost of the technique. In the proposed method, a new parallel high-index technique is used to fuse two optimal vectors; the outcome is then passed on to an extreme learning machine for final classification. Experiments were conducted on a collected database of patients using a 70:30 training-to-testing ratio. Our results indicated an average classification accuracy of 94.76% with the proposed approach. A comparison of the outcomes to several other DL models demonstrated the effectiveness of our DL method for classifying COVID-19 based on CT scans.
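The extreme learning machine used for the final decision admits a very compact implementation: a fixed random hidden layer followed by a closed-form least-squares output layer. The sketch below is generic (the hidden size and tanh activation are assumptions), not the paper's configuration.

```python
# A compact extreme learning machine classifier: random hidden layer + least-squares readout.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        # y is assumed to hold integer class labels 0..K-1.
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random hidden features
        Y = np.eye(int(y.max()) + 1)[y]                  # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y                # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```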