Accurate prediction of the remaining useful life (RUL) is crucial for the design and management of lithium-ion batteries. Although various machine learning models offer promising predictions, one critical but often overlooked challenge is their demand for considerable run-to-failure data for training. Collecting such training data requires prohibitive testing effort, as run-to-failure tests can last for years. Here, we propose a semi-supervised representation learning method that enhances prediction accuracy by learning from data without RUL labels. Our approach builds on a deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data, regardless of whether RUL labels exist. The approach is validated using three datasets collected from 34 batteries operating under various conditions, encompassing over 19,900 charge and discharge cycles. Our method achieves a root mean squared error (RMSE) within 25 cycles even when only 1/50 of the training dataset is labelled, a reduction of 48% compared with the conventional approach. We also demonstrate the method's robustness with varying numbers of labelled data and different weights assigned to the three decoder heads. Projecting the extracted features into a low-dimensional space reveals that our method effectively learns degradation features from unlabelled data. Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
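As a rough illustration of the encoder-with-three-decoder-heads idea described above, the following sketch (not the authors' implementation; layer sizes, loss weights, and the roles of the two label-free heads are assumptions) shows how a supervised RUL head and auxiliary heads that need no RUL labels can share one encoder:

```python
# Minimal sketch, assuming a flat feature vector per short-term cycle window.
import torch
import torch.nn as nn

class MultiHeadRULNet(nn.Module):
    def __init__(self, in_dim=32, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim), nn.ReLU())
        # Head 1: supervised RUL regression (needs labels).
        self.rul_head = nn.Linear(latent_dim, 1)
        # Heads 2 and 3: targets computable without RUL labels, e.g. input
        # reconstruction and a short-term health proxy (illustrative choices).
        self.recon_head = nn.Linear(latent_dim, in_dim)
        self.proxy_head = nn.Linear(latent_dim, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.rul_head(z), self.recon_head(z), self.proxy_head(z)

def semi_supervised_loss(model, x, rul=None, proxy=None, w=(1.0, 1.0, 1.0)):
    """RUL loss only where labels exist; the other heads use every sample."""
    rul_pred, recon, proxy_pred = model(x)
    loss = w[1] * nn.functional.mse_loss(recon, x)
    if proxy is not None:
        loss = loss + w[2] * nn.functional.mse_loss(proxy_pred.squeeze(-1), proxy)
    if rul is not None:
        loss = loss + w[0] * nn.functional.mse_loss(rul_pred.squeeze(-1), rul)
    return loss
```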
Semi-supervised learning (SSL) aims to improve performance by exploiting unlabeled data when labels are scarce. Conventional SSL studies typically assume closed environments, where important factors (e.g., labels, features, distributions) are consistent between labeled and unlabeled data. However, more practical tasks involve open environments, where these factors are inconsistent between labeled and unlabeled data. It has been reported that exploiting inconsistent unlabeled data causes severe performance degradation, sometimes even worse than the simple supervised learning baseline. Manually verifying the quality of unlabeled data is not desirable; therefore, it is important to study robust SSL with inconsistent unlabeled data in open environments. This paper briefly introduces some advances in this line of research, focusing on techniques concerning label, feature, and data-distribution inconsistency in SSL, and presents the evaluation benchmarks. Open research problems are also discussed for reference.
Large amounts of labeled data are usually needed to train deep neural networks in medical image studies, particularly for medical image classification. However, in semi-supervised medical image analysis, labeled data are very scarce due to patient privacy concerns. For researchers, obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical understanding. In addition, skin datasets are well suited to medical image classification studies because of the relationships and similarities among classes of skin lesions. In this paper, we propose a model called Coalition Sample Relation Consistency (CSRC), a consistency-based method that leverages Canonical Correlation Analysis (CCA) to capture the intrinsic relationships between samples. Whereas traditional consistency-based models focus only on the consistency of predictions, we additionally explore the similarity between features using CCA. We enforce feature-relation consistency on top of traditional models, encouraging the model to learn more meaningful information from unlabeled data. Finally, because cross-entropy loss is not well suited as the supervised loss for imbalanced datasets (i.e., ISIC 2017 and ISIC 2018), we improve the supervised loss to achieve better classification accuracy. Our study shows that this model performs better than many semi-supervised methods.
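To make the feature-relation-consistency idea concrete, here is a simplified sketch: the paper relates samples via CCA, whereas this stand-in (an assumption, not the CSRC method itself) uses a plain cosine-similarity matrix over a batch and penalizes disagreement between two models' relation structure on unlabeled data:

```python
# Simplified sketch of feature-relation consistency between two views/models.
import torch
import torch.nn.functional as F

def relation_matrix(feats):
    """Pairwise cosine similarities between samples in a batch (B x B)."""
    feats = F.normalize(feats, dim=1)
    return feats @ feats.t()

def feature_relation_consistency(student_feats, teacher_feats):
    """Penalize disagreement between the sample-relation structures."""
    return F.mse_loss(relation_matrix(student_feats),
                      relation_matrix(teacher_feats).detach())
```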
Machine learning techniques and a dataset of five wells from the Rawat oilfield in Sudan containing 93,925 samples per feature (seven well logs and one facies log) were used to classify four facies. Data preprocessing and preparation involve two processes: data cleaning and feature scaling. Several machine learning algorithms, including Linear Regression (LR), Decision Tree (DT), Support Vector Machine (SVM), Random Forest (RF), and Gradient Boosting (GB) for classification, were tested using different iterations and various combinations of features and parameters. The support vector radial kernel training model achieved an accuracy of 72.49% without grid search and 64.02% with grid search, while the blind-well test scores were 71.01% and 69.67%, respectively. The Decision Tree (DT) hyperparameter optimization model showed an accuracy of 64.15% for training and 67.45% for testing. In comparison, the Decision Tree coupled with grid search yielded better results, with a training score of 69.91% and a testing score of 67.89%. The model's validation was carried out using the blind-well validation approach, which achieved an accuracy of 69.81%. Three algorithms were used to generate the gradient-boosting model. During training, the Gradient Boosting classifier achieved an accuracy score of 71.57%, and during testing it achieved 69.89%. The Grid Search model achieved a higher accuracy score of 72.14% during testing. The Extreme Gradient Boosting model had the lowest accuracy score, with only 66.13% for training and 66.12% for testing. For validation, the Gradient Boosting (GB) classifier model achieved an accuracy score of 75.41% on the blind-well test, while Gradient Boosting with grid search achieved an accuracy score of 71.36%. The Enhanced Random Forest and Random Forest with Bagging algorithms were the most effective, with validation accuracies of 78.30% and 79.18%, respectively. However, the Random Forest and Random Forest with Grid Search models displayed significant variance between their training and testing scores, indicating the potential for overfitting. Random Forest (RF) and Gradient Boosting (GB) are highly effective for facies classification because they handle complex relationships and provide high predictive accuracy. The choice between the two depends on specific project requirements, including interpretability, computational resources, and the nature of the data.
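The workflow described above (scale the log features, grid-search a classifier, validate on a held-out "blind" well) can be sketched as follows; the column contents, grid values, and split are stand-ins, not the study's actual data or settings:

```python
# Hedged sketch of facies classification with grid search and blind-well validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 7))      # stand-in for seven well-log features
y = rng.integers(0, 4, size=1000)   # stand-in for four facies labels

X_train, X_blind, y_train, y_blind = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
                    cv=5)
grid.fit(scaler.transform(X_train), y_train)
print("blind-well accuracy:",
      accuracy_score(y_blind, grid.predict(scaler.transform(X_blind))))
```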
Hybrid precoding is considered a promising low-cost technique for millimeter-wave (mm-wave) massive Multi-Input Multi-Output (MIMO) systems. In this work, addressing time-varying propagation conditions, we propose an online hybrid beamforming scheme based on semi-supervised Incremental Learning (IL). Firstly, given the constant-modulus constraint on the analog beamformer and combiner, we propose a new broad-network-based structure for the hybrid beamforming design model. Compared with the existing network structure, the proposed structure achieves better transmission performance with lower complexity. Moreover, to further enhance the efficiency of IL, we combine a semi-supervised graph with IL and propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning, where only a few transmissions are required to calculate labels and all other unlabelled transmissions are also placed into the training data chunk. Unlike the existing single-by-single approach, in which transmissions that occur during the model update are not considered in the update, in the proposed method all transmissions, even those occurring during the model update, contribute to the update. Because the amount of unlabelled transmissions during the model update is very large and they also carry information, these unlabelled channel data can enhance the prediction performance to some extent. Simulation results demonstrate that the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach. Besides, we prove that the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity is also lower.
Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label to best improve performance while limiting the number of new labels. "Model Change" active learning quantifies the change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
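The graph-based SSL backbone mentioned above can be sketched as follows: build a similarity graph, form its Laplacian, and keep only the smallest eigenpairs so later computations stay cheap. This is only the spectrum-truncation step, not the Model Change acquisition function; the neighbour count, kernel width, and number of retained eigenvectors are illustrative assumptions.

```python
# Minimal sketch of a truncated graph-Laplacian spectrum for graph-based SSL.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def truncated_laplacian_spectrum(X, n_neighbors=10, n_eig=50, width=1.0):
    W = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W.data = np.exp(-(W.data ** 2) / (2 * width ** 2))   # Gaussian edge weights
    W = 0.5 * (W + W.T)                                    # symmetrize
    d = np.asarray(W.sum(axis=1)).ravel()
    L = diags(d) - W                                       # unnormalized Laplacian
    # Smallest eigenpairs; for large graphs a shift-invert solver is preferable.
    vals, vecs = eigsh(L, k=n_eig, which="SM")
    return vals, vecs
```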
With the rapid development of Internet of Things (IoT) technology, IoT systems have been widely applied in healthcare, transportation, the home, and other fields. However, with the continuous expansion in scale and increasing complexity of IoT systems, their stability and security issues have become increasingly prominent. Thus, it is crucial to detect anomalies in the IoT time series collected from various sensors. Recently, deep learning models have been leveraged for IoT anomaly detection. However, owing to the challenges associated with data labeling, most IoT anomaly detection methods resort to unsupervised learning techniques. Nevertheless, the absence of accurate abnormal information in unsupervised learning methods limits their performance. To address these problems, we propose AS-GCN-MTM, an adaptive structural Graph Convolutional Network (GCN)-based framework using a mean-teacher mechanism for anomaly identification. It performs better than unsupervised methods while using only a small amount of labeled data. The mean teacher is an effective semi-supervised learning method that uses unlabeled data in training to improve the generalization ability and performance of the model. However, the dependencies between data are often unknown in time series data. To solve this problem, we design a graph-structure adaptive learning layer based on neural networks, which can automatically learn the graph structure from time series data. It not only better captures the relationships between nodes but also enhances the model's performance by augmenting key data. Experiments have demonstrated that our method improves on the baseline model with the highest F1 value by 10.4%, 36.1%, and 5.6%, respectively, on three real datasets with a 10% data labeling rate.
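The mean-teacher mechanism referenced above is sketched below: the teacher is an exponential moving average (EMA) of the student, and unlabeled data contribute through a consistency loss. The GCN backbone and the adaptive graph-structure layer are abstracted into a generic `model`, and the EMA rate and loss weight are assumptions.

```python
# Hedged sketch of mean-teacher training with a consistency loss on unlabeled data.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1 - alpha)

def mean_teacher_loss(student, teacher, x_labeled, y_labeled, x_unlabeled, lam=1.0):
    sup = F.cross_entropy(student(x_labeled), y_labeled)
    with torch.no_grad():
        target = teacher(x_unlabeled).softmax(dim=1)
    cons = F.mse_loss(student(x_unlabeled).softmax(dim=1), target)
    return sup + lam * cons
```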
Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, systematic comparisons of supervised classifiers for Windows malware detection are lacking. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study bridges that gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives are to investigate the performance of various classifiers, such as Gaussian Naïve Bayes, K Nearest Neighbors (KNN), the Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; to evaluate the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; to identify the strengths and limitations of the different classifiers, providing insights for cybersecurity practitioners and researchers; and to offer recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset's characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training uses the various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study's outcome is a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware, and insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
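The comparison workflow described above can be sketched as follows on synthetic stand-in data; the real study's features, preprocessing pipeline, and dataset are not reproduced here:

```python
# Hedged sketch: compare the four named classifiers on a synthetic binary task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

classifiers = {
    "GaussianNB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SGDC": SGDClassifier(random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, y_pred, average="binary")
    print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} P={p:.3f} R={r:.3f} F1={f1:.3f}")
```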
The aim of this paper is to broaden the application of the Stochastic Configuration Network (SCN) to the semi-supervised domain by utilizing the unlabeled data that is common in daily life. This can enhance the classification accuracy of decentralized SCN algorithms while effectively protecting user privacy. To this end, we propose a decentralized semi-supervised learning algorithm for SCN, called DMT-SCN, which introduces teacher and student models and combines them with the idea of consistency regularization to improve the response speed of model iterations. To reduce the possible negative impact of unsupervised data on the model, we deliberately change the way noise is added to the unlabeled data. Simulation results show that the algorithm can effectively utilize unlabeled data to improve the classification accuracy of SCN training and is robust under different ground simulation environments.
Deep Learning (DL) is a powerful tool that has seen tremendous success in areas such as Computer Vision, Speech Recognition, and Natural Language Processing. Since Automated Modulation Classification (AMC) is an important part of Cognitive Radio Networks, we explore its potential for solving the signal modulation recognition problem. It cannot be overlooked that DL models are complex and thus prone to over-fitting. DL models require a great deal of training data to combat over-fitting, but manually adding high-quality labels to training data is not always cheap or accessible, especially in real-time systems, which may encounter unprecedented data. Semi-supervised learning is a way to exploit unlabeled data effectively to reduce over-fitting in DL. In this paper, we extend Generative Adversarial Networks (GANs) to semi-supervised learning and show that they can be used to create a more data-efficient classifier.
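The semi-supervised GAN idea can be sketched as a discriminator that outputs the K modulation classes plus one extra "fake" class, so labeled, unlabeled, and generated samples all contribute to training. The loss below follows the common K+1 formulation and may differ in detail from the paper; network architectures are omitted.

```python
# Hedged sketch of a semi-supervised GAN discriminator loss (K+1 classes).
import torch
import torch.nn.functional as F

def ssgan_discriminator_loss(disc, x_labeled, y_labeled, x_unlabeled, x_fake, num_classes):
    fake_class = num_classes  # index of the extra "generated" class
    # Labeled data: ordinary cross-entropy over the K real classes.
    sup = F.cross_entropy(disc(x_labeled), y_labeled)
    # Unlabeled data: should be classified as "not fake" (some real class).
    logits_u = disc(x_unlabeled)
    p_fake_u = F.softmax(logits_u, dim=1)[:, fake_class]
    unsup_real = -torch.log(1 - p_fake_u + 1e-8).mean()
    # Generated data: should be classified as the fake class.
    fake_targets = torch.full((x_fake.size(0),), fake_class,
                              dtype=torch.long, device=x_fake.device)
    unsup_fake = F.cross_entropy(disc(x_fake), fake_targets)
    return sup + unsup_real + unsup_fake
```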
Through semi-supervised learning and knowledge inheritance, a novel Takagi-Sugeno-Kang (TSK) fuzzy system framework is proposed for epilepsy data classification in this study. The new method is based on the maximum mean discrepancy (MMD) method and the TSK fuzzy system as the basic model for classifying epilepsy data. First, for medical data, the interpretability of TSK fuzzy systems ensures that the prediction results are traceable and safe. Second, in view of the deviation in data distribution between the real source domain and the target domain, MMD is used to measure the distance between different data distributions. The objective function is constructed according to the MMD distance, and the distribution distance between different datasets is minimized to find their similar characteristics. We introduce semi-supervised learning to further explore the relationships within the data. Based on the MMD method, a semi-supervised learning (SSL)-MMD method is constructed by using pseudo-labels to align the data distributions of the same category. In addition, the idea of knowledge dissemination is used to learn pseudo-labels as additional data features. Finally, for epilepsy classification, the cross-domain TSK fuzzy system uses the cross-entropy function as the objective function and adopts the back-propagation strategy to optimize the parameters. The experimental results show that the new method can process complex epilepsy data and identify whether patients have epilepsy.
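The MMD quantity that the above objective minimizes can be sketched with the standard (biased) Gaussian-kernel estimate; the kernel bandwidth is an illustrative assumption and the TSK-specific objective is not shown:

```python
# Minimal sketch of the squared maximum mean discrepancy between two feature sets.
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of squared MMD between source- and target-domain features."""
    k_ss = gaussian_kernel(source, source, gamma).mean()
    k_tt = gaussian_kernel(target, target, gamma).mean()
    k_st = gaussian_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2 * k_st
```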
Malaria is a lethal disease responsible for thousands of deaths worldwide every year. Manual methods of malaria diagnosis are time-consuming and require a great deal of human expertise and effort. Computer-based automated diagnosis of diseases is progressively becoming popular. Although deep learning models show high performance in the medical field, they demand a large volume of data for training, which is hard to acquire for medical problems. Similarly, labeling of medical images can be done only with the help of medical experts. Several recent studies have utilized deep learning models to develop efficient malaria diagnostic systems and showed promising results. However, the most common problem with these models is that they need a large amount of data for training. This paper presents a computer-aided malaria diagnosis system that combines a semi-supervised generative adversarial network and transfer learning. The proposed model is trained in a semi-supervised manner and requires less training data than conventional deep learning models. The performance of the proposed model is evaluated on a publicly available dataset of blood smear images (with malaria-infected and normal classes), achieving a classification accuracy of 96.6%.
Deep learning significantly improves the accuracy of remote sensing image scene classification, benefiting from large-scale datasets. However, annotating remote sensing images is time-consuming and even tough for experts. Deep neural networks trained using a few labeled samples usually generalize poorly to new unseen images. In this paper, we propose a semi-supervised approach for remote sensing image scene classification based on prototype-based consistency, exploring massive unlabeled images. To this end, we first propose a feature enhancement module to extract discriminative features, achieved by focusing the model on the foreground areas. Then, a prototype-based classifier is introduced to the framework and used to acquire consistent feature representations. We conduct a series of experiments on NWPU-RESISC45 and the Aerial Image Dataset (AID). Our method improves on the State-Of-The-Art (SOTA) method in accuracy from 92.03% to 93.08% on NWPU-RESISC45 and from 94.25% to 95.24% on AID.
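A prototype-based classifier of the kind referenced above can be sketched as follows: class prototypes are normalized mean features of labeled samples, and logits come from cosine similarity to each prototype. The feature enhancement module and the consistency training are not shown, and the temperature is an assumption.

```python
# Hedged sketch of a prototype-based classifier over extracted features.
import torch
import torch.nn.functional as F

def build_prototypes(features, labels, num_classes):
    """Mean feature per class; assumes every class appears in the labeled set."""
    protos = torch.stack([features[labels == c].mean(dim=0) for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def prototype_logits(features, prototypes, temperature=0.1):
    """Cosine similarity to each prototype, scaled by a temperature."""
    return F.normalize(features, dim=1) @ prototypes.t() / temperature
```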
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) the segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. To be more precise, SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. On two openly available datasets, extensive trials show that our proposed SADT performs much better than the state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT was able to acquire a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
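Bi-directional copy-paste between a labeled and an unlabeled image can be sketched as below: a rectangular mask pastes an unlabeled crop into the labeled image and vice versa, and the label and pseudo-label maps are mixed with the same mask. The mask shape and size ratio are illustrative assumptions, and the paper's progressive high-entropy filtering is not shown.

```python
# Hedged sketch of bi-directional copy-paste for semi-supervised segmentation.
import torch

def rect_mask(h, w, ratio=0.5):
    """Random rectangular mask covering roughly ratio*ratio of the image."""
    mh, mw = int(h * ratio), int(w * ratio)
    top = torch.randint(0, h - mh + 1, (1,)).item()
    left = torch.randint(0, w - mw + 1, (1,)).item()
    mask = torch.zeros(h, w)
    mask[top:top + mh, left:left + mw] = 1.0
    return mask

def bidirectional_copy_paste(img_l, lbl_l, img_u, pseudo_u):
    """Images are (C, H, W); label maps are assumed float/one-hot (H, W)."""
    mask = rect_mask(img_l.shape[-2], img_l.shape[-1])
    mixed_in = img_l * (1 - mask) + img_u * mask    # unlabeled pasted into labeled
    mixed_out = img_u * (1 - mask) + img_l * mask   # labeled pasted into unlabeled
    sup_in = lbl_l * (1 - mask) + pseudo_u * mask
    sup_out = pseudo_u * (1 - mask) + lbl_l * mask
    return (mixed_in, sup_in), (mixed_out, sup_out)
```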
Ensemble learning, a pivotal branch of machine learning, amalgamates multiple base models to enhance the overarching performance of predictive models, capitalising on the diversity and collective wisdom of the ensemble to surpass individual models and mitigate overfitting. In this review, a four-layer research framework is established for ensemble learning, offering a comprehensive and structured review from the bottom up. Firstly, the survey introduces fundamental ensemble learning techniques, including bagging, boosting, and stacking, while also exploring the ensemble's diversity. Then, deep ensemble learning and semi-supervised ensemble learning are studied in detail. Furthermore, the use of ensemble learning techniques to navigate challenging datasets, such as imbalanced and high-dimensional data, is discussed. The application of ensemble learning techniques across various research domains, including healthcare, transportation, finance, manufacturing, and the Internet, is also examined. The survey concludes by discussing challenges intrinsic to ensemble learning.
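For reference, the three basic schemes named above (bagging, boosting, stacking) can be sketched on a toy dataset; the base learners and settings are illustrative, not a recommendation from the survey:

```python
# Minimal sketch of bagging, boosting, and stacking ensembles.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
ensembles = {
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    "stacking": StackingClassifier(
        estimators=[("dt", DecisionTreeClassifier()), ("svc", SVC(probability=True))],
        final_estimator=LogisticRegression()),
}
for name, model in ensembles.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```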
A literature review of AI applications in the field of railway safety shows that the implemented approaches mainly concern the operational, maintenance, and feedback phases following railway incidents or accidents. These approaches exploit railway safety data once the transport system has received authorization for commissioning. However, railway standards and regulations require the development of a safety management system (SMS) from the specification and design phases of the railway system. This article proposes a new AI approach for analyzing and assessing safety from the specification and design phases of the railway system, with a view to improving the development of the SMS. Unlike some learning methods, the proposed approach, which is dedicated in particular to safety assessment bodies, is based on semi-supervised learning carried out in close collaboration with safety experts, who contributed to the development of a database of potential accident scenarios (a learning-example database) relating to the risk of rail collision. The proposed decision support is based on an expert system whose knowledge base is automatically generated by inductive learning in the form of association rules (a rule base), and whose main objective is to suggest to the safety expert possible hazards not considered during the development of the SMS, in order to complete the initial hazard register.
Background: In the field of genetic diagnostics, DNA sequencing is an important tool because the depth and complexity of this field have major implications for the genetic architectures of diseases and the identification of risk factors associated with genetic disorders. Methods: Our study introduces a novel two-tiered analytical framework to raise the precision and reliability of genetic data interpretation. It begins by extracting and analyzing salient features from DNA sequences through a CNN-based feature analysis, taking advantage of the power of Convolutional Neural Networks (CNNs) to capture complex patterns and minute mutations in genetic data. The study then applies a collection of machine learning classifiers combined through a voting mechanism, which synergistically joins the predictions made by multiple classifiers to generate comprehensive and well-balanced interpretations of the genetic data. Results: The method was tested in an empirical analysis on a dataset of DNA sequence variants from patients affected by breast cancer, juxtaposed with a control group composed of healthy people. The integration of CNNs with a voting-based ensemble of classifiers returned outstanding outcomes, with accuracy, precision, recall, and F1-score all reaching 0.88, outperforming previous models. Conclusions: This dual accomplishment underlines the transformative potential of integrating deep learning techniques with ensemble machine learning to provide real added value for genetic diagnostics and prognostics. These results set a new benchmark in the accuracy of disease diagnosis through DNA sequencing and motivate future studies on improved personalized medicine and healthcare approaches with precise genetic information.
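The second tier described above, a voting ensemble over extracted features, can be sketched as follows; the features here are random stand-ins for CNN outputs, and the classifier choices and settings are illustrative assumptions:

```python
# Hedged sketch of a voting ensemble over CNN-style feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=64, random_state=0)  # stand-in features
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    voting="soft")  # soft voting averages the classifiers' predicted probabilities
print("cv accuracy:", cross_val_score(vote, X, y, cv=5).mean().round(3))
```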
Semi-supervised clustering techniques attempt to improve clustering accuracy by utilizing a limited number of labeled data for guidance, effectively integrating prior knowledge through pre-labeled data. While semi-supervised fuzzy clustering (SSFC) methods leverage limited labeled data to enhance accuracy, they remain highly susceptible to inappropriate or mislabeled prior knowledge, especially in noisy or overlapping datasets where cluster boundaries are ambiguous. To enhance the effectiveness of clustering algorithms, it is essential to leverage labeled data while ensuring the safety of the prior knowledge. Existing solutions, such as the Trusted Safe Semi-Supervised Fuzzy Clustering Method (TS3FCM), struggle with random centroid initialization, fixed neighbor-radius formulas, and handling outliers or noise where clusters overlap. A new framework called Active Safe Semi-Supervised Fuzzy Clustering with Pairwise Constraints Based on Cluster Boundary (AS3FCPC) is proposed in this paper to address these problems by combining pairwise constraints and active learning. AS3FCPC uses active learning to query only the most informative data instances close to the cluster boundaries, and it uses pairwise constraints to enforce the cluster structure, making the method more accurate and robust. Extensive test results on diverse datasets, including challenging noisy and overlapping scenarios, demonstrate that AS3FCPC consistently outperforms state-of-the-art methods such as TS3FCM and other baselines, especially when the data are noisy and overlapping. This improvement underscores AS3FCPC's potential for reliable and accurate semi-supervised fuzzy clustering in complex, real-world applications, particularly through its handling of mislabeled data and ambiguous cluster boundaries.
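For context, the standard fuzzy c-means updates that semi-supervised fuzzy clustering methods such as the one above build on can be sketched as follows; the pairwise constraints, active queries, and safety mechanisms of AS3FCPC are not shown:

```python
# Minimal sketch of one alternating fuzzy c-means update (memberships, then centroids).
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-9):
    """X: (N, D) data; centers: (C, D) current centroids; m: fuzzifier."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + eps
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)          # membership update
    Um = U ** m
    centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # centroid update
    return U, centers
```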
Dementia is a neurological disorder that affects the brain and its functioning, and women experience its effects more than men do. Preventive care often requires non-invasive and rapid tests, yet conventional diagnostic techniques are time-consuming and invasive. One of the most effective ways to diagnose dementia is by analyzing a patient's speech, which is cheap and does not require surgery. This research aims to determine the effectiveness of deep learning (DL) and machine learning (ML) structures in diagnosing dementia based on women's speech patterns. The study analyzes data drawn from the Pitt Corpus, which contains 298 dementia files and 238 control files from the DementiaBank database. Deep learning models and SVM classifiers were used to analyze the available audio samples in the dataset. Our methodology used two approaches for the classification task: a fused DL-ML model and a single DL model. The deep learning model achieved an accuracy of 99.99% with an F1 score of 0.9998, a precision of 0.9997, and a recall of 0.9998. The proposed DL-ML fusion model was equally impressive, with an accuracy of 99.99%, an F1 score of 0.9995, a precision of 0.9998, and a recall of 0.9997. The study also shows how deep learning and machine learning models can be applied to dementia detection from speech with high accuracy and low computational complexity. This research therefore concludes by showing the possibility of using speech-based dementia detection as a potentially helpful early diagnosis mode. For further enhanced model performance and better generalization, future studies may explore real-time applications and the inclusion of other components of speech.
Active semi-supervised fuzzy clustering integrates fuzzy clustering techniques with limited labeled data, guided by active learning, to enhance classification accuracy, particularly in complex and ambiguous datasets. Although several active semi-supervised fuzzy clustering methods have been developed previously, they typically face significant limitations, including high computational complexity, sensitivity to initial cluster centroids, and difficulties in accurately managing boundary clusters where data points often overlap among multiple clusters. This study introduces a novel Active Semi-Supervised Fuzzy Clustering algorithm specifically designed to identify, analyze, and correct misclassified boundary elements. By strategically utilizing labeled data through active learning, our method improves the robustness and precision of cluster boundary assignments. Extensive experimental evaluations conducted on three types of datasets, including benchmark UCI datasets, synthetic data with controlled boundary overlap, and satellite imagery, demonstrate that our proposed approach achieves superior performance in terms of clustering accuracy and robustness compared to existing active semi-supervised fuzzy clustering methods. The results confirm the effectiveness and practicality of our method in handling real-world scenarios where precise cluster boundaries are critical.
基金supported by the National Natural Science Foundation of China(No.52207229)the Key Research and Development Program of Ningxia Hui Autonomous Region of China(No.2024BEE02003)+1 种基金the financial support from the AEGiS Research Grant 2024,University of Wollongong(No.R6254)the financial support from the China Scholarship Council(No.202207550010).
文摘Accurate prediction of the remaining useful life(RUL)is crucial for the design and management of lithium-ion batteries.Although various machine learning models offer promising predictions,one critical but often overlooked challenge is their demand for considerable run-to-failure data for training.Collection of such training data leads to prohibitive testing efforts as the run-to-failure tests can last for years.Here,we propose a semi-supervised representation learning method to enhance prediction accuracy by learning from data without RUL labels.Our approach builds on a sophisticated deep neural network that comprises an encoder and three decoder heads to extract time-dependent representation features from short-term battery operating data regardless of the existence of RUL labels.The approach is validated using three datasets collected from 34 batteries operating under various conditions,encompassing over 19,900 charge and discharge cycles.Our method achieves a root mean squared error(RMSE)within 25 cycles,even when only 1/50 of the training dataset is labelled,representing a reduction of 48%compared to the conventional approach.We also demonstrate the method's robustness with varying numbers of labelled data and different weights assigned to the three decoder heads.The projection of extracted features in low space reveals that our method effectively learns degradation features from unlabelled data.Our approach highlights the promise of utilising semi-supervised learning to reduce the data demand for reliability monitoring of energy devices.
基金supported by the Key Program of Jiangsu Science Foundation(BK20243012)the National Natural Science Foundation of China(NSFC)(Grant Nos.62306133,62176118).
文摘Semi-supervised learning(SSL)aims to improve performance by exploiting unlabeled data when labels are scarce.Conventional SSL studies typically assume close environments where important factors(e.g.,label,feature,distribution)between labeled and unlabeled data are consistent.However,more practical tasks involve open environments where important factors between labeled and unlabeled data are inconsistent.It has been reported that exploiting inconsistent unlabeled data causes severe performance degradation,even worse than the simple supervised learning baseline.Manually verifying the quality of unlabeled data is not desirable,therefore,it is important to study robust SSL with inconsistent unlabeled data in open environments.This paper briefly introduces some advances in this line of research,focusing on techniques concerning label,feature,and data distribution inconsistency in SSL,and presents the evaluation benchmarks.Open research problems are also discussed for reference purposes.
基金sponsored by the National Natural Science Foundation of China Grant No.62271302the Shanghai Municipal Natural Science Foundation Grant 20ZR1423500.
文摘Large amounts of labeled data are usually needed for training deep neural networks in medical image studies,particularly in medical image classification.However,in the field of semi-supervised medical image analysis,labeled data is very scarce due to patient privacy concerns.For researchers,obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical understanding.In addition,skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and the inter-class similarities of skin lesions.In this paper,we propose a model called Coalition Sample Relation Consistency(CSRC),a consistency-based method that leverages Canonical Correlation Analysis(CCA)to capture the intrinsic relationships between samples.Considering that traditional consistency-based models only focus on the consistency of prediction,we additionally explore the similarity between features by using CCA.We enforce feature relation consistency based on traditional models,encouraging the model to learn more meaningful information from unlabeled data.Finally,considering that cross-entropy loss is not as suitable as the supervised loss when studying with imbalanced datasets(i.e.,ISIC 2017 and ISIC 2018),we improve the supervised loss to achieve better classification accuracy.Our study shows that this model performs better than many semi-supervised methods.
文摘Machine learning techniques and a dataset of five wells from the Rawat oilfield in Sudan containing 93,925 samples per feature(seven well logs and one facies log) were used to classify four facies. Data preprocessing and preparation involve two processes: data cleaning and feature scaling. Several machine learning algorithms, including Linear Regression(LR), Decision Tree(DT), Support Vector Machine(SVM),Random Forest(RF), and Gradient Boosting(GB) for classification, were tested using different iterations and various combinations of features and parameters. The support vector radial kernel training model achieved an accuracy of 72.49% without grid search and 64.02% with grid search, while the blind-well test scores were 71.01% and 69.67%, respectively. The Decision Tree(DT) Hyperparameter Optimization model showed an accuracy of 64.15% for training and 67.45% for testing. In comparison, the Decision Tree coupled with grid search yielded better results, with a training score of 69.91% and a testing score of67.89%. The model's validation was carried out using the blind well validation approach, which achieved an accuracy of 69.81%. Three algorithms were used to generate the gradient-boosting model. During training, the Gradient Boosting classifier achieved an accuracy score of 71.57%, and during testing, it achieved 69.89%. The Grid Search model achieved a higher accuracy score of 72.14% during testing. The Extreme Gradient Boosting model had the lowest accuracy score, with only 66.13% for training and66.12% for testing. For validation, the Gradient Boosting(GB) classifier model achieved an accuracy score of 75.41% on the blind well test, while the Gradient Boosting with Grid Search achieved an accuracy score of 71.36%. The Enhanced Random Forest and Random Forest with Bagging algorithms were the most effective, with validation accuracies of 78.30% and 79.18%, respectively. However, the Random Forest and Random Forest with Grid Search models displayed significant variance between their training and testing scores, indicating the potential for overfitting. Random Forest(RF) and Gradient Boosting(GB) are highly effective for facies classification because they handle complex relationships and provide high predictive accuracy. The choice between the two depends on specific project requirements, including interpretability, computational resources, and data nature.
基金supported by the National Science Foundation of China under Grant No.62101467.
文摘Hybrid precoding is considered as a promising low-cost technique for millimeter wave(mm-wave)massive Multi-Input Multi-Output(MIMO)systems.In this work,referring to the time-varying propagation circumstances,with semi-supervised Incremental Learning(IL),we propose an online hybrid beamforming scheme.Firstly,given the constraint of constant modulus on analog beamformer and combiner,we propose a new broadnetwork-based structure for the design model of hybrid beamforming.Compared with the existing network structure,the proposed network structure can achieve better transmission performance and lower complexity.Moreover,to enhance the efficiency of IL further,by combining the semi-supervised graph with IL,we propose a hybrid beamforming scheme based on chunk-by-chunk semi-supervised learning,where only few transmissions are required to calculate the label and all other unlabelled transmissions would also be put into a training data chunk.Unlike the existing single-by-single approach where transmissions during the model update are not taken into the consideration of model update,all transmissions,even the ones during the model update,would make contributions to model update in the proposed method.During the model update,the amount of unlabelled transmissions is very large and they also carry some information,the prediction performance can be enhanced to some extent by these unlabelled channel data.Simulation results demonstrate the spectral efficiency of the proposed method outperforms that of the existing single-by-single approach.Besides,we prove the general complexity of the proposed method is lower than that of the existing approach and give the condition under which its absolute complexity outperforms that of the existing approach.
基金supported by the DOD National Defense Science and Engineering Graduate(NDSEG)Research Fellowshipsupported by the NGA under Contract No.HM04762110003.
文摘Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier.A challenge is to identify which points to label to best improve performance while limiting the number of new labels."Model Change"active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s).We pair this idea with graph-based semi-supervised learning(SSL)methods,that use the spectrum of the graph Laplacian matrix,which can be truncated to avoid prohibitively large computational and storage costs.We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.We show a variety of multiclass examples that illustrate improved performance over prior state-of-art.
基金This research is partially supported by the National Natural Science Foundation of China under Grant No.62376043Science and Technology Program of Sichuan Province under Grant Nos.2020JDRC0067,2023JDRC0087,and 24NSFTD0025.
文摘With the rapid development of Internet of Things(IoT)technology,IoT systems have been widely applied in health-care,transportation,home,and other fields.However,with the continuous expansion of the scale and increasing complexity of IoT systems,the stability and security issues of IoT systems have become increasingly prominent.Thus,it is crucial to detect anomalies in the collected IoT time series from various sensors.Recently,deep learning models have been leveraged for IoT anomaly detection.However,owing to the challenges associated with data labeling,most IoT anomaly detection methods resort to unsupervised learning techniques.Nevertheless,the absence of accurate abnormal information in unsupervised learning methods limits their performance.To address these problems,we propose AS-GCN-MTM,an adaptive structural Graph Convolutional Networks(GCN)-based framework using a mean-teacher mechanism(AS-GCN-MTM)for anomaly identification.It performs better than unsupervised methods using only a small amount of labeled data.Mean Teachers is an effective semi-supervised learning method that utilizes unlabeled data for training to improve the generalization ability and performance of the model.However,the dependencies between data are often unknown in time series data.To solve this problem,we designed a graph structure adaptive learning layer based on neural networks,which can automatically learn the graph structure from time series data.It not only better captures the relationships between nodes but also enhances the model’s performance by augmenting key data.Experiments have demonstrated that our method improves the baseline model with the highest F1 value by 10.4%,36.1%,and 5.6%,respectively,on three real datasets with a 10%data labeling rate.
基金This researchwork is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2024R411),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Malware attacks on Windows machines pose significant cybersecurity threats,necessitating effective detection and prevention mechanisms.Supervised machine learning classifiers have emerged as promising tools for malware detection.However,there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection.Addressing this gap can provide valuable insights for enhancing cybersecurity strategies.While numerous studies have explored malware detection using machine learning techniques,there is a lack of systematic comparison of supervised classifiers for Windows malware detection.Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures.This study aims to bridge the research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems.The objectives include Investigating the performance of various classifiers,such as Gaussian Naïve Bayes,K Nearest Neighbors(KNN),Stochastic Gradient Descent Classifier(SGDC),and Decision Tree,in detecting Windows malware.Evaluating the accuracy,efficiency,and suitability of each classifier for real-world malware detection scenarios.Identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers.Offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence.The study employs a structured methodology consisting of several phases:exploratory data analysis,data preprocessing,model training,and evaluation.Exploratory data analysis involves understanding the dataset’s characteristics and identifying preprocessing requirements.Data preprocessing includes cleaning,feature encoding,dimensionality reduction,and optimization to prepare the data for training.Model training utilizes various supervised classifiers,and their performance is evaluated using metrics such as accuracy,precision,recall,and F1 score.The study’s outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection.Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware.Additionally,insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses.Overall,this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
文摘The aim of this paper is to broaden the application of Stochastic Configuration Network (SCN) in the semi-supervised domain by utilizing common unlabeled data in daily life. It can enhance the classification accuracy of decentralized SCN algorithms while effectively protecting user privacy. To this end, we propose a decentralized semi-supervised learning algorithm for SCN, called DMT-SCN, which introduces teacher and student models by combining the idea of consistency regularization to improve the response speed of model iterations. In order to reduce the possible negative impact of unsupervised data on the model, we purposely change the way of adding noise to the unlabeled data. Simulation results show that the algorithm can effectively utilize unlabeled data to improve the classification accuracy of SCN training and is robust under different ground simulation environments.
基金This work is supported by the National Natural Science Foundation of China(Nos.61771154,61603239,61772454,6171101570).
文摘Deep Learning(DL)is such a powerful tool that we have seen tremendous success in areas such as Computer Vision,Speech Recognition,and Natural Language Processing.Since Automated Modulation Classification(AMC)is an important part in Cognitive Radio Networks,we try to explore its potential in solving signal modulation recognition problem.It cannot be overlooked that DL model is a complex model,thus making them prone to over-fitting.DL model requires many training data to combat with over-fitting,but adding high quality labels to training data manually is not always cheap and accessible,especially in real-time system,which may counter unprecedented data in dataset.Semi-supervised Learning is a way to exploit unlabeled data effectively to reduce over-fitting in DL.In this paper,we extend Generative Adversarial Networks(GANs)to the semi-supervised learning will show it is a method can be used to create a more dataefficient classifier.
基金supported by the Fifth Key Project of Jiangsu Vocational Education Teaching Reform Research under Grant ZZZ13in part by the Science and Technology Project of Changzhou City under Grant CE20215032.
文摘Through semi-supervised learning and knowledge inheritance,a novel Takagi-Sugeno-Kang(TSK)fuzzy system framework is proposed for epilepsy data classification in this study.The new method is based on the maximum mean discrepancy(MMD)method and TSK fuzzy system,as a basic model for the classification of epilepsy data.First,formedical data,the interpretability of TSK fuzzy systems can ensure that the prediction results are traceable and safe.Second,in view of the deviation in the data distribution between the real source domain and the target domain,MMD is used to measure the distance between different data distributions.The objective function is constructed according to the MMD distance,and the distribution distance of different datasets is minimized to find the similar characteristics of different datasets.We introduce semi-supervised learning to further explore the relationship between data.Based on the MMD method,a semi-supervised learning(SSL)-MMD method is constructed by using pseudo-tags to realize the data distribution alignment of the same category.In addition,the idea of knowledge dissemination is used to learn pseudo-tags as additional data features.Finally,for epilepsy classification,the cross-domain TSK fuzzy system uses the cross-entropy function as the objective function and adopts the back-propagation strategy to optimize the parameters.The experimental results show that the new method can process complex epilepsy data and identify whether patients have epilepsy.
基金The publication of this article is funded by the Qatar National Library.
文摘Malaria is a lethal disease responsible for thousands of deaths worldwide every year.Manual methods of malaria diagnosis are timeconsuming that require a great deal of human expertise and efforts.Computerbased automated diagnosis of diseases is progressively becoming popular.Although deep learning models show high performance in the medical field,it demands a large volume of data for training which is hard to acquire for medical problems.Similarly,labeling of medical images can be done with the help of medical experts only.Several recent studies have utilized deep learning models to develop efficient malaria diagnostic system,which showed promising results.However,the most common problem with these models is that they need a large amount of data for training.This paper presents a computer-aided malaria diagnosis system that combines a semi-supervised generative adversarial network and transfer learning.The proposed model is trained in a semi-supervised manner and requires less training data than conventional deep learning models.Performance of the proposed model is evaluated on a publicly available dataset of blood smear images(with malariainfected and normal class)and achieved a classification accuracy of 96.6%.
基金supported in part by the National Natural Science Foundation of China(No.12302252)。
文摘Deep learning significantly improves the accuracy of remote sensing image scene classification,benefiting from the large-scale datasets.However,annotating the remote sensing images is time-consuming and even tough for experts.Deep neural networks trained using a few labeled samples usually generalize less to new unseen images.In this paper,we propose a semi-supervised approach for remote sensing image scene classification based on the prototype-based consistency,by exploring massive unlabeled images.To this end,we,first,propose a feature enhancement module to extract discriminative features.This is achieved by focusing the model on the foreground areas.Then,the prototype-based classifier is introduced to the framework,which is used to acquire consistent feature representations.We conduct a series of experiments on NWPU-RESISC45 and Aerial Image Dataset(AID).Our method improves the State-Of-The-Art(SOTA)method on NWPU-RESISC45 from 92.03%to 93.08%and on AID from 94.25%to 95.24%in terms of accuracy.
基金supported by the Natural Science Foundation of China(No.41804112,author:Chengyun Song).
文摘Existing semi-supervisedmedical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch.However,current copy-paste methods have three limitations:(1)training the model solely with copy-paste mixed pictures from labeled and unlabeled input loses a lot of labeled information;(2)low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data;(3)the segmentation performance in low-contrast and local regions is less than optimal.We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy(SADT),which enhances feature diversity and learns high-quality features to overcome these problems.To be more precise,SADT trains the Student Network by using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data,which prevents the loss of rare labeled data.We introduce a bi-directional copy-pastemask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision.For the mixed images,Deep-Shallow Spatial Contrastive Learning(DSSCL)is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve the segmentation capabilities in low-contrast and local areas.In this procedure,the features retrieved by the Student Network are subjected to a random feature perturbation technique.On two openly available datasets,extensive trials show that our proposed SADT performs much better than the state-ofthe-art semi-supervised medical segmentation techniques.Using only 10%of the labeled data for training,SADT was able to acquire a Dice score of 90.10%on the ACDC(Automatic Cardiac Diagnosis Challenge)dataset.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 92467109 and U21A20478), the National Key R&D Program of China (2023YFA1011601), and the Major Key Project of PCL (Grant PCL2024A05).
Abstract: Ensemble learning, a pivotal branch of machine learning, amalgamates multiple base models to enhance the overall performance of predictive models, capitalising on the diversity and collective wisdom of the ensemble to surpass individual models and mitigate overfitting. In this review, a four-layer research framework is established for ensemble learning, offering a comprehensive and structured review from the bottom up. The survey first introduces fundamental ensemble learning techniques, including bagging, boosting, and stacking, while also exploring ensemble diversity. Deep ensemble learning and semi-supervised ensemble learning are then studied in detail. Furthermore, the use of ensemble learning techniques to handle challenging datasets, such as imbalanced and high-dimensional data, is discussed. The application of ensemble learning across various research domains, including healthcare, transportation, finance, manufacturing, and the Internet, is also examined. The survey concludes by discussing challenges intrinsic to ensemble learning.
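The three basic ensemble schemes named in the survey can be exercised in a few lines of scikit-learn; the toy dataset and hyper-parameters below are placeholders chosen only to show the pattern.

```python
# Minimal illustration of bagging, boosting, and stacking with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    # Bagging: many trees on bootstrap resamples, averaged by majority vote.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0),
    # Boosting: trees fitted sequentially on the residual errors of the ensemble.
    "boosting": GradientBoostingClassifier(random_state=0),
    # Stacking: a meta-learner combines the predictions of heterogeneous base models.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
}

for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```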
Abstract: A literature review on AI applications in the field of railway safety shows that the implemented approaches mainly concern the operational, maintenance, and feedback phases following railway incidents or accidents. These approaches exploit railway safety data once the transport system has received authorization for commissioning. However, railway standards and regulations require the development of a safety management system (SMS) from the specification and design phases of the railway system. This article proposes a new AI approach for analyzing and assessing safety from the specification and design phases, with a view to improving the development of the SMS. Unlike some learning methods, the proposed approach, which is intended in particular for safety assessment bodies, is based on semi-supervised learning carried out in close collaboration with safety experts, who contributed to the development of a database of potential accident scenarios (a learning-example database) relating to the risk of rail collision. The proposed decision support relies on an expert system whose knowledge base is automatically generated by inductive learning in the form of association rules (a rule base), and whose main objective is to suggest to the safety expert possible hazards not considered during the development of the SMS, so as to complete the initial hazard register.
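As a hedged sketch of the inductive-learning step, association rules can be mined from a boolean table of scenario attributes. The attribute names and the mlxtend-based pipeline below are assumptions made for illustration, not the authors' tool chain.

```python
# Hedged sketch: mining an association-rule knowledge base from hazard scenarios.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Toy boolean encoding of accident-scenario attributes (hypothetical columns).
scenarios = pd.DataFrame(
    [
        {"signal_failure": True,  "manual_override": False, "collision_risk": True},
        {"signal_failure": True,  "manual_override": True,  "collision_risk": True},
        {"signal_failure": False, "manual_override": True,  "collision_risk": False},
        {"signal_failure": True,  "manual_override": False, "collision_risk": True},
    ]
)

# Frequent itemsets, then rules of the form "antecedent attributes -> consequent".
frequent = apriori(scenarios, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```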
Abstract: Background: In the field of genetic diagnostics, DNA sequencing is an important tool because the depth and complexity of this field have major implications for the genetic architectures of diseases and the identification of risk factors associated with genetic disorders. Methods: Our study introduces a novel two-tiered analytical framework to improve the precision and reliability of genetic data interpretation. It begins by extracting and analyzing salient features from DNA sequences through a CNN-based feature analysis, taking advantage of the ability of convolutional neural networks (CNNs) to capture complex patterns and minute mutations in genetic data. The study then combines a collection of machine learning classifiers through a strict voting mechanism, which synergistically joins the predictions of multiple classifiers to generate comprehensive and well-balanced interpretations of the genetic data. Results: This method was tested in an empirical analysis on a dataset of DNA sequence variants from patients affected by breast cancer, juxtaposed with a control group of healthy individuals. The integration of CNNs with a voting-based ensemble of classifiers returned outstanding outcomes, with accuracy, precision, recall, and F1-score all reaching 0.88, outperforming previous models. Conclusions: This dual accomplishment underlines the transformative potential of integrating deep learning techniques with ensemble machine learning for genetic diagnostics and prognostics. These results set a new benchmark in the accuracy of disease diagnosis through DNA sequencing and point toward improved personalized medicine and healthcare approaches built on precise genetic information.
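A minimal sketch of the second tier, assuming hard majority voting over a few standard classifiers fed with CNN-derived features; the random feature matrix below stands in for real sequence embeddings, and the classifier choices are illustrative.

```python
# Hedged sketch: CNN-derived features feeding a hard-voting classifier ensemble.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(200, 64))   # stand-in for CNN embeddings of DNA sequences
labels = rng.integers(0, 2, size=200)       # 1 = variant of interest, 0 = control (toy)

voting = VotingClassifier(
    estimators=[("svm", SVC()),
                ("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="hard",  # strict majority vote over the individual predictions
)
voting.fit(cnn_features, labels)
print(voting.predict(cnn_features[:5]))
```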
Abstract: Semi-supervised clustering techniques attempt to improve clustering accuracy by utilizing a limited amount of labeled data for guidance, effectively integrating prior knowledge through pre-labeled data. While semi-supervised fuzzy clustering (SSFC) methods leverage limited labeled data to enhance accuracy, they remain highly susceptible to inappropriate or mislabeled prior knowledge, especially in noisy or overlapping datasets where cluster boundaries are ambiguous. To enhance the effectiveness of clustering algorithms, it is essential to leverage labeled data while ensuring the safety of the prior knowledge. Existing solutions, such as the Trusted Safe Semi-Supervised Fuzzy Clustering Method (TS3FCM), struggle with random centroid initialization, fixed neighbor-radius formulas, and handling outliers or noise where clusters overlap. This paper proposes a new framework, Active Safe Semi-Supervised Fuzzy Clustering with Pairwise Constraints Based on Cluster Boundary (AS3FCPC), to address these problems by combining pairwise constraints with active learning. AS3FCPC uses active learning to query only the most informative data instances close to the cluster boundaries, and it uses pairwise constraints to enforce the cluster structure, making the method more accurate and robust. Extensive results on diverse datasets, including challenging noisy and overlapping scenarios, demonstrate that AS3FCPC consistently outperforms state-of-the-art methods such as TS3FCM and other baselines, especially when the data is noisy and clusters overlap. This improvement underscores AS3FCPC's potential for reliable and accurate semi-supervised fuzzy clustering in complex, real-world applications, particularly by effectively managing mislabeled data and ambiguous cluster boundaries.
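One way to realise the boundary-focused active query described above is to rank instances by the margin between their two largest fuzzy memberships; the membership formula below is standard fuzzy c-means, while the centroids and query budget are placeholders, not AS3FCPC's exact procedure.

```python
# Hedged sketch: querying the most ambiguous points near fuzzy cluster boundaries.
import numpy as np

def fcm_memberships(X, centroids, m=2.0):
    """Fuzzy c-means memberships for fixed centroids (X: (N, D), centroids: (C, D))."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)

def query_boundary_points(memberships, budget=5):
    """Indices with the smallest margin between the top two memberships."""
    top2 = np.sort(memberships, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    return np.argsort(margin)[:budget]   # these would be sent to the annotator

X = np.random.default_rng(0).normal(size=(100, 2))
centroids = np.array([[-1.0, 0.0], [1.0, 0.0]])   # placeholder centroids
u = fcm_memberships(X, centroids)
print(query_boundary_points(u))
```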
Funding: Funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Groups Program, Grant No. RGP-1444-0057.
Abstract: Dementia is a neurological disorder that affects the brain and its functioning, and women experience its effects more than men do. Preventive care often requires non-invasive and rapid tests, yet conventional diagnostic techniques are time-consuming and invasive. One of the most effective ways to diagnose dementia is by analyzing a patient's speech, which is inexpensive and does not require surgery. This research aims to determine the effectiveness of deep learning (DL) and machine learning (ML) architectures in diagnosing dementia based on women's speech patterns. The study analyzes data drawn from the Pitt Corpus, which contains 298 dementia files and 238 control files from the DementiaBank database. Deep learning models and SVM classifiers were used to analyze the available audio samples in the dataset. Our methodology compared two approaches to the classification task: a fused DL-ML model and a single DL model. The deep learning model achieved an accuracy of 99.99% with an F1-score of 0.9998, precision of 0.9997, and recall of 0.9998. The proposed DL-ML fusion model was equally strong, with an accuracy of 99.99%, F1-score of 0.9995, precision of 0.9998, and recall of 0.9997. The study also shows how deep learning and machine learning models can be applied to dementia detection from speech with high accuracy and low computational complexity. This work therefore demonstrates the potential of speech-based dementia detection as a helpful mode of early diagnosis. For further improved performance and better generalization, future studies may explore real-time applications and the inclusion of other components of speech.
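A hedged sketch of a speech-based classification pipeline in the spirit of the study: utterance-level MFCC statistics feed an SVM. The synthetic audio and toy labels are stand-ins; the study's exact features and model configuration are not reproduced here.

```python
# Hedged sketch: MFCC statistics per recording feeding an SVM classifier.
import librosa
import numpy as np
from sklearn.svm import SVC

def mfcc_features(signal, sr=16000, n_mfcc=13):
    """Mean and standard deviation of MFCCs as a fixed-length utterance descriptor."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

rng = np.random.default_rng(0)
signals = [rng.normal(size=16000).astype(np.float32) for _ in range(20)]  # stand-in clips
X = np.stack([mfcc_features(s) for s in signals])
y = np.arange(20) % 2   # toy labels: 1 = dementia, 0 = control

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```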
Abstract: Active semi-supervised fuzzy clustering integrates fuzzy clustering techniques with limited labeled data, guided by active learning, to enhance classification accuracy, particularly in complex and ambiguous datasets. Although several active semi-supervised fuzzy clustering methods have been developed, they typically face significant limitations, including high computational complexity, sensitivity to initial cluster centroids, and difficulty in accurately managing boundary clusters where data points overlap among multiple clusters. This study introduces a novel Active Semi-Supervised Fuzzy Clustering algorithm specifically designed to identify, analyze, and correct misclassified boundary elements. By strategically utilizing labeled data through active learning, the method improves the robustness and precision of cluster boundary assignments. Extensive experimental evaluations on three types of datasets (benchmark UCI datasets, synthetic data with controlled boundary overlap, and satellite imagery) demonstrate that the proposed approach achieves superior clustering accuracy and robustness compared to existing active semi-supervised fuzzy clustering methods. The results confirm the effectiveness and practicality of the method in handling real-world scenarios where precise cluster boundaries are critical.
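A simplified sketch of the boundary-correction idea, assuming that ambiguous points (those with a small membership margin) are re-assigned using the nearest labeled sample as a stand-in for the expert feedback gathered through active learning; the threshold and the 1-NN correction rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: detecting and correcting misclassified boundary elements.
import numpy as np

def correct_boundary_assignments(X, memberships, labeled_X, labeled_y, margin_thresh=0.1):
    """Hard cluster labels, with ambiguous (boundary) points snapped to the class
    of the nearest labeled sample."""
    hard = memberships.argmax(axis=1)
    top2 = np.sort(memberships, axis=1)[:, -2:]
    boundary = (top2[:, 1] - top2[:, 0]) < margin_thresh
    for i in np.where(boundary)[0]:
        nearest = np.linalg.norm(labeled_X - X[i], axis=1).argmin()
        hard[i] = labeled_y[nearest]
    return hard, boundary

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
u = rng.dirichlet(np.ones(3), size=50)     # toy fuzzy membership matrix
labeled_X = rng.normal(size=(6, 2))        # small pool of labeled samples
labeled_y = np.array([0, 1, 2, 0, 1, 2])
labels, flagged = correct_boundary_assignments(X, u, labeled_X, labeled_y)
print(flagged.sum(), "boundary points corrected")
```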