Imbalanced multiclass datasets pose challenges for machine learning algorithms.They often contain minority classes that are important for accurate predictions.However,when the data is sparsely distributed and overlaps...Imbalanced multiclass datasets pose challenges for machine learning algorithms.They often contain minority classes that are important for accurate predictions.However,when the data is sparsely distributed and overlaps with data points fromother classes,it introduces noise.As a result,existing resamplingmethods may fail to preserve the original data patterns,further disrupting data quality and reducingmodel performance.This paper introduces Neighbor Displacement-based Enhanced Synthetic Oversampling(NDESO),a hybridmethod that integrates a data displacement strategy with a resampling technique to achieve data balance.It begins by computing the average distance of noisy data points to their neighbors and adjusting their positions toward the center before applying random oversampling.Extensive evaluations compare 14 alternatives on nine classifiers across synthetic and 20 real-world datasetswith varying imbalance ratios.This evaluation was structured into two distinct test groups.First,the effects of k-neighbor variations and distance metrics are evaluated,followed by a comparison of resampled data distributions against alternatives,and finally,determining the most suitable oversampling technique for data balancing.Second,the overall performance of the NDESO algorithm was assessed,focusing on G-mean and statistical significance.The results demonstrate that our method is robust to a wide range of variations in these parameters and the overall performance achieves an average G-mean score of 0.90,which is among the highest.Additionally,it attains the lowest mean rank of 2.88,indicating statistically significant improvements over existing approaches.This advantage underscores its potential for effectively handling data imbalance in practical scenarios.展开更多
Financial distress prediction(FDP)is a critical area of study for researchers,industry stakeholders,and regulatory authorities.However,FDP tasks present several challenges,including high-dimensional datasets,class imb...Financial distress prediction(FDP)is a critical area of study for researchers,industry stakeholders,and regulatory authorities.However,FDP tasks present several challenges,including high-dimensional datasets,class imbalances,and the complexity of parameter optimization.These issues often hinder the predictive model’s ability to accurately identify companies at high risk of financial distress.To mitigate these challenges,we introduce FinMHSPE—a novel multi-heterogeneous self-paced ensemble(MHSPE)FDP learning framework.The proposed model uses pairwise comparisons of data from multiple time frames combined with the maximum relevance and minimum redundancy method to select an optimal subset of features,effectively resolving the high dimensionality issue.Furthermore,the proposed framework incorporates the MHSPE model to iteratively identify the most informative majority class data samples,effectively addressing the class imbalance issue.To optimize the model’s parameters,we leverage the particle swarm optimization algorithm.The robustness of our proposed model is validated through extensive experiments performed on a financial dataset of Chinese listed companies.The empirical results demonstrate that the proposed model outperforms existing competing models in the field of FDP.Specifically,our FinMHSPE framework achieves the highest performance,achieving an area under the curve(AUC)value of 0.9574,considerably surpassing all existing methods.A comparative analysis of AUC values further reveals that FinMHSPE outperforms state-of-the-art approaches that rely on financial features as inputs.Furthermore,our investigation identifies several valuable features for enhancing FDP model performance,notably those associated with a company’s information and growth potential.展开更多
Data collected in fields such as cybersecurity and biomedicine often encounter high dimensionality and class imbalance.To address the problem of low classification accuracy for minority class samples arising from nume...Data collected in fields such as cybersecurity and biomedicine often encounter high dimensionality and class imbalance.To address the problem of low classification accuracy for minority class samples arising from numerous irrelevant and redundant features in high-dimensional imbalanced data,we proposed a novel feature selection method named AMF-SGSK based on adaptive multi-filter and subspace-based gaining sharing knowledge.Firstly,the balanced dataset was obtained by random under-sampling.Secondly,combining the feature importance score with the AUC score for each filter method,we proposed a concept called feature hardness to judge the importance of feature,which could adaptively select the essential features.Finally,the optimal feature subset was obtained by gaining sharing knowledge in multiple subspaces.This approach effectively achieved dimensionality reduction for high-dimensional imbalanced data.The experiment results on 30 benchmark imbalanced datasets showed that AMF-SGSK performed better than other eight commonly used algorithms including BGWO and IG-SSO in terms of F1-score,AUC,and G-mean.The mean values of F1-score,AUC,and Gmean for AMF-SGSK are 0.950,0.967,and 0.965,respectively,achieving the highest among all algorithms.And the mean value of Gmean is higher than those of IG-PSO,ReliefF-GWO,and BGOA by 3.72%,11.12%,and 20.06%,respectively.Furthermore,the selected feature ratio is below 0.01 across the selected ten datasets,further demonstrating the proposed method’s overall superiority over competing approaches.AMF-SGSK could adaptively remove irrelevant and redundant features and effectively improve the classification accuracy of high-dimensional imbalanced data,providing scientific and technological references for practical applications.展开更多
For imbalanced datasets, the focus of classification is to identify samples of the minority class. The performance of current data mining algorithms is not good enough for processing imbalanced datasets. The synthetic...For imbalanced datasets, the focus of classification is to identify samples of the minority class. The performance of current data mining algorithms is not good enough for processing imbalanced datasets. The synthetic minority over-sampling technique(SMOTE) is specifically designed for learning from imbalanced datasets, generating synthetic minority class examples by interpolating between minority class examples nearby. However, the SMOTE encounters the overgeneralization problem. The densitybased spatial clustering of applications with noise(DBSCAN) is not rigorous when dealing with the samples near the borderline.We optimize the DBSCAN algorithm for this problem to make clustering more reasonable. This paper integrates the optimized DBSCAN and SMOTE, and proposes a density-based synthetic minority over-sampling technique(DSMOTE). First, the optimized DBSCAN is used to divide the samples of the minority class into three groups, including core samples, borderline samples and noise samples, and then the noise samples of minority class is removed to synthesize more effective samples. In order to make full use of the information of core samples and borderline samples,different strategies are used to over-sample core samples and borderline samples. Experiments show that DSMOTE can achieve better results compared with SMOTE and Borderline-SMOTE in terms of precision, recall and F-value.展开更多
Imbalanced data is one type of datasets that are frequently found in real-world applications,e.g.,fraud detection and cancer diagnosis.For this type of datasets,improving the accuracy to identify their minority class ...Imbalanced data is one type of datasets that are frequently found in real-world applications,e.g.,fraud detection and cancer diagnosis.For this type of datasets,improving the accuracy to identify their minority class is a critically important issue.Feature selection is one method to address this issue.An effective feature selection method can choose a subset of features that favor in the accurate determination of the minority class.A decision tree is a classifier that can be built up by using different splitting criteria.Its advantage is the ease of detecting which feature is used as a splitting node.Thus,it is possible to use a decision tree splitting criterion as a feature selection method.In this paper,an embedded feature selection method using our proposed weighted Gini index(WGI)is proposed.Its comparison results with Chi2,F-statistic and Gini index feature selection methods show that F-statistic and Chi2 reach the best performance when only a few features are selected.As the number of selected features increases,our proposed method has the highest probability of achieving the best performance.The area under a receiver operating characteristic curve(ROC AUC)and F-measure are used as evaluation criteria.Experimental results with two datasets show that ROC AUC performance can be high,even if only a few features are selected and used,and only changes slightly as more and more features are selected.However,the performance of Fmeasure achieves excellent performance only if 20%or more of features are chosen.The results are helpful for practitioners to select a proper feature selection method when facing a practical problem.展开更多
Imbalanced data classification is an important research topic in real-world applications,like fault diagnosis in an aircraft manufacturing system.The over-sampling method is often used to solve this problem.It generat...Imbalanced data classification is an important research topic in real-world applications,like fault diagnosis in an aircraft manufacturing system.The over-sampling method is often used to solve this problem.It generates samples according to the distance between minority data.However,the traditional over-sampling method may change the original data distribution,which is harmful to the classification performance.In this paper,we propose a new method called Conditional SelfAttention Generative Adversarial Network with Differential Evolution(CSAGAN-DE)for imbalanced data classification.The new method aims at improving the classification performance of minority data by enhancing the quality of the generation of minority data.In CSAGAN-DE,the minority data are fed into the self-attention generative adversarial network to approximate the data distribution and create new data for the minority class.Then,the differential evolution algorithm is employed to automatically determine the number of generated minority data for achieving a satisfactory classification performance.Several experiments are conducted to evaluate the performance of the new CSAGAN-DE method.The results show that the new method can efficiently improve the classification performance compared with other related methods.展开更多
Imbalanced data classification is one of the major problems in machine learning.This imbalanced dataset typically has significant differences in the number of data samples between its classes.In most cases,the perform...Imbalanced data classification is one of the major problems in machine learning.This imbalanced dataset typically has significant differences in the number of data samples between its classes.In most cases,the performance of the machine learning algorithm such as Support Vector Machine(SVM)is affected when dealing with an imbalanced dataset.The classification accuracy is mostly skewed toward the majority class and poor results are exhibited in the prediction of minority-class samples.In this paper,a hybrid approach combining data pre-processing technique andSVMalgorithm based on improved Simulated Annealing(SA)was proposed.Firstly,the data preprocessing technique which primarily aims at solving the resampling strategy of handling imbalanced datasets was proposed.In this technique,the data were first synthetically generated to equalize the number of samples between classes and followed by a reduction step to remove redundancy and duplicated data.Next is the training of a balanced dataset using SVM.Since this algorithm requires an iterative process to search for the best penalty parameter during training,an improved SA algorithm was proposed for this task.In this proposed improvement,a new acceptance criterion for the solution to be accepted in the SA algorithm was introduced to enhance the accuracy of the optimization process.Experimental works based on ten publicly available imbalanced datasets have demonstrated higher accuracy in the classification tasks using the proposed approach in comparison with the conventional implementation of SVM.Registering at an average of 89.65%of accuracy for the binary class classification has demonstrated the good performance of the proposed works.展开更多
When building a classification model,the scenario where the samples of one class are significantly more than those of the other class is called data imbalance.Data imbalance causes the trained classification model to ...When building a classification model,the scenario where the samples of one class are significantly more than those of the other class is called data imbalance.Data imbalance causes the trained classification model to be in favor of the majority class(usually defined as the negative class),which may do harm to the accuracy of the minority class(usually defined as the positive class),and then lead to poor overall performance of the model.A method called MSHR-FCSSVM for solving imbalanced data classification is proposed in this article,which is based on a new hybrid resampling approach(MSHR)and a new fine cost-sensitive support vector machine(CS-SVM)classifier(FCSSVM).The MSHR measures the separability of each negative sample through its Silhouette value calculated by Mahalanobis distance between samples,based on which,the so-called pseudo-negative samples are screened out to generate new positive samples(over-sampling step)through linear interpolation and are deleted finally(under-sampling step).This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline,without changing the overall scale of the dataset.The FCSSVM is an improved version of the traditional CS-SVM.It considers influences of both the imbalance of sample number and the class distribution on classification simultaneously,and through finely tuning the class cost weights by using the efficient optimization algorithm based on the physical phenomenon of rime-ice(RIME)algorithm with cross-validation accuracy as the fitness function to accurately adjust the classification borderline.To verify the effectiveness of the proposed method,a series of experiments are carried out based on 20 imbalanced datasets including both mildly and extremely imbalanced datasets.The experimental results show that the MSHR-FCSSVM method performs better than the methods for comparison in most cases,and both the MSHR and the FCSSVM played significant roles.展开更多
The imbalance of dissolved gas analysis(DGA)data will lead to over-fitting,weak generalization and poor recognition performance for fault diagnosis models based on deep learning.To handle this problem,a novel transfor...The imbalance of dissolved gas analysis(DGA)data will lead to over-fitting,weak generalization and poor recognition performance for fault diagnosis models based on deep learning.To handle this problem,a novel transformer fault diagnosis method based on improved auxiliary classifier generative adversarial network(ACGAN)under imbalanced data is proposed in this paper,which meets both the requirements of balancing DGA data and supplying accurate diagnosis results.The generator combines one-dimensional convolutional neural networks(1D-CNN)and long short-term memories(LSTM),which can deeply extract the features from DGA samples and be greatly beneficial to ACGAN’s data balancing and fault diagnosis.The discriminator adopts multilayer perceptron networks(MLP),which prevents the discriminator from losing important features of DGA data when the network is too complex and the number of layers is too large.The experimental results suggest that the presented approach can effectively improve the adverse effects of DGA data imbalance on the deep learning models,enhance fault diagnosis performance and supply desirable diagnosis accuracy up to 99.46%.Furthermore,the comparison results indicate the fault diagnosis performance of the proposed approach is superior to that of other conventional methods.Therefore,the method presented in this study has excellent and reliable fault diagnosis performance for various unbalanced datasets.In addition,the proposed approach can also solve the problems of insufficient and imbalanced fault data in other practical application fields.展开更多
Objective Clinical medical record data associated with hepatitis B-related acute-on-chronic liver failure(HBV-ACLF)generally have small sample sizes and a class imbalance.However,most machine learning models are desig...Objective Clinical medical record data associated with hepatitis B-related acute-on-chronic liver failure(HBV-ACLF)generally have small sample sizes and a class imbalance.However,most machine learning models are designed based on balanced data and lack interpretability.This study aimed to propose a traditional Chinese medicine(TCM)diagnostic model for HBV-ACLF based on the TCM syndrome differentiation and treatment theory,which is clinically interpretable and highly accurate.Methods We collected medical records from 261 patients diagnosed with HBV-ACLF,including three syndromes:Yang jaundice(214 cases),Yang-Yin jaundice(41 cases),and Yin jaundice(6 cases).To avoid overfitting of the machine learning model,we excluded the cases of Yin jaundice.After data standardization and cleaning,we obtained 255 relevant medical records of Yang jaundice and Yang-Yin jaundice.To address the class imbalance issue,we employed the oversampling method and five machine learning methods,including logistic regression(LR),support vector machine(SVM),decision tree(DT),random forest(RF),and extreme gradient boosting(XGBoost)to construct the syndrome diagnosis models.This study used precision,F1 score,the area under the receiver operating characteristic(ROC)curve(AUC),and accuracy as model evaluation metrics.The model with the best classification performance was selected to extract the diagnostic rule,and its clinical significance was thoroughly analyzed.Furthermore,we proposed a novel multiple-round stable rule extraction(MRSRE)method to obtain a stable rule set of features that can exhibit the model’s clinical interpretability.Results The precision of the five machine learning models built using oversampled balanced data exceeded 0.90.Among these models,the accuracy of RF classification of syndrome types was 0.92,and the mean F1 scores of the two categories of Yang jaundice and Yang-Yin jaundice were 0.93 and 0.94,respectively.Additionally,the AUC was 0.98.The extraction rules of the RF syndrome differentiation model based on the MRSRE method revealed that the common features of Yang jaundice and Yang-Yin jaundice were wiry pulse,yellowing of the urine,skin,and eyes,normal tongue body,healthy sublingual vessel,nausea,oil loathing,and poor appetite.The main features of Yang jaundice were a red tongue body and thickened sublingual vessels,whereas those of Yang-Yin jaundice were a dark tongue body,pale white tongue body,white tongue coating,lack of strength,slippery pulse,light red tongue body,slimy tongue coating,and abdominal distension.This is aligned with the classifications made by TCM experts based on TCM syndrome differentiation and treatment theory.Conclusion Our model can be utilized for differentiating HBV-ACLF syndromes,which has the potential to be applied to generate other clinically interpretable models with high accuracy on clinical data characterized by small sample sizes and a class imbalance.展开更多
Prediction of machine failure is challenging as the dataset is often imbalanced with a low failure rate.The common approach to han-dle classification involving imbalanced data is to balance the data using a sampling a...Prediction of machine failure is challenging as the dataset is often imbalanced with a low failure rate.The common approach to han-dle classification involving imbalanced data is to balance the data using a sampling approach such as random undersampling,random oversampling,or Synthetic Minority Oversampling Technique(SMOTE)algorithms.This paper compared the classification performance of three popular classifiers(Logistic Regression,Gaussian Naïve Bayes,and Support Vector Machine)in predicting machine failure in the Oil and Gas industry.The original machine failure dataset consists of 20,473 hourly data and is imbalanced with 19945(97%)‘non-failure’and 528(3%)‘failure data’.The three independent variables to predict machine failure were pressure indicator,flow indicator,and level indicator.The accuracy of the classifiers is very high and close to 100%,but the sensitivity of all classifiers using the original dataset was close to zero.The performance of the three classifiers was then evaluated for data with different imbalance rates(10%to 50%)generated from the original data using SMOTE,SMOTE-Support Vector Machine(SMOTE-SVM)and SMOTE-Edited Nearest Neighbour(SMOTE-ENN).The classifiers were evaluated based on improvement in sensitivity and F-measure.Results showed that the sensitivity of all classifiers increases as the imbalance rate increases.SVM with radial basis function(RBF)kernel has the highest sensitivity when data is balanced(50:50)using SMOTE(Sensitivitytest=0.5686,Ftest=0.6927)compared to Naïve Bayes(Sensitivitytest=0.4033,Ftest=0.6218)and Logistic Regression(Sensitivitytest=0.4194,Ftest=0.621).Overall,the Gaussian Naïve Bayes model consistently improves sensitivity and F-measure as the imbalance ratio increases,but the sensitivity is below 50%.The classifiers performed better when data was balanced using SMOTE-SVM compared to SMOTE and SMOTE-ENN.展开更多
Class imbalance can substantially affect classification tasks using traditional classifiers,especially when identifying instances of minority categories.In addition to class imbalance,other challenges can also hinder ...Class imbalance can substantially affect classification tasks using traditional classifiers,especially when identifying instances of minority categories.In addition to class imbalance,other challenges can also hinder accurate classification.Researchers have explored various approaches to mitigate the effects of class imbalance.However,most studies focus only on processing correlations within a single category of samples.This paper introduces an ensemble framework called Inter-and Intra-Class Overlapping Ensemble(llCOE),which incorporates two sampling methods.The first method,which is based on classification hardness undersampling,targets majority category samples by using simple samples as the foundation for classification and improving performance by focusing on samples near classification boundaries.The second method addresses the issue of overfitting minority category samples in undersampling and ensemble learning.To mitigate this,an adaptive augment hybrid sampling method is proposed,which enhances the classification boundary of samples and reduces overfitting.This paper conducts multiple experiments on 15 public datasets and concludes that the IlCOE ensemble framework outperforms other ensemble learning algorithms in classifying imbalanced data.展开更多
Accurate fault diagnosis of heating,ventilation,and air conditioning(HVAC)systems is of significant importance for maintaining normal operation,reducing energy consumption,and minimizing maintenance costs.However,in p...Accurate fault diagnosis of heating,ventilation,and air conditioning(HVAC)systems is of significant importance for maintaining normal operation,reducing energy consumption,and minimizing maintenance costs.However,in practical applications,it is challenging to obtain sufficient fault data for HVAC systems,leading to imbalanced data,where the number of fault samples is much smaller than that of normal samples.Moreover,most existing HVAC system fault diagnosis methods heavily rely on balanced training sets to achieve high fault diagnosis accuracy.Therefore,to address this issue,a composite neural network fault diagnosis model is proposed,which combines SMOTETomek,multi-scale one-dimensional convolutional neural networks(M1DCNN),and support vector machine(SVM).This method first utilizes SMOTETomek to augment the minority class samples in the imbalanced dataset,achieving a balanced number of faulty and normal data.Then,it employs the M1DCNN model to extract feature information from the augmented dataset.Finally,it replaces the original Softmax classifier with an SVM classifier for classification,thus enhancing the fault diagnosis accuracy.Using the SMOTETomek-M1DCNN-SVM method,we conducted fault diagnosis validation on both the ASHRAE RP-1043 dataset and experimental dataset with an imbalance ratio of 1:10.The results demonstrate the superiority of this approach,providing a novel and promising solution for intelligent building management,with accuracy and F1 scores of 98.45%and 100%for the RP-1043 dataset and experimental dataset,respectively.展开更多
Transition towards carbon-neutral power systems has necessitated optimization of power dispatch in active distribution networks(ADNs)to facilitate integration of distributed renewable generation.Due to unavailability ...Transition towards carbon-neutral power systems has necessitated optimization of power dispatch in active distribution networks(ADNs)to facilitate integration of distributed renewable generation.Due to unavailability of network topology and line impedance in many distribution networks,physical model-based methods may not be applicable to their operations.To tackle this challenge,some studies have proposed constraint learning,which replicates physical models by training a neural network to evaluate feasibility of a decision(i.e.,whether a decision satisfies all critical constraints or not).To ensure accuracy of this trained neural network,training set should contain sufficient feasible and infeasible samples.However,since ADNs are mostly operated in a normal status,only very few historical samples are infeasible.Thus,the historical dataset is highly imbalanced,which poses a significant obstacle to neural network training.To address this issue,we propose an enhanced constraint learning method.First,it leverages constraint learning to train a neural network as surrogate of ADN's model.Then,it introduces Synthetic Minority Oversampling Technique to generate infeasible samples to mitigate imbalance of historical dataset.By incorporating historical and synthetic samples into the training set,we can significantly improve accuracy of neural network.Furthermore,we establish a trust region to constrain and thereafter enhance reliability of the solution.Simulations confirm the benefits of the proposed method in achieving desirable optimality and feasibility while maintaining low computational complexity.展开更多
As one important type of post-translational modifications(PTMs),protein lysine succinylation regulates many important biological processes.It is also closely involved with some major diseases in the aspects of Cardiom...As one important type of post-translational modifications(PTMs),protein lysine succinylation regulates many important biological processes.It is also closely involved with some major diseases in the aspects of Cardiometabolic,liver metabolic,nervous system and so on.Therefore,it is imperative to predict the succinylation sites in proteins for both basic research and drug development.In this paper,a novel predictor called i Succ Lys-BLS was proposed by not only introducing a new machine learning algorithm—Broad Learning System,but also optimizing the imbalanced data by randomly labeling samples.Rigorous cross-validation and independent test indicate that the success rate of i Succ Lys-BLS for positive samples is overwhelmingly higher than its counterparts.展开更多
Purpose: This paper aims to improve the classification performance when the data is imbalanced by applying different sampling techniques available in Machine Learning.Design/methodology/approach: The medical appointme...Purpose: This paper aims to improve the classification performance when the data is imbalanced by applying different sampling techniques available in Machine Learning.Design/methodology/approach: The medical appointment no-show dataset is imbalanced, and when classification algorithms are applied directly to the dataset, it is biased towards the majority class, ignoring the minority class. To avoid this issue, multiple sampling techniques such as Random Over Sampling(ROS), Random Under Sampling(RUS), Synthetic Minority Oversampling TEchnique(SMOTE), ADAptive SYNthetic Sampling(ADASYN), Edited Nearest Neighbor(ENN), and Condensed Nearest Neighbor(CNN) are applied in order to make the dataset balanced. The performance is assessed by the Decision Tree classifier with the listed sampling techniques and the best performance is identified.Findings: This study focuses on the comparison of the performance metrics of various sampling methods widely used. It is revealed that, compared to other techniques, the Recall is high when ENN is applied CNN and ADASYN have performed equally well on the Imbalanced data.Research limitations: The testing was carried out with limited dataset and needs to be tested with a larger dataset.Practical implications: This framework will be useful whenever the data is imbalanced in real world scenarios, which ultimately improves the performance.Originality/value: This paper uses the rebalancing framework on medical appointment no-show dataset to predict the no-shows and removes the bias towards minority class.展开更多
Recently,machine learning algorithms have been used in the detection and classification of network attacks.The performance of the algorithms has been evaluated by using benchmark network intrusion datasets such as DAR...Recently,machine learning algorithms have been used in the detection and classification of network attacks.The performance of the algorithms has been evaluated by using benchmark network intrusion datasets such as DARPA98,KDD’99,NSL-KDD,UNSW-NB15,and Caida DDoS.However,these datasets have two major challenges:imbalanced data and highdimensional data.Obtaining high accuracy for all attack types in the dataset allows for high accuracy in imbalanced datasets.On the other hand,having a large number of features increases the runtime load on the algorithms.A novel model is proposed in this paper to overcome these two concerns.The number of features in the model,which has been tested at CICIDS2017,is initially optimized by using genetic algorithms.This optimum feature set has been used to classify network attacks with six well-known classifiers according to high f1-score and g-mean value in minimumtime.Afterwards,amulti-layer perceptron based ensemble learning approach has been applied to improve the models’overall performance.The experimental results showthat the suggested model is acceptable for feature selection as well as classifying network attacks in an imbalanced dataset,with a high f1-score(0.91)and g-mean(0.99)value.Furthermore,it has outperformed base classifier models and voting procedures.展开更多
A generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously is called multi-label learning.The main objective of this wor...A generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously is called multi-label learning.The main objective of this work is to create a novel framework for learning and classifying imbalancedmulti-label data.This work proposes a framework of two phases.The imbalanced distribution of themulti-label dataset is addressed through the proposed Borderline MLSMOTE resampling method in phase 1.Later,an adaptive weighted l21 norm regularized(Elastic-net)multilabel logistic regression is used to predict unseen samples in phase 2.The proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels in contrast to conventional MLSMOTE.The minority labels in these samples are called difficult minority labels and are more prone to penalize classification performance.The concurrentmeasure is considered borderline,and labels associated with samples are regarded as borderline labels in the decision boundary.In phase II,a novel adaptive l21 norm regularized weighted multi-label logistic regression is used to handle balanced data with different weighted synthetic samples.Experimentation on various benchmark datasets shows the outperformance of the proposed method and its powerful predictive performances over existing conventional state-of-the-art multi-label methods.展开更多
Data-driven methods are widely considered for fault diagnosis in complex systems.However,in practice,the between-class imbalance due to limited faulty samples may deteriorate their classification performance.To addres...Data-driven methods are widely considered for fault diagnosis in complex systems.However,in practice,the between-class imbalance due to limited faulty samples may deteriorate their classification performance.To address this issue,synthetic minority methods for enhancing data have been proved to be effective in many applications.Generative adversarial networks(GANs),capable of automatic features extraction,can also be adopted for augmenting the faulty samples.However,the monitoring data of a complex system may include not only continuous signals but also discrete/categorical signals.Since the current GAN methods still have some challenges in handling such heterogeneous monitoring data,a Mixed Dual Discriminator GAN(noted as M-D2GAN)is proposed in this work.In order to render the expanded fault samples more aligned with the real situation and improve the accuracy and robustness of the fault diagnosis model,different types of variables are generated in different ways,including floating-point,integer,categorical,and hierarchical.For effectively considering the class imbalance problem,proper modifications are made to the GAN model,where a normal class discriminator is added.A practical case study concerning the braking system of a high-speed train is carried out to verify the effectiveness of the proposed framework.Compared to the classic GAN,the proposed framework achieves better results with respect to F-measure and G-mean metrics.展开更多
Accurate prediction of fatal events in car accidents has significant health management implications.This research article explores the application of imbalanced data handling techniques in machine learning to enhance ...Accurate prediction of fatal events in car accidents has significant health management implications.This research article explores the application of imbalanced data handling techniques in machine learning to enhance prediction performance.By implementing these techniques on car accident data,health organizations can identify and forecast a fatal event,enabling more efficient and effective allocation of limited health resources.Concurrently,enhancing the performance of machine learning models through imbalanced data handling techniques can impact health management decisions.Our findings highlight the significance of imbalanced data handling techniques in predicting fatality in car accidents,ultimately contributing to improved road safety and better management of health resources.Moreover,the effective use of imbalanced data demonstrates a substantial improvement in the specificity of the prediction.Addressing the impact of machine learning techniques on imbalanced car accident data can significantly improve overall health outcomes.展开更多
文摘Imbalanced multiclass datasets pose challenges for machine learning algorithms.They often contain minority classes that are important for accurate predictions.However,when the data is sparsely distributed and overlaps with data points fromother classes,it introduces noise.As a result,existing resamplingmethods may fail to preserve the original data patterns,further disrupting data quality and reducingmodel performance.This paper introduces Neighbor Displacement-based Enhanced Synthetic Oversampling(NDESO),a hybridmethod that integrates a data displacement strategy with a resampling technique to achieve data balance.It begins by computing the average distance of noisy data points to their neighbors and adjusting their positions toward the center before applying random oversampling.Extensive evaluations compare 14 alternatives on nine classifiers across synthetic and 20 real-world datasetswith varying imbalance ratios.This evaluation was structured into two distinct test groups.First,the effects of k-neighbor variations and distance metrics are evaluated,followed by a comparison of resampled data distributions against alternatives,and finally,determining the most suitable oversampling technique for data balancing.Second,the overall performance of the NDESO algorithm was assessed,focusing on G-mean and statistical significance.The results demonstrate that our method is robust to a wide range of variations in these parameters and the overall performance achieves an average G-mean score of 0.90,which is among the highest.Additionally,it attains the lowest mean rank of 2.88,indicating statistically significant improvements over existing approaches.This advantage underscores its potential for effectively handling data imbalance in practical scenarios.
基金China Postdoctoral Science Foundation(No.2023M740237,2024M750254)Postdoctoral Fellowship Program of CPSF(No.GZB20230934)+1 种基金National Natural Science Foundation of China(No.71801113,72401029,72431003)China Scholarship Council(No.202006060162).
文摘Financial distress prediction(FDP)is a critical area of study for researchers,industry stakeholders,and regulatory authorities.However,FDP tasks present several challenges,including high-dimensional datasets,class imbalances,and the complexity of parameter optimization.These issues often hinder the predictive model’s ability to accurately identify companies at high risk of financial distress.To mitigate these challenges,we introduce FinMHSPE—a novel multi-heterogeneous self-paced ensemble(MHSPE)FDP learning framework.The proposed model uses pairwise comparisons of data from multiple time frames combined with the maximum relevance and minimum redundancy method to select an optimal subset of features,effectively resolving the high dimensionality issue.Furthermore,the proposed framework incorporates the MHSPE model to iteratively identify the most informative majority class data samples,effectively addressing the class imbalance issue.To optimize the model’s parameters,we leverage the particle swarm optimization algorithm.The robustness of our proposed model is validated through extensive experiments performed on a financial dataset of Chinese listed companies.The empirical results demonstrate that the proposed model outperforms existing competing models in the field of FDP.Specifically,our FinMHSPE framework achieves the highest performance,achieving an area under the curve(AUC)value of 0.9574,considerably surpassing all existing methods.A comparative analysis of AUC values further reveals that FinMHSPE outperforms state-of-the-art approaches that rely on financial features as inputs.Furthermore,our investigation identifies several valuable features for enhancing FDP model performance,notably those associated with a company’s information and growth potential.
基金supported by Fundamental Research Program of Shanxi Province(Nos.202203021211088,202403021212254,202403021221109)Graduate Research Innovation Project in Shanxi Province(No.2024KY616).
文摘Data collected in fields such as cybersecurity and biomedicine often encounter high dimensionality and class imbalance.To address the problem of low classification accuracy for minority class samples arising from numerous irrelevant and redundant features in high-dimensional imbalanced data,we proposed a novel feature selection method named AMF-SGSK based on adaptive multi-filter and subspace-based gaining sharing knowledge.Firstly,the balanced dataset was obtained by random under-sampling.Secondly,combining the feature importance score with the AUC score for each filter method,we proposed a concept called feature hardness to judge the importance of feature,which could adaptively select the essential features.Finally,the optimal feature subset was obtained by gaining sharing knowledge in multiple subspaces.This approach effectively achieved dimensionality reduction for high-dimensional imbalanced data.The experiment results on 30 benchmark imbalanced datasets showed that AMF-SGSK performed better than other eight commonly used algorithms including BGWO and IG-SSO in terms of F1-score,AUC,and G-mean.The mean values of F1-score,AUC,and Gmean for AMF-SGSK are 0.950,0.967,and 0.965,respectively,achieving the highest among all algorithms.And the mean value of Gmean is higher than those of IG-PSO,ReliefF-GWO,and BGOA by 3.72%,11.12%,and 20.06%,respectively.Furthermore,the selected feature ratio is below 0.01 across the selected ten datasets,further demonstrating the proposed method’s overall superiority over competing approaches.AMF-SGSK could adaptively remove irrelevant and redundant features and effectively improve the classification accuracy of high-dimensional imbalanced data,providing scientific and technological references for practical applications.
基金supported by the National Key Research and Development Program of China(2018YFB1003700)the Scientific and Technological Support Project(Society)of Jiangsu Province(BE2016776)+2 种基金the“333” project of Jiangsu Province(BRA2017228 BRA2017401)the Talent Project in Six Fields of Jiangsu Province(2015-JNHB-012)
文摘For imbalanced datasets, the focus of classification is to identify samples of the minority class. The performance of current data mining algorithms is not good enough for processing imbalanced datasets. The synthetic minority over-sampling technique(SMOTE) is specifically designed for learning from imbalanced datasets, generating synthetic minority class examples by interpolating between minority class examples nearby. However, the SMOTE encounters the overgeneralization problem. The densitybased spatial clustering of applications with noise(DBSCAN) is not rigorous when dealing with the samples near the borderline.We optimize the DBSCAN algorithm for this problem to make clustering more reasonable. This paper integrates the optimized DBSCAN and SMOTE, and proposes a density-based synthetic minority over-sampling technique(DSMOTE). First, the optimized DBSCAN is used to divide the samples of the minority class into three groups, including core samples, borderline samples and noise samples, and then the noise samples of minority class is removed to synthesize more effective samples. In order to make full use of the information of core samples and borderline samples,different strategies are used to over-sample core samples and borderline samples. Experiments show that DSMOTE can achieve better results compared with SMOTE and Borderline-SMOTE in terms of precision, recall and F-value.
基金supported in part by the National Science Foundation of USA(CMMI-1162482)
文摘Imbalanced data is one type of datasets that are frequently found in real-world applications,e.g.,fraud detection and cancer diagnosis.For this type of datasets,improving the accuracy to identify their minority class is a critically important issue.Feature selection is one method to address this issue.An effective feature selection method can choose a subset of features that favor in the accurate determination of the minority class.A decision tree is a classifier that can be built up by using different splitting criteria.Its advantage is the ease of detecting which feature is used as a splitting node.Thus,it is possible to use a decision tree splitting criterion as a feature selection method.In this paper,an embedded feature selection method using our proposed weighted Gini index(WGI)is proposed.Its comparison results with Chi2,F-statistic and Gini index feature selection methods show that F-statistic and Chi2 reach the best performance when only a few features are selected.As the number of selected features increases,our proposed method has the highest probability of achieving the best performance.The area under a receiver operating characteristic curve(ROC AUC)and F-measure are used as evaluation criteria.Experimental results with two datasets show that ROC AUC performance can be high,even if only a few features are selected and used,and only changes slightly as more and more features are selected.However,the performance of Fmeasure achieves excellent performance only if 20%or more of features are chosen.The results are helpful for practitioners to select a proper feature selection method when facing a practical problem.
基金partially supported by the Aeronautical Science Foundation of China(No.201920007001)National Natural Science Foundation of China(Nos.U20B2067,61790552 and 61790554)。
文摘Imbalanced data classification is an important research topic in real-world applications,like fault diagnosis in an aircraft manufacturing system.The over-sampling method is often used to solve this problem.It generates samples according to the distance between minority data.However,the traditional over-sampling method may change the original data distribution,which is harmful to the classification performance.In this paper,we propose a new method called Conditional SelfAttention Generative Adversarial Network with Differential Evolution(CSAGAN-DE)for imbalanced data classification.The new method aims at improving the classification performance of minority data by enhancing the quality of the generation of minority data.In CSAGAN-DE,the minority data are fed into the self-attention generative adversarial network to approximate the data distribution and create new data for the minority class.Then,the differential evolution algorithm is employed to automatically determine the number of generated minority data for achieving a satisfactory classification performance.Several experiments are conducted to evaluate the performance of the new CSAGAN-DE method.The results show that the new method can efficiently improve the classification performance compared with other related methods.
文摘Imbalanced data classification is one of the major problems in machine learning.This imbalanced dataset typically has significant differences in the number of data samples between its classes.In most cases,the performance of the machine learning algorithm such as Support Vector Machine(SVM)is affected when dealing with an imbalanced dataset.The classification accuracy is mostly skewed toward the majority class and poor results are exhibited in the prediction of minority-class samples.In this paper,a hybrid approach combining data pre-processing technique andSVMalgorithm based on improved Simulated Annealing(SA)was proposed.Firstly,the data preprocessing technique which primarily aims at solving the resampling strategy of handling imbalanced datasets was proposed.In this technique,the data were first synthetically generated to equalize the number of samples between classes and followed by a reduction step to remove redundancy and duplicated data.Next is the training of a balanced dataset using SVM.Since this algorithm requires an iterative process to search for the best penalty parameter during training,an improved SA algorithm was proposed for this task.In this proposed improvement,a new acceptance criterion for the solution to be accepted in the SA algorithm was introduced to enhance the accuracy of the optimization process.Experimental works based on ten publicly available imbalanced datasets have demonstrated higher accuracy in the classification tasks using the proposed approach in comparison with the conventional implementation of SVM.Registering at an average of 89.65%of accuracy for the binary class classification has demonstrated the good performance of the proposed works.
基金supported by the Yunnan Major Scientific and Technological Projects(Grant No.202302AD080001)the National Natural Science Foundation,China(No.52065033).
文摘When building a classification model,the scenario where the samples of one class are significantly more than those of the other class is called data imbalance.Data imbalance causes the trained classification model to be in favor of the majority class(usually defined as the negative class),which may do harm to the accuracy of the minority class(usually defined as the positive class),and then lead to poor overall performance of the model.A method called MSHR-FCSSVM for solving imbalanced data classification is proposed in this article,which is based on a new hybrid resampling approach(MSHR)and a new fine cost-sensitive support vector machine(CS-SVM)classifier(FCSSVM).The MSHR measures the separability of each negative sample through its Silhouette value calculated by Mahalanobis distance between samples,based on which,the so-called pseudo-negative samples are screened out to generate new positive samples(over-sampling step)through linear interpolation and are deleted finally(under-sampling step).This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline,without changing the overall scale of the dataset.The FCSSVM is an improved version of the traditional CS-SVM.It considers influences of both the imbalance of sample number and the class distribution on classification simultaneously,and through finely tuning the class cost weights by using the efficient optimization algorithm based on the physical phenomenon of rime-ice(RIME)algorithm with cross-validation accuracy as the fitness function to accurately adjust the classification borderline.To verify the effectiveness of the proposed method,a series of experiments are carried out based on 20 imbalanced datasets including both mildly and extremely imbalanced datasets.The experimental results show that the MSHR-FCSSVM method performs better than the methods for comparison in most cases,and both the MSHR and the FCSSVM played significant roles.
基金The authors gratefully acknowledge financial support of national natural science foundation of China(No.52067021)natural science foundation of Xinjiang Uygur Autonomous Region(2022D01C35)+1 种基金excellent youth scientific and technological talents plan of Xinjiang(No.2019Q012)major science&technology special project of Xinjiang Uygur Autonomous Region(2022A01002-2).
文摘The imbalance of dissolved gas analysis(DGA)data will lead to over-fitting,weak generalization and poor recognition performance for fault diagnosis models based on deep learning.To handle this problem,a novel transformer fault diagnosis method based on improved auxiliary classifier generative adversarial network(ACGAN)under imbalanced data is proposed in this paper,which meets both the requirements of balancing DGA data and supplying accurate diagnosis results.The generator combines one-dimensional convolutional neural networks(1D-CNN)and long short-term memories(LSTM),which can deeply extract the features from DGA samples and be greatly beneficial to ACGAN’s data balancing and fault diagnosis.The discriminator adopts multilayer perceptron networks(MLP),which prevents the discriminator from losing important features of DGA data when the network is too complex and the number of layers is too large.The experimental results suggest that the presented approach can effectively improve the adverse effects of DGA data imbalance on the deep learning models,enhance fault diagnosis performance and supply desirable diagnosis accuracy up to 99.46%.Furthermore,the comparison results indicate the fault diagnosis performance of the proposed approach is superior to that of other conventional methods.Therefore,the method presented in this study has excellent and reliable fault diagnosis performance for various unbalanced datasets.In addition,the proposed approach can also solve the problems of insufficient and imbalanced fault data in other practical application fields.
基金Key research project of Hunan Provincial Administration of Traditional Chinese Medicine(A2023048)Key Research Foundation of Education Bureau of Hunan Province,China(23A0273).
文摘Objective Clinical medical record data associated with hepatitis B-related acute-on-chronic liver failure(HBV-ACLF)generally have small sample sizes and a class imbalance.However,most machine learning models are designed based on balanced data and lack interpretability.This study aimed to propose a traditional Chinese medicine(TCM)diagnostic model for HBV-ACLF based on the TCM syndrome differentiation and treatment theory,which is clinically interpretable and highly accurate.Methods We collected medical records from 261 patients diagnosed with HBV-ACLF,including three syndromes:Yang jaundice(214 cases),Yang-Yin jaundice(41 cases),and Yin jaundice(6 cases).To avoid overfitting of the machine learning model,we excluded the cases of Yin jaundice.After data standardization and cleaning,we obtained 255 relevant medical records of Yang jaundice and Yang-Yin jaundice.To address the class imbalance issue,we employed the oversampling method and five machine learning methods,including logistic regression(LR),support vector machine(SVM),decision tree(DT),random forest(RF),and extreme gradient boosting(XGBoost)to construct the syndrome diagnosis models.This study used precision,F1 score,the area under the receiver operating characteristic(ROC)curve(AUC),and accuracy as model evaluation metrics.The model with the best classification performance was selected to extract the diagnostic rule,and its clinical significance was thoroughly analyzed.Furthermore,we proposed a novel multiple-round stable rule extraction(MRSRE)method to obtain a stable rule set of features that can exhibit the model’s clinical interpretability.Results The precision of the five machine learning models built using oversampled balanced data exceeded 0.90.Among these models,the accuracy of RF classification of syndrome types was 0.92,and the mean F1 scores of the two categories of Yang jaundice and Yang-Yin jaundice were 0.93 and 0.94,respectively.Additionally,the AUC was 0.98.The extraction rules of the RF syndrome differentiation model based on the MRSRE method revealed that the common features of Yang jaundice and Yang-Yin jaundice were wiry pulse,yellowing of the urine,skin,and eyes,normal tongue body,healthy sublingual vessel,nausea,oil loathing,and poor appetite.The main features of Yang jaundice were a red tongue body and thickened sublingual vessels,whereas those of Yang-Yin jaundice were a dark tongue body,pale white tongue body,white tongue coating,lack of strength,slippery pulse,light red tongue body,slimy tongue coating,and abdominal distension.This is aligned with the classifications made by TCM experts based on TCM syndrome differentiation and treatment theory.Conclusion Our model can be utilized for differentiating HBV-ACLF syndromes,which has the potential to be applied to generate other clinically interpretable models with high accuracy on clinical data characterized by small sample sizes and a class imbalance.
基金supported under the research Grant(PO Number:920138936)from the Institute of Technology PETRONAS Sdn Bhd,32610,Bandar Seri Iskandar,Perak,Malaysia.
文摘Prediction of machine failure is challenging as the dataset is often imbalanced with a low failure rate.The common approach to han-dle classification involving imbalanced data is to balance the data using a sampling approach such as random undersampling,random oversampling,or Synthetic Minority Oversampling Technique(SMOTE)algorithms.This paper compared the classification performance of three popular classifiers(Logistic Regression,Gaussian Naïve Bayes,and Support Vector Machine)in predicting machine failure in the Oil and Gas industry.The original machine failure dataset consists of 20,473 hourly data and is imbalanced with 19945(97%)‘non-failure’and 528(3%)‘failure data’.The three independent variables to predict machine failure were pressure indicator,flow indicator,and level indicator.The accuracy of the classifiers is very high and close to 100%,but the sensitivity of all classifiers using the original dataset was close to zero.The performance of the three classifiers was then evaluated for data with different imbalance rates(10%to 50%)generated from the original data using SMOTE,SMOTE-Support Vector Machine(SMOTE-SVM)and SMOTE-Edited Nearest Neighbour(SMOTE-ENN).The classifiers were evaluated based on improvement in sensitivity and F-measure.Results showed that the sensitivity of all classifiers increases as the imbalance rate increases.SVM with radial basis function(RBF)kernel has the highest sensitivity when data is balanced(50:50)using SMOTE(Sensitivitytest=0.5686,Ftest=0.6927)compared to Naïve Bayes(Sensitivitytest=0.4033,Ftest=0.6218)and Logistic Regression(Sensitivitytest=0.4194,Ftest=0.621).Overall,the Gaussian Naïve Bayes model consistently improves sensitivity and F-measure as the imbalance ratio increases,but the sensitivity is below 50%.The classifiers performed better when data was balanced using SMOTE-SVM compared to SMOTE and SMOTE-ENN.
基金supported by the National Natural Science Foundation of China(No.62173158)the National Key Research and Development Program of China(No.2019YFC0119600)the Major Science and Technology Program of Hainan Province(No.ZDKJ202004).
文摘Class imbalance can substantially affect classification tasks using traditional classifiers,especially when identifying instances of minority categories.In addition to class imbalance,other challenges can also hinder accurate classification.Researchers have explored various approaches to mitigate the effects of class imbalance.However,most studies focus only on processing correlations within a single category of samples.This paper introduces an ensemble framework called Inter-and Intra-Class Overlapping Ensemble(llCOE),which incorporates two sampling methods.The first method,which is based on classification hardness undersampling,targets majority category samples by using simple samples as the foundation for classification and improving performance by focusing on samples near classification boundaries.The second method addresses the issue of overfitting minority category samples in undersampling and ensemble learning.To mitigate this,an adaptive augment hybrid sampling method is proposed,which enhances the classification boundary of samples and reduces overfitting.This paper conducts multiple experiments on 15 public datasets and concludes that the IlCOE ensemble framework outperforms other ensemble learning algorithms in classifying imbalanced data.
基金The authors of this paper acknowledge the support from the National Natural Science Foundation of China(No.51975191)the Funds for Science and Technology Creative Talents of Hubei,China(No.2023DJC048)This work was supported by the Xiangyang Hubei University of Technology Industrial Research Institute Funding Program(No.XYYJ2022B01).
文摘Accurate fault diagnosis of heating,ventilation,and air conditioning(HVAC)systems is of significant importance for maintaining normal operation,reducing energy consumption,and minimizing maintenance costs.However,in practical applications,it is challenging to obtain sufficient fault data for HVAC systems,leading to imbalanced data,where the number of fault samples is much smaller than that of normal samples.Moreover,most existing HVAC system fault diagnosis methods heavily rely on balanced training sets to achieve high fault diagnosis accuracy.Therefore,to address this issue,a composite neural network fault diagnosis model is proposed,which combines SMOTETomek,multi-scale one-dimensional convolutional neural networks(M1DCNN),and support vector machine(SVM).This method first utilizes SMOTETomek to augment the minority class samples in the imbalanced dataset,achieving a balanced number of faulty and normal data.Then,it employs the M1DCNN model to extract feature information from the augmented dataset.Finally,it replaces the original Softmax classifier with an SVM classifier for classification,thus enhancing the fault diagnosis accuracy.Using the SMOTETomek-M1DCNN-SVM method,we conducted fault diagnosis validation on both the ASHRAE RP-1043 dataset and experimental dataset with an imbalance ratio of 1:10.The results demonstrate the superiority of this approach,providing a novel and promising solution for intelligent building management,with accuracy and F1 scores of 98.45%and 100%for the RP-1043 dataset and experimental dataset,respectively.
基金supported in part by the Science and Technology Development Fund,Macao SAR,China(File no.SKL-IOTSC(UM)-2021-2023,File no.0003/2020/AKP,and File no.0011/2021/AGJ)。
文摘Transition towards carbon-neutral power systems has necessitated optimization of power dispatch in active distribution networks(ADNs)to facilitate integration of distributed renewable generation.Due to unavailability of network topology and line impedance in many distribution networks,physical model-based methods may not be applicable to their operations.To tackle this challenge,some studies have proposed constraint learning,which replicates physical models by training a neural network to evaluate feasibility of a decision(i.e.,whether a decision satisfies all critical constraints or not).To ensure accuracy of this trained neural network,training set should contain sufficient feasible and infeasible samples.However,since ADNs are mostly operated in a normal status,only very few historical samples are infeasible.Thus,the historical dataset is highly imbalanced,which poses a significant obstacle to neural network training.To address this issue,we propose an enhanced constraint learning method.First,it leverages constraint learning to train a neural network as surrogate of ADN's model.Then,it introduces Synthetic Minority Oversampling Technique to generate infeasible samples to mitigate imbalance of historical dataset.By incorporating historical and synthetic samples into the training set,we can significantly improve accuracy of neural network.Furthermore,we establish a trust region to constrain and thereafter enhance reliability of the solution.Simulations confirm the benefits of the proposed method in achieving desirable optimality and feasibility while maintaining low computational complexity.
基金the National Natural Science Foundation of China(61761023,31760315)the Natural Science Foundation of Jiangxi Province,China(20202BABL202004,20202BAB202007)the Scientific Research Plan of the Department of Education of Jiangxi Province(GJJ190695)。
文摘As one important type of post-translational modifications(PTMs),protein lysine succinylation regulates many important biological processes.It is also closely involved with some major diseases in the aspects of Cardiometabolic,liver metabolic,nervous system and so on.Therefore,it is imperative to predict the succinylation sites in proteins for both basic research and drug development.In this paper,a novel predictor called i Succ Lys-BLS was proposed by not only introducing a new machine learning algorithm—Broad Learning System,but also optimizing the imbalanced data by randomly labeling samples.Rigorous cross-validation and independent test indicate that the success rate of i Succ Lys-BLS for positive samples is overwhelmingly higher than its counterparts.
文摘Purpose: This paper aims to improve the classification performance when the data is imbalanced by applying different sampling techniques available in Machine Learning.Design/methodology/approach: The medical appointment no-show dataset is imbalanced, and when classification algorithms are applied directly to the dataset, it is biased towards the majority class, ignoring the minority class. To avoid this issue, multiple sampling techniques such as Random Over Sampling(ROS), Random Under Sampling(RUS), Synthetic Minority Oversampling TEchnique(SMOTE), ADAptive SYNthetic Sampling(ADASYN), Edited Nearest Neighbor(ENN), and Condensed Nearest Neighbor(CNN) are applied in order to make the dataset balanced. The performance is assessed by the Decision Tree classifier with the listed sampling techniques and the best performance is identified.Findings: This study focuses on the comparison of the performance metrics of various sampling methods widely used. It is revealed that, compared to other techniques, the Recall is high when ENN is applied CNN and ADASYN have performed equally well on the Imbalanced data.Research limitations: The testing was carried out with limited dataset and needs to be tested with a larger dataset.Practical implications: This framework will be useful whenever the data is imbalanced in real world scenarios, which ultimately improves the performance.Originality/value: This paper uses the rebalancing framework on medical appointment no-show dataset to predict the no-shows and removes the bias towards minority class.
文摘Recently,machine learning algorithms have been used in the detection and classification of network attacks.The performance of the algorithms has been evaluated by using benchmark network intrusion datasets such as DARPA98,KDD’99,NSL-KDD,UNSW-NB15,and Caida DDoS.However,these datasets have two major challenges:imbalanced data and highdimensional data.Obtaining high accuracy for all attack types in the dataset allows for high accuracy in imbalanced datasets.On the other hand,having a large number of features increases the runtime load on the algorithms.A novel model is proposed in this paper to overcome these two concerns.The number of features in the model,which has been tested at CICIDS2017,is initially optimized by using genetic algorithms.This optimum feature set has been used to classify network attacks with six well-known classifiers according to high f1-score and g-mean value in minimumtime.Afterwards,amulti-layer perceptron based ensemble learning approach has been applied to improve the models’overall performance.The experimental results showthat the suggested model is acceptable for feature selection as well as classifying network attacks in an imbalanced dataset,with a high f1-score(0.91)and g-mean(0.99)value.Furthermore,it has outperformed base classifier models and voting procedures.
基金partly supported by the Technology Development Program of MSS(No.S3033853)by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(No.2021R1A4A1031509).
文摘A generalization of supervised single-label learning based on the assumption that each sample in a dataset may belong to more than one class simultaneously is called multi-label learning.The main objective of this work is to create a novel framework for learning and classifying imbalancedmulti-label data.This work proposes a framework of two phases.The imbalanced distribution of themulti-label dataset is addressed through the proposed Borderline MLSMOTE resampling method in phase 1.Later,an adaptive weighted l21 norm regularized(Elastic-net)multilabel logistic regression is used to predict unseen samples in phase 2.The proposed Borderline MLSMOTE resampling method focuses on samples with concurrent high labels in contrast to conventional MLSMOTE.The minority labels in these samples are called difficult minority labels and are more prone to penalize classification performance.The concurrentmeasure is considered borderline,and labels associated with samples are regarded as borderline labels in the decision boundary.In phase II,a novel adaptive l21 norm regularized weighted multi-label logistic regression is used to handle balanced data with different weighted synthetic samples.Experimentation on various benchmark datasets shows the outperformance of the proposed method and its powerful predictive performances over existing conventional state-of-the-art multi-label methods.
文摘Data-driven methods are widely considered for fault diagnosis in complex systems.However,in practice,the between-class imbalance due to limited faulty samples may deteriorate their classification performance.To address this issue,synthetic minority methods for enhancing data have been proved to be effective in many applications.Generative adversarial networks(GANs),capable of automatic features extraction,can also be adopted for augmenting the faulty samples.However,the monitoring data of a complex system may include not only continuous signals but also discrete/categorical signals.Since the current GAN methods still have some challenges in handling such heterogeneous monitoring data,a Mixed Dual Discriminator GAN(noted as M-D2GAN)is proposed in this work.In order to render the expanded fault samples more aligned with the real situation and improve the accuracy and robustness of the fault diagnosis model,different types of variables are generated in different ways,including floating-point,integer,categorical,and hierarchical.For effectively considering the class imbalance problem,proper modifications are made to the GAN model,where a normal class discriminator is added.A practical case study concerning the braking system of a high-speed train is carried out to verify the effectiveness of the proposed framework.Compared to the classic GAN,the proposed framework achieves better results with respect to F-measure and G-mean metrics.
文摘Accurate prediction of fatal events in car accidents has significant health management implications.This research article explores the application of imbalanced data handling techniques in machine learning to enhance prediction performance.By implementing these techniques on car accident data,health organizations can identify and forecast a fatal event,enabling more efficient and effective allocation of limited health resources.Concurrently,enhancing the performance of machine learning models through imbalanced data handling techniques can impact health management decisions.Our findings highlight the significance of imbalanced data handling techniques in predicting fatality in car accidents,ultimately contributing to improved road safety and better management of health resources.Moreover,the effective use of imbalanced data demonstrates a substantial improvement in the specificity of the prediction.Addressing the impact of machine learning techniques on imbalanced car accident data can significantly improve overall health outcomes.