Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis by processing MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. The confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
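As a rough illustration of the preprocessing and imbalance-handling steps described above, the sketch below applies CLAHE to a grayscale MRI slice with OpenCV, derives balanced class weights, and computes the MCC. The clip limit, tile size, and toy labels are assumptions for illustration, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from sklearn.metrics import matthews_corrcoef

def enhance_slice(gray_slice: np.ndarray) -> np.ndarray:
    """Contrast-Limited Adaptive Histogram Equalization on an 8-bit MRI slice."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # illustrative settings
    return clahe.apply(gray_slice)

# Balanced class weights for imbalanced AD-stage labels (toy labels shown).
y_train = np.array([0, 0, 0, 0, 1, 2, 2, 3])
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)

# MCC as the robustness-oriented evaluation metric.
y_true, y_pred = [0, 1, 2, 3, 0], [0, 1, 2, 2, 0]
print(matthews_corrcoef(y_true, y_pred))
```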
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient’s anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the Deep AlexNet50 model for MRI classification with discriminative learning methods. There are three stages of learning: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and training continues on an augmented dataset; and in the third stage, AlexNet50 is trained further with the augmented dataset. This method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers such as Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW have been used to compare the performance of the learning process. The HWBA-dataset registers the maximum classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
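The staged, discriminative learning scheme can be pictured as progressively freezing and unfreezing parts of the backbone between stages. The sketch below uses torchvision's AlexNet as a stand-in for the paper's AlexNet50 and an assumed four-class output; it is not the authors' implementation.

```python
import torch
import torchvision.models as models

model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = torch.nn.Linear(4096, 4)   # assumed number of MRI classes

# Stage 1: all parameters trainable on the original dataset (default state).
# Stage 2: freeze the convolutional feature extractor and train only the head
# on the augmented dataset.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# Stage 3: unfreeze everything and continue training on the augmented dataset.
for p in model.parameters():
    p.requires_grad = True
```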
Aiming at the limitations of rapid fault diagnosis of blast furnaces, a novel strategy based on a cost-conscious least squares support vector machine (LS-SVM) is proposed to solve this problem. Firstly, modified discrete particle swarm optimization is applied to optimize the feature selection and the LS-SVM parameters. Secondly, a cost-conscious formula is presented for the fitness function; it takes into account training time, recognition accuracy and the feature selection. The CLS-SVM algorithm is presented to increase the performance of the LS-SVM classifier. The new method can select the best fault features in much shorter time and has fewer support vectors and better generalization performance in the application of fault diagnosis of the blast furnace. Thirdly, a gradual change binary tree is established for blast furnace fault diagnosis. It is a multi-class classification method based on the center-of-gravity distance of clusters. A gradual change classification percentage is used to select samples randomly. The proposed new method raises the speed of diagnosis, optimizes the classification accuracy and has good generalization ability for fault diagnosis in the blast furnace application.
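A cost-conscious fitness of this kind can be written as a weighted trade-off between recognition accuracy, training time, and the number of selected features. The weights and functional form below are illustrative assumptions, not the paper's exact formula.

```python
def cost_conscious_fitness(accuracy, train_time, n_selected, n_total,
                           w_acc=0.7, w_time=0.2, w_feat=0.1):
    """Higher is better: reward accuracy, penalize long training and many features."""
    time_term = 1.0 / (1.0 + train_time)       # shorter training -> larger term
    feat_term = 1.0 - n_selected / n_total     # fewer selected features -> larger term
    return w_acc * accuracy + w_time * time_term + w_feat * feat_term

print(cost_conscious_fitness(accuracy=0.96, train_time=2.5, n_selected=8, n_total=20))
```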
Considering strip steel surface defect samples, a multi-class classification method was proposed based on enhanced least squares twin support vector machines (ELS-TWSVMs) and a binary tree. Firstly, a pruning method based on region sample centers with an adjustable pruning scale was used to prune the data samples. This method could reduce the classifier's training time and testing time. Secondly, ELS-TWSVM was proposed to classify the data samples. By introducing an error variable contribution parameter and a weight parameter, ELS-TWSVM could restrain the impact of noise samples and achieve better classification accuracy. Finally, multi-class classification algorithms of ELS-TWSVM were proposed by combining ELS-TWSVM and a complete binary tree. Some experiments were made on two-dimensional datasets and strip steel surface defect datasets. The experiments showed that the multi-class classification methods of ELS-TWSVM had higher classification speed and accuracy for datasets with large-scale, unbalanced and noisy samples.
Defect classification is the key task of a steel surface defect detection system. Current defect classification algorithms have not taken feature noise into consideration. In order to reduce the adverse impact of feature noise, an anti-noise multi-class classification method was proposed for steel surface defects. On the one hand, a novel anti-noise support vector hyper-spheres (ASVHs) classifier was formulated. For N types of defects, the ASVHs classifier built N hyper-spheres. These hyper-spheres were insensitive to feature and label noise. On the other hand, in order to reduce the costs of online time and storage space, the defect samples were pruned by support vector data description with a parameter iteration adjustment strategy. In the end, the ASVHs classifier was built with a sparse defect sample set and auxiliary information. Experimental results show that the novel multi-class classification method has high efficiency and accuracy for corrupted defect samples on steel surfaces.
Focusing on strip steel surface defects classification, a novel support vector machine with adjustable hyper-sphere (AHSVM) is formulated. Meanwhile, a new multi-class classification method is proposed. Originating from support vector data description, AHSVM adopts a hyper-sphere to solve the classification problem. AHSVM obeys two principles: margin maximization and inner-class dispersion minimization. Moreover, the hyper-sphere of AHSVM is adjustable, which makes the final classification hyper-sphere optimal for the training dataset. On the other hand, AHSVM is combined with a binary tree to solve multi-class classification for steel surface defects. A scheme of sample pruning in the mapped feature space is provided, which can reduce the number of training samples under the premise of classification accuracy, resulting in improvements of classification speed. Finally, testing experiments are done for eight types of strip steel surface defects. Experimental results show that the multi-class AHSVM classifier exhibits satisfactory results in classification accuracy and efficiency.
To solve multi-class fault diagnosis tasks, decision tree support vector machine (DTSVM), which combines SVM and decision tree using the concept of dichotomy, is proposed. Since the classification performance of DTSVM highly depends on its structure, a genetic algorithm is introduced into the formation of the decision tree to cluster the classes with maximum distance between the clustering centers of the two sub-classes, so that the most separable classes are separated at each node of the decision tree. Numerical simulations conducted on three datasets, compared with "one-against-all" and "one-against-one", demonstrate that the proposed method has better performance and higher generalization ability than the two conventional methods.
The basic idea of multi-class classification is a disassembly method, which is to decompose a multi-class classification task into several binary classification tasks. In order to improve the accuracy of multi-class classification in the case of insufficient samples, this paper proposes a multi-class classification method combining K-means and multi-task relationship learning (MTRL). The method first uses the One vs. Rest split to disassemble the multi-class classification task into binary classification tasks. K-means is used to down-sample the dataset of each task, which can prevent over-fitting of the model while reducing training costs. Finally, the sampled dataset is applied to the MTRL, and multiple binary classifiers are trained together. With the help of MTRL, this method can utilize the inter-task association to train the model and achieve the purpose of improving the classification accuracy of each binary classifier. The effectiveness of the proposed approach is demonstrated by experimental results on the Iris dataset, Wine dataset, Multiple Features dataset, Wireless Indoor Localization dataset and Avila dataset.
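The One-vs.-Rest decomposition with K-means down-sampling can be sketched as follows. The cluster count and the use of cluster centres as surrogate samples are assumptions for illustration, and the MTRL joint-training step is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def ovr_kmeans_tasks(X, y, n_clusters=30, seed=0):
    """Decompose a multi-class problem into One-vs.-Rest binary tasks and shrink
    each side of every task to its K-means cluster centres."""
    tasks = []
    for c in np.unique(y):
        t = (y == c).astype(int)                    # One vs. Rest labels
        parts, labels = [], []
        for side in (1, 0):
            Xs = X[t == side]
            k = min(n_clusters, len(Xs))
            km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Xs)
            parts.append(km.cluster_centers_)
            labels.append(np.full(k, side))
        tasks.append((np.vstack(parts), np.concatenate(labels)))
    return tasks   # one (features, binary labels) pair per class, ready for joint training
```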
The proposed deep learning algorithm will be integrated as a binary classifier under the umbrella of a multi-class classification tool to facilitate the automated detection of non-healthy deformities, anatomical landmarks, pathological findings, other anomalies and normal cases, by examining medical endoscopic images of the GI tract. Each binary classifier is trained to detect one specific non-healthy condition. The algorithm analyzed in the present work expands the detection ability of this tool by classifying GI tract image snapshots into two classes, depicting haemorrhage and non-haemorrhage states. The proposed algorithm is the result of a collaboration between interdisciplinary specialists in AI and Data Analysis, Computer Vision, and gastroenterologists from four University Gastroenterology Departments of Greek Medical Schools. The data used are 195 videos (177 from non-healthy cases and 18 from healthy cases) captured with the PillCam® device (Medtronic), originating from 195 patients, all diagnosed with different forms of angioectasia, haemorrhages and other diseases at different sites of the gastrointestinal (GI) tract, mainly including difficult cases of diagnosis. Our AI algorithm is based on a convolutional neural network (CNN) trained on images annotated at image level, using a semantic tag indicating whether the image contains angioectasia and haemorrhage traces or not. At least 22 CNN architectures were created and evaluated, some of which were pre-trained by applying transfer learning on ImageNet data. All the CNN variations were trained on a dataset with 50% prevalence and evaluated on unseen data. On test data, the best results were obtained from our CNN architectures that do not utilize a transfer-learning backbone. Across a balanced dataset of non-healthy and healthy images from 39 videos of different patients, the model identified the correct diagnosis with 90% sensitivity, 92% specificity, 91.8% precision, 8% FPR, and 10% FNR. We also compared the performance of our best CNN algorithm with a same-goal algorithm based on HSV colorimetric lesion features extracted from pixel-level annotations, with both algorithms trained and tested on the same data. The evaluation shows that the CNN trained on image-level annotated images is 9% less sensitive and achieves 2.6% less precision, 1.2% less FPR, and 7% less FNR than the approach based on HSV filters extracted from pixel-level annotated training data.
Cyberbullying on social media poses significant psychological risks, yet most detection systems over-simplify the task by focusing on binary classification, ignoring nuanced categories like passive-aggressive remarks or indirect slurs. To address this gap, we propose a hybrid framework combining Term Frequency-Inverse Document Frequency (TF-IDF), word-to-vector (Word2Vec), and Bidirectional Encoder Representations from Transformers (BERT) based models for multi-class cyberbullying detection. Our approach integrates TF-IDF for lexical specificity and Word2Vec for semantic relationships, fused with BERT’s contextual embeddings to capture syntactic and semantic complexities. We evaluate the framework on a publicly available dataset of 47,000 annotated social media posts across five cyberbullying categories: age, ethnicity, gender, religion, and indirect aggression. Among the BERT variants tested, BERT Base Uncased achieved the highest performance with 93% accuracy (±1% standard deviation across 5-fold cross-validation) and an average AUC of 0.96, outperforming standalone TF-IDF (78%) and Word2Vec (82%) models. Notably, it achieved near-perfect AUC scores (0.99) for age- and ethnicity-based bullying. A comparative analysis with state-of-the-art benchmarks, including Generative Pre-trained Transformer 2 (GPT-2) and Text-to-Text Transfer Transformer (T5) models, highlights BERT’s superiority in handling ambiguous language. This work advances cyberbullying detection by demonstrating how hybrid feature extraction and transformer models improve multi-class classification, offering a scalable solution for moderating nuanced harmful content.
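One way to realize the hybrid feature extraction is to concatenate sparse TF-IDF vectors with BERT's [CLS] embeddings before classification. The sketch below shows that fusion on toy posts (Word2Vec omitted for brevity); it is an assumption about the pipeline, not the authors' code.

```python
import numpy as np
import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from transformers import AutoTokenizer, AutoModel

texts = ["you are so dumb", "have a great day"]            # toy posts
tfidf = TfidfVectorizer(max_features=300).fit_transform(texts).toarray()

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    cls = bert(**enc).last_hidden_state[:, 0, :].numpy()   # contextual [CLS] embeddings

fused = np.hstack([tfidf, cls])   # lexical + contextual features for the final classifier
print(fused.shape)
```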
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips the edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model’s performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets: NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1, to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
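The perturbation step, flipping the K edges with the highest estimator scores, can be sketched on a dense adjacency matrix as below; the random score matrix is only a placeholder for the paper's implicit estimator.

```python
import numpy as np

def flip_top_k_edges(adj, scores, k=3):
    """Flip the k upper-triangular entries with the highest scores in a
    symmetric 0/1 adjacency matrix (adding or removing edges)."""
    A = adj.copy()
    iu = np.triu_indices_from(A, k=1)
    top = np.argsort(scores[iu])[::-1][:k]
    for idx in top:
        i, j = iu[0][idx], iu[1][idx]
        A[i, j] = A[j, i] = 1 - A[i, j]
    return A

adj = np.zeros((5, 5), dtype=int)
scores = np.random.default_rng(0).random((5, 5))   # stand-in for estimator scores
print(flip_top_k_edges(adj, scores, k=3))
```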
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and over-deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
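The graph-construction stage, SLIC over-segmentation followed by a region-adjacency edge set, can be approximated as below for an RGB image array. The segment count and compactness are assumed values, and the EfficientNet-B0 node embeddings and GAT layers are not shown.

```python
import numpy as np
from skimage.segmentation import slic

def lesion_region_graph(image, n_segments=150):
    """Return SLIC superpixel labels and the edges between spatially adjacent superpixels."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    edges = set()
    for dr, dc in ((0, 1), (1, 0)):                        # horizontal / vertical neighbours
        a = labels[:labels.shape[0] - dr, :labels.shape[1] - dc]
        b = labels[dr:, dc:]
        diff = a != b
        for u, v in zip(a[diff].ravel(), b[diff].ravel()):
            edges.add((min(int(u), int(v)), max(int(u), int(v))))
    return labels, sorted(edges)
```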
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
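The audio front end, converting a sound clip to a log-scaled Mel-spectrogram for the CNN, could look roughly like this; the sampling rate, Mel-band count, and file name are assumed values, not the study's exact configuration.

```python
import numpy as np
import librosa

def log_mel_spectrogram(path, sr=22050, n_mels=128):
    """Load an environmental audio clip and return its log-scaled Mel-spectrogram."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# spec = log_mel_spectrogram("forest_clip.wav")   # hypothetical file; feed spec to the CNN
```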
The rapid and increasing growth in the volume and number of cyber threats from malware is not a real danger; the real threat lies in the obfuscation of these cyberattacks, as they constantly change their behavior, making detection more difficult. Numerous researchers and developers have devoted considerable attention to this topic; however, the research field has not yet been fully saturated with high-quality studies that address these problems. For this reason, this paper presents a novel multi-objective Markov-enhanced adaptive whale optimization (MOMEAWO) cybersecurity model to improve the classification of binary and multi-class malware threats through the proposed MOMEAWO approach. The proposed MOMEAWO cybersecurity model aims to provide an innovative solution for analyzing, detecting, and classifying the behavior of obfuscated malware within their respective families. The proposed model includes three classification types: binary classification and multi-class classification (e.g., four families and 16 malware families). To evaluate the performance of this model, we used a recently published dataset called the Canadian Institute for Cybersecurity Malware Memory Analysis (CIC-MalMem-2022) that contains balanced data. The results show near-perfect accuracy in binary classification and high accuracy in multi-class classification compared with related work using the same dataset.
The paper presents an improved-RFC (Random Forest Classifier) approach for the multi-class disease classification problem. It consists of a combination of the Random Forest machine learning algorithm, an attribute evaluator method and an instance filter method. It intends to improve the performance of the Random Forest algorithm. The performance results confirm that the proposed improved-RFC approach performs better than the Random Forest algorithm, with an increase in disease classification accuracy up to 97.80% for the multi-class groundnut disease dataset. The performance of the improved-RFC approach is tested for its efficiency on five benchmark datasets. It shows superior performance on all these datasets.
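The combination of a Random Forest with an attribute evaluator can be sketched with a scikit-learn pipeline; the Iris data, mutual-information selector, and tree count below are stand-ins for the paper's groundnut dataset and exact components, and the instance filter step is omitted.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)                      # stand-in dataset
pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=3),             # attribute-evaluator step
    RandomForestClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```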
Artificial intelligence (AI), particularly deep learning algorithms utilizing convolutional neural networks, plays an increasingly pivotal role in enhancing medical image examination. It demonstrates the potential for improving diagnostic accuracy within dental care. Orthopantomograms (OPGs) are essential in dentistry; however, their manual interpretation is often inconsistent and tedious. To the best of our knowledge, this is the first comprehensive application of YOLOv5m for the simultaneous detection and classification of six distinct dental pathologies using panoramic OPG images. The model was trained and refined on a custom dataset that began with 232 panoramic radiographs and was later expanded to 604 samples. These included annotated subclasses representing Caries, Infection, Impacted Teeth, Fractured Teeth, Broken Crowns, and Healthy conditions. The training was performed using GPU resources alongside tuned hyperparameters for batch size, learning rate schedule, and early stopping, tailored for generalization to prevent overfitting. Evaluation on a held-out test set showed strong performance in the detection and localization of various dental pathologies and robust overall accuracy. At an IoU of 0.5, the system obtained a mean precision of 94.22% and recall of 90.42%, with mAP being 93.71%. This research confirms the use of YOLOv5m as a robust, highly efficient AI technology for the analysis of dental pathologies using OPGs, providing a clinically useful solution to enhance workflow efficiency and aid in sustaining consistency in complex multi-dimensional case evaluations.
Multi-class classification can be solved by decomposing it into a set of binary classification problems according to some encoding rule, e.g., one-vs-one, one-vs-rest, or error-correcting output codes. Existing works solve these binary classification problems in the original feature space, which may be suboptimal as different binary classification problems correspond to different positive and negative examples. In this paper, we propose to learn label-specific features for each decomposed binary classification problem to account for the specific characteristics contained in its positive and negative examples. Specifically, to generate the label-specific features, clustering analysis is conducted separately on the positive and negative examples in each decomposed binary data set to discover their inherent information, and the label-specific features for an example are then obtained by measuring the similarity between it and all cluster centers. Experiments clearly validate the effectiveness of learning label-specific features for decomposition-based multi-class classification.
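For one decomposed binary task, the label-specific representation can be built by clustering the positive and negative examples separately and describing each example by its relation to every cluster centre. The sketch below uses distances to the centres as the similarity measure, which is an assumption about the exact metric, not the paper's formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import euclidean_distances

def label_specific_features(X, y_binary, k=5, seed=0):
    """Cluster positives and negatives separately; map each example to its
    distances to all cluster centres (the label-specific feature vector)."""
    centres = []
    for label in (1, 0):
        Xl = X[y_binary == label]
        k_l = min(k, len(Xl))
        km = KMeans(n_clusters=k_l, n_init=10, random_state=seed).fit(Xl)
        centres.append(km.cluster_centers_)
    return euclidean_distances(X, np.vstack(centres))
```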
Urban tree species provide various essential ecosystem services in cities, such as regulating urban temperatures, reducing noise, capturing carbon, and mitigating the urban heat island effect. The quality of these services is influenced by species diversity, tree health, and the distribution and composition of trees. Traditionally, data on urban trees has been collected through field surveys and manual interpretation of remote sensing images. In this study, we evaluated the effectiveness of multispectral airborne laser scanning (ALS) data in classifying 24 common urban roadside tree species in Espoo, Finland. Tree crown structure information, intensity features, and spectral data were used for classification. Eight different machine learning algorithms were tested, with the extra trees (ET) algorithm performing the best, achieving an overall accuracy of 71.7% using multispectral LiDAR data. This result highlights that integrating structural and spectral information within a single framework can improve the classification accuracy. Future research will focus on identifying the most important features for species classification and developing algorithms with greater efficiency and accuracy.
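The best-performing classifier, extra trees on fused crown-structure, intensity, and spectral features, can be reproduced in outline with scikit-learn; the synthetic feature matrix below is only a placeholder for the multispectral ALS features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Placeholder for crown-structure, intensity, and spectral features of 24 labelled species.
X, y = make_classification(n_samples=1200, n_features=40, n_informative=25,
                           n_classes=24, n_clusters_per_class=1, random_state=0)
clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```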