Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastically augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips the edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets: NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1, to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
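The top-K edge-flipping step described above can be sketched in a few lines (a minimal illustration, assuming a precomputed symmetric score matrix from the estimator; the toy graph and score values are hypothetical):

```python
import numpy as np

def flip_top_k_edges(adj, edge_scores, k):
    """Flip the k node pairs with the highest estimator scores.

    adj: symmetric 0/1 adjacency matrix (numpy array)
    edge_scores: symmetric matrix of estimator scores, one per node pair
    k: attack budget (number of edge flips)
    """
    n = adj.shape[0]
    # Consider each unordered pair (i, j), i < j, exactly once.
    iu, ju = np.triu_indices(n, k=1)
    order = np.argsort(edge_scores[iu, ju])[::-1]  # highest score first
    perturbed = adj.copy()
    for idx in order[:k]:
        i, j = iu[idx], ju[idx]
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]  # flip edge
    return perturbed

# Toy 4-node graph attacked with a budget of 2 flips.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
scores = np.array([[0.0, 0.1, 0.9, 0.2],
                   [0.1, 0.0, 0.3, 0.8],
                   [0.9, 0.3, 0.0, 0.4],
                   [0.2, 0.8, 0.4, 0.0]])
attacked = flip_top_k_edges(adj, scores, k=2)
```

Here the two highest-scoring pairs, (0, 2) and (1, 3), are non-edges, so the attack adds them; in general a flip either adds or removes an edge.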
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show that the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates that EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and too deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
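The region-adjacency-graph construction can be sketched as follows (a simplified illustration of the general technique; the tiny label map below is hypothetical, whereas in the paper the labels come from SLIC over-segmentation):

```python
import numpy as np

def region_adjacency_edges(labels):
    """Return the set of edges between superpixels that touch
    horizontally or vertically in the label map."""
    edges = set()
    # Compare each pixel with its right and bottom neighbours.
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return edges

# Toy 3x4 "segmentation" with three superpixels (0, 1, 2).
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2]])
edges = region_adjacency_edges(labels)
```

Each edge then carries the spatial-continuity relation the GAT attends over; node features would be attached separately (EfficientNet-B0 embeddings in the paper).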
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited by lack of validation, interpretability concerns, and real-world deployment barriers.
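The Dice score used in the segmentation comparisons is straightforward to compute (a generic illustration on hypothetical binary masks, not code from any surveyed study):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0s and 1s."""
    pred, truth = list(pred), list(truth)
    intersection = sum(p * t for p, t in zip(pred, truth))
    denom = sum(pred) + sum(truth)
    return 2 * intersection / denom if denom else 1.0

# Two hypothetical 8-pixel masks overlapping on 3 pixels.
pred  = [1, 1, 1, 1, 0, 0, 0, 0]
truth = [0, 1, 1, 1, 1, 0, 0, 0]
score = dice_score(pred, truth)  # 2*3 / (4+4) = 0.75
```

A Dice score above 0.90, the bar the best hybrid models cleared, thus means the predicted and reference tumor masks overlap almost completely relative to their combined size.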
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
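The Mel scale underlying the spectrogram features can be sketched directly from its standard formula (a generic illustration of the mapping and of how filterbank band edges are placed, not the authors' exact pipeline; the band count and frequency range below are hypothetical):

```python
import numpy as np

def hz_to_mel(f):
    """Convert frequency in Hz to the (HTK-convention) Mel scale."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(n_bands, f_min, f_max):
    """Band-edge frequencies (Hz), equally spaced on the Mel scale,
    as used to position a triangular Mel filterbank."""
    mels = np.linspace(hz_to_mel(f_min), hz_to_mel(f_max), n_bands + 2)
    return mel_to_hz(mels)

# 40 Mel bands spanning 0-8000 Hz (hypothetical settings).
edges = mel_band_edges(n_bands=40, f_min=0.0, f_max=8000.0)
```

Because the spacing is logarithmic above ~700 Hz, low frequencies, where fire crackle energy concentrates, get proportionally finer resolution than a plain linear spectrogram.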
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer’s diagnosis from MRI scans. Image enhancement is performed using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement scheme integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model achieves a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while reducing misdiagnoses for every AD stage. Confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
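The Matthews Correlation Coefficient used for evaluation has a closed form over confusion-matrix counts (a generic binary-case illustration; the counts below are hypothetical, and the paper's multi-class MCC generalizes this formula):

```python
import math

def mcc(tp, tn, fp, fn):
    """Binary Matthews Correlation Coefficient from confusion counts.
    Ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical classifier: 90 TP, 85 TN, 5 FP, 10 FN.
score = mcc(tp=90, tn=85, fp=5, fn=10)
```

Unlike plain accuracy, MCC stays informative under the class imbalance the paper highlights, since every quadrant of the confusion matrix enters the formula.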
Accompanying rapid developments in hepatic surgery, the number of surgeries and identifications of histological types of primary hepatic space-occupying lesions (PHSOLs) have increased dramatically. This has led to many changes in the surgicopathological spectrum of PHSOLs and has contributed to a theoretical basis for modern hepatic surgery and oncological pathology. Between 1982 and 2009 at the Eastern Hepatobiliary Surgery Hospital (EHBH) in Shanghai, 31,901 patients underwent surgery and were diagnosed as having a PHSOL. In this paper, we present an analysis of the PHSOL cases at the EHBH for this time period, along with results from a systematic literature review. We describe a surgicopathological spectrum comprising more than 100 types of PHSOLs that can be stratified into three categories: tumor-like, benign, and malignant. We also stratified the PHSOLs into six subtypes derived from hepatocytes; cholangiocytes; vascular, lymphoid, and hemopoietic tissues; muscular, fibrous, and adipose tissues; neural and neuroendocrine tissues; and miscellaneous tissues. The present study provides a new classification system that can be used as a current reference for clinicians and pathologists to make correct diagnoses and differential diagnoses among various PHSOLs.
AIM: To establish a computed tomography (CT)-morphological classification for hepatic alveolar echinococcosis.
METHODS: The CT morphology of hepatic lesions in 228 patients with confirmed alveolar echinococcosis (AE), drawn from the Echinococcus Databank of the University Hospital of Ulm, was reviewed retrospectively. For this purpose, CT datasets from combined positron emission tomography (PET)-CT examinations were evaluated. The diagnosis of AE was made in patients with unequivocal seropositivity; positive histological findings following diagnostic puncture or partial resection of the liver; and/or findings typical for AE at ultrasonography, CT, magnetic resonance imaging, or PET-CT. The CT-morphological findings were grouped into the new classification scheme.
RESULTS: Within the classification, each lesion was assigned to one of five “primary morphologies” as well as to one of six “patterns of calcification”. Primary morphology and pattern of calcification were first assessed separately and then in combination, except that primary morphology V is not further characterized by a pattern of calcification. Based on the five primary morphologies, further descriptive sub-criteria were appended to types I-III. An analysis of the calcification pattern in relation to the primary morphology revealed the exclusive association of central calcification with type IV primary morphology. Similarly, certain calcification patterns exhibited a clear predominance for other primary morphologies, which underscores the delimitation of the individual primary morphological types from each other. These relationships in terms of calcification patterns extend into the primary morphological sub-criteria, demonstrating the clear subordination of those criteria.
CONCLUSION: The proposed CT-morphological classification (EMUC-CT) is intended to facilitate the recognition and interpretation of lesions in hepatic alveolar echinococcosis. This could help to better interpret different clinical courses and should improve the comparability of CT findings in scientific studies.
In this paper the entanglement of pure 3-qubit states is discussed. The local unitary (LU) polynomial invariants that are closely related to the canonical forms are constructed, and the relations among the coefficients of the canonical forms are given. Then the stochastic local operations and classical communication (SLOCC) classification of the states is discussed on the basis of the canonical forms, and the symmetric canonical form of the states without 3-tangle is discussed. Finally, we give the relation between the LU polynomial invariants and the SLOCC classification.
The Dingqing ophiolite is located in the eastern segment of the Bangong-Nujiang suture zone. This suture zone trends W–E, parallel with the Yarlung–Zangbo suture zone, and is a strategic area for exploring chromite deposits in China. The Dingqing ophiolite is distributed in a nearly SE–NW direction. According to its spatial distribution, the Dingqing ophiolite is subdivided into two massifs, the East and the West massifs, and covers an area of nearly 600 km². The ophiolite is composed of peridotite, pyroxenite, gabbro, diabase, basalt, plagiogranite, and chert (Fig. 1). Peridotite is the main lithology, covering about 90% of the total area of the ophiolite, and is dominated by harzburgite with small amounts of dunite. The Dingqing harzburgite displays different textures, such as massive, taxitic, oriented, and spherulitic textures (Fig. 2d–i). These four types of harzburgite occur in both the East and West massifs, especially in the Laraka area of the eastern part of the East massif. Dunites have different occurrences in field outcrops, such as lenticular or strip-shaped, thin-shell, and agglomerate varieties (Fig. 2a–c). On the basis of detailed field work, we have discovered 83 chromitite bodies, including 27 in the East massif and 56 in the West massif. According to the scale and number of the chromitite bodies, we have identified four prospecting areas, namely Laraka, Latanguo, Langda, and Nazona. Chromitites in the Dingqing ophiolite show different textures, including massive, disseminated, veined, and disseminated-banded textures (Fig. 3). On the basis of the Cr# (= Cr/(Cr+Al) × 100) of chromite, we have classified the Dingqing chromitite into high-Cr, medium-high-Cr, medium-Cr, and low-Cr types (Figs. 4, 5). Among them, the Cr# of the low-Cr chromitite is extremely low, ranging from 9.23 to 14.01 with an average of 11.89, and its TiO2 content is 0.00% to 0.04% with an average of 0.01%, suggesting it may represent a new type of chromitite occurrence. These different types of chromitites contain different assemblages of mineral inclusions. The inclusions in the high-Cr chromitite are mainly clinopyroxene with a small amount of olivine; those in the medium-high-Cr chromitite are mainly amphibole with small amounts of clinopyroxene and phlogopite; the low-Cr chromite rarely develops mineral inclusions, although micron-sized clinopyroxene inclusions are common in the olivines that occur as gangue minerals within it. These different types of chromite ore bodies show a certain correspondence with their field occurrence, which may also constrain their genesis; this will be developed further in follow-up work.
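The Cr# index used in this classification is a simple ratio of cation proportions (a minimal illustration of the formula given above; the thresholds separating the four chromitite types are not stated in the abstract, so only the index itself is computed, and the analysis values below are hypothetical):

```python
def cr_number(cr, al):
    """Cr# = Cr / (Cr + Al) * 100, from atomic (cation) proportions."""
    return cr / (cr + al) * 100.0

# A hypothetical low-Cr spinel analysis: far more Al than Cr.
low_cr = cr_number(cr=0.12, al=0.88)   # ~12, within the reported 9.23-14.01 range
# A hypothetical high-Cr spinel analysis.
high_cr = cr_number(cr=0.8, al=0.2)    # 80.0
```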
To explore the potential of conventional image processing techniques in the classification of cervical cancer cells, in this work a co-occurrence histogram method was employed for image feature extraction, and an ensemble classifier was developed by combining base classifiers, namely the artificial neural network (ANN), random forest (RF), and support vector machine (SVM), for image classification. The segmented pap-smear cell image dataset was constructed by the k-means clustering technique and used to evaluate the performance of the ensemble classifier formed from the base classifiers above. The result was also compared with those achieved by the individual base classifiers, as well as with classifiers trained on color, texture, and shape features. The maximum average classification accuracy of 93.44% was obtained when the ensemble classifier was trained with co-occurrence histogram features, which indicates that the ensemble classifier trained with co-occurrence histogram features is more suitable and advantageous for the classification of cervical cancer cells.
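The ensemble step can be sketched as a simple majority vote over the base classifiers' predictions (a schematic stand-in: the real ANN/RF/SVM models are replaced by hypothetical fixed prediction lists here, and the abstract does not specify the exact combination rule):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by majority vote per sample.

    predictions: list of equal-length label sequences, one per base classifier.
    Ties are broken in favour of the label seen first.
    """
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical labels from three base classifiers on four cells.
ann = ["normal",   "abnormal", "normal", "abnormal"]
rf  = ["normal",   "normal",   "normal", "abnormal"]
svm = ["abnormal", "abnormal", "normal", "normal"]
ensemble = majority_vote([ann, rf, svm])
```

With an odd number of voters and binary labels, as here, the vote is never tied, so each sample simply takes the label two of the three base classifiers agree on.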
In the Saharan domain, the Tarfaya-Laayoune coastal basin developed on a stable passive margin, where asymmetrical sedimentation increases from east to west, reaching a sediment stack of about 14 kilometers. The morphology of the study area corresponds to a vast plateau (hamada) presenting occasional major reliefs. A remote sensing approach was therefore applied to identify the most reliable methods for lithological mapping. Two supervised machine learning classification methods (Artificial Neural Network and Spectral Information Divergence) were evaluated to select the most accurate classification for our lithofacies mapping. The latest geological maps and RGB images were used in pseudo-color groups to identify important areas and to collect the ROIs that served as training samples for the classifications. The results showed a clear distinction between the various formation units, and the ANN classification of the study area was very close to the field reality. Thus, the ANN method is more accurate, with an overall accuracy of 92.56% and a Kappa coefficient of 0.9143.
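The Kappa coefficient reported alongside overall accuracy measures agreement beyond chance and can be computed directly from a confusion matrix (a generic illustration; the small 2-class matrix below is hypothetical, whereas the study's matrix covers all mapped lithological units):

```python
def cohen_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (list of lists):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical 2-class confusion matrix for a classified map vs. ground truth.
kappa = cohen_kappa([[45, 5],
                     [10, 40]])
```

A kappa of 0.9143, as reported for the ANN map, indicates near-perfect agreement with the reference data after discounting what random labeling would achieve.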
Urban tree species provide various essential ecosystem services in cities, such as regulating urban temperatures, reducing noise, capturing carbon, and mitigating the urban heat island effect. The quality of these services is influenced by species diversity, tree health, and the distribution and composition of trees. Traditionally, data on urban trees have been collected through field surveys and manual interpretation of remote sensing images. In this study, we evaluated the effectiveness of multispectral airborne laser scanning (ALS) data in classifying 24 common urban roadside tree species in Espoo, Finland. Tree crown structure information, intensity features, and spectral data were used for classification. Eight different machine learning algorithms were tested, with the extra trees (ET) algorithm performing best, achieving an overall accuracy of 71.7% using multispectral LiDAR data. This result highlights that integrating structural and spectral information within a single framework can improve classification accuracy. Future research will focus on identifying the most important features for species classification and on developing algorithms with greater efficiency and accuracy.
The cleanliness of seed cotton plays a critical role in the pre-treatment of cotton textiles, and the removal of impurities during the harvesting process directly determines the quality and market value of cotton textiles. By fusing band combination optimization with deep learning, this study aims to achieve more efficient and accurate detection of film impurities in seed cotton on the production line. Applying hyperspectral imaging and a one-dimensional deep learning algorithm, we detect and classify impurities in seed cotton after harvest. The main categories detected include pure cotton, conveyor belt, film covering seed cotton, and film adhered to the conveyor belt. The proposed method achieves an impurity detection rate of 99.698%. To further confirm the feasibility and practical application potential of this strategy, we compare our results against existing mainstream methods. In addition, the model shows excellent recognition performance on pseudo-color images of real samples. With a processing time of 11.764 μs per pixel in our experiments, the method meets the speed requirements of real production lines while maintaining accuracy. This strategy provides an accurate and efficient method for removing impurities during cotton processing.
Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-known estimating tool for medical diagnosis, employing the classification of images to reveal conditions of coronary artery disease (CAD). The automatic classification of SPECT images has achieved near-optimal accuracy when using convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is performed by a U-Net architecture that ensures effective denoising. Attenuation correction is implemented by a convolutional neural network model that can remove the attenuation affecting the feature extraction process of classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed. It merges features extracted at different scales and makes the model learn the features more efficiently. Three scales of filters of size 3×3 are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models, and it achieves better classification with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
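The multi-scale idea, applying the same small filter at several dilation rates and merging the results, can be sketched in plain numpy for the 1-D case (a conceptual illustration of dilated convolution and feature merging, not the authors' MSDC network; the signal and filter are hypothetical):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1-D correlation of signal x with a 3-tap filter w,
    sampling inputs `dilation` steps apart (receptive field 2*dilation+1)."""
    span = 2 * dilation
    return np.array([
        sum(w[k] * x[i + k * dilation] for k in range(3))
        for i in range(len(x) - span)
    ])

x = np.arange(10, dtype=float)
w = np.array([1.0, 0.0, -1.0])  # simple difference filter

# The same 3-tap filter at three dilation rates sees three scales;
# crop the outputs to a common length and stack them as merged features.
feats = [dilated_conv1d(x, w, d) for d in (1, 2, 3)]
n = min(len(f) for f in feats)
merged = np.stack([f[:n] for f in feats])  # shape (3, n)
```

Each row of `merged` responds to structure at a different scale of the input, which is the property the MSDC network exploits when merging features for classification.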
Diagnosing cardiac diseases relies heavily on electrocardiogram (ECG) analysis, but detecting myocardial infarction-related arrhythmias remains challenging due to irregular heartbeats and signal variations. Despite advancements in machine learning, achieving both high accuracy and low computational cost for arrhythmia classification remains a critical issue. Computer-aided diagnosis systems can play a key role in early detection, reducing mortality rates associated with cardiac disorders. This study proposes a fully automated approach for ECG arrhythmia classification using deep learning and machine learning techniques to improve diagnostic accuracy while minimizing processing time. The methodology consists of three stages: 1) preprocessing, where ECG signals undergo noise reduction and feature extraction; 2) feature identification, where deep convolutional neural network (CNN) blocks, combined with data augmentation and transfer learning, extract key parameters; and 3) classification, where a hybrid CNN-SVM model is employed for arrhythmia recognition. CNN-extracted features were fed into a binary support vector machine (SVM) classifier, and model performance was assessed using five-fold cross-validation. Experimental findings demonstrated that the CNN2 model achieved 85.52% accuracy, while the hybrid CNN2-SVM approach significantly improved accuracy to 97.33%, outperforming conventional methods. This model enhances classification efficiency while reducing computational complexity. The proposed approach bridges the gap between accuracy and processing speed in ECG arrhythmia classification, offering a promising solution for real-time clinical applications. Its superior performance compared to nonlinear classifiers highlights its potential for improving automated cardiac diagnosis.
In the era of precision medicine, the classification of diabetes mellitus has evolved beyond the traditional categories. Various classification methods now account for a multitude of factors, including variations in specific genes, type of β-cell impairment, degree of insulin resistance, and clinical characteristics of metabolic profiles. Improved classification methods enable healthcare providers to formulate blood glucose management strategies more precisely. Applying these updated classification systems will assist clinicians in further optimising treatment plans, including targeted drug therapies, personalized dietary advice, and specific exercise plans. Ultimately, this will facilitate stricter blood glucose control, minimize the risks of hypoglycaemia and hyperglycaemia, and reduce the long-term complications associated with diabetes.
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient’s anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of a deep AlexNet50 model for MRI classification with discriminative learning methods. Learning proceeds in three stages: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and training continues on an augmented dataset; and in the third stage, AlexNet50 is trained further with the augmented dataset. This study used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University dataset (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers, including Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW, were used to compare the performance of the learning process. The HWBA-dataset registers the best classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. The classification method of encrypted traffic based...With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. The classification method of encrypted traffic based on GNN can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called Flow Mapping Graph (FMG). FMG establishes sequential edges between vertexes by the arrival order of packets and establishes jump-order edges between vertexes by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packet but also strengthens the relationship between the client or server packets. According to FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encryption stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models in four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy rate of the TMC-GCN model is 96.13%, the recall rate is 95.04%, and the F1 rate is 94.54%.展开更多
Funding: supported by the National Key Research and Development Program of China (Grant No. 2024YFE0209000) and the NSFC (Grant No. U23B2019).
Abstract: Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an unsupervised attack method for graph classification that operates without label information, broadening its applicability. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastically augmented views of the graphs. To perturb the graphs effectively, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips the edges with the top-K highest scores, as determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism specifically tailored to mitigate the structural perturbations introduced by our attack strategy; it enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets (NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1) to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
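The flip-and-score loop described in this abstract can be illustrated with a toy sketch. The paper's GNN encoder, contrastive loss, and implicit estimator are not reproduced here: `embed` below is a hypothetical stand-in propagation model, and the estimator is approximated explicitly by re-embedding each candidate graph, so all names and choices are illustrative assumptions.

```python
import numpy as np

def embed(A, depth=2):
    # Toy "model": mean-pooled propagation of degree features, a
    # stand-in for a GNN encoder trained with a contrastive loss.
    h = A.sum(1, keepdims=True)
    Ah = A + np.eye(len(A))
    norm = Ah / Ah.sum(1, keepdims=True)
    for _ in range(depth):
        h = norm @ h
    return h.mean(0)

def topk_flip_attack(A, k=3):
    # Score every candidate flip by how far it moves the graph embedding
    # (an explicit surrogate for the paper's implicit estimator), then
    # flip the k highest-scoring edges within the budget.
    base = embed(A)
    n = len(A)
    scores = []
    for i in range(n):
        for j in range(i + 1, n):
            B = A.copy()
            B[i, j] = B[j, i] = 1 - B[i, j]
            scores.append((np.linalg.norm(embed(B) - base), i, j))
    scores.sort(reverse=True)
    out = A.copy()
    for _, i, j in scores[:k]:
        out[i, j] = out[j, i] = 1 - out[i, j]
    return out
```

With a budget of k, exactly k node pairs are flipped, so the perturbed adjacency differs from the original in 2k symmetric entries.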
Funding: funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2025-02-01296).
Abstract: Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between being too shallow and over-deep while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
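The superpixel-to-graph step above can be sketched independently of any particular backbone: given a SLIC-style label map, two regions become neighbours in the region-adjacency graph when their pixels touch. The function below is a minimal illustration under 4-connectivity, not the paper's implementation.

```python
import numpy as np

def region_adjacency(labels):
    # Build the edge list of a region-adjacency graph from a superpixel
    # label map: two superpixels are connected if any of their pixels
    # are 4-neighbours across the region border.
    edges = set()
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):   # right and down neighbours
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[ny, nx] != labels[y, x]:
                    a, b = sorted((int(labels[y, x]), int(labels[ny, nx])))
                    edges.add((a, b))
    return sorted(edges)
```

In the full pipeline each node would additionally carry a CNN feature vector; here only the connectivity is shown.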
Abstract: This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to a lack of validation, interpretability concerns, and real-world deployment barriers.
Abstract: Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces the Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance on other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
Funding: funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
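The Mel-spectrogram front end mentioned above is standard and can be sketched from scratch. This minimal NumPy version (triangular filters evenly spaced on the mel scale, applied to log frame power) approximates what libraries such as librosa provide; it is not the authors' exact pipeline, and the window, hop, and filter counts are illustrative assumptions.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters with centers evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / (c - l)   # rising edge
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / (r - c)   # falling edge
    return fb

def mel_spectrogram(signal, sr, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window it, take power spectra, project onto
    # the mel filterbank, and return the log-compressed result.
    fb = mel_filterbank(n_mels, n_fft, sr)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * np.hanning(n_fft)
        power = np.abs(np.fft.rfft(frame)) ** 2
        frames.append(fb @ power)
    return np.log(np.array(frames).T + 1e-10)  # shape: (n_mels, n_frames)
```

The resulting (n_mels, n_frames) array is what a 2-D CNN classifier would consume as an "image".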
Funding: funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2025-02-01295).
Abstract: Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis by processing MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. The model demonstrates clear separation between AD progression stages according to the confusion matrix analysis. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
Funding: supported by the National Natural Science Foundation of China, No. 30872506 and No. 81072026.
Abstract: Accompanying rapid developments in hepatic surgery, the number of surgeries and identifications of histological types of primary hepatic space-occupying lesions (PHSOLs) have increased dramatically. This has led to many changes in the surgicopathological spectrum of PHSOLs, and has contributed to a theoretical basis for modern hepatic surgery and oncological pathology. Between 1982 and 2009 at the Eastern Hepatobiliary Surgery Hospital (EHBH) in Shanghai, 31,901 patients underwent surgery and were diagnosed as having a PHSOL. In this paper, we present an analysis of the PHSOL cases at the EHBH for this time period, along with results from a systematic literature review. We describe a surgicopathological spectrum comprising more than 100 types of PHSOLs that can be stratified into three types: tumor-like, benign, and malignant. We also stratified the PHSOLs into six subtypes derived from hepatocytes; cholangiocytes; vascular, lymphoid and hemopoietic tissues; muscular, fibrous and adipose tissues; neural and neuroendocrine tissues; and miscellaneous tissues. The present study provides a new classification system that can be used as a current reference for clinicians and pathologists to make correct diagnoses and differential diagnoses among various PHSOLs.
Abstract: AIM: To establish a computed tomography (CT)-morphological classification for hepatic alveolar echinococcosis. METHODS: The CT morphology of hepatic lesions in 228 patients with confirmed alveolar echinococcosis (AE) drawn from the Echinococcus Databank of the University Hospital of Ulm was reviewed retrospectively. For this purpose, CT datasets of combined positron emission tomography (PET)-CT examinations were evaluated. The diagnosis of AE was made in patients with unequivocal seropositivity; positive histological findings following diagnostic puncture or partial resection of the liver; and/or findings typical for AE at ultrasonography, CT, magnetic resonance imaging, or PET-CT. The CT-morphological findings were grouped into the new classification scheme. RESULTS: Within the classification, a lesion was assigned to one of five “primary morphologies” as well as to one of six “patterns of calcification”. Primary morphology and pattern of calcification are first considered separately and then in combination, whereas primary morphology V is not further characterized by a pattern of calcification. Based on the five primary morphologies, further descriptive sub-criteria were appended to types I-III. An analysis of the calcification pattern in relation to the primary morphology revealed the exclusive association of central calcification with type IV primary morphology. Similarly, certain calcification patterns exhibited a clear predominance for other primary morphologies, which underscores the delimitation of the individual primary morphological types from each other. These relationships in terms of calcification patterns extend into the primary morphological sub-criteria, demonstrating the clear subordination of those criteria. CONCLUSION: The proposed CT-morphological classification (EMUC-CT) is intended to facilitate the recognition and interpretation of lesions in hepatic alveolar echinococcosis. This could help interpret different clinical courses better and should improve the comparability of CT findings in scientific studies.
Funding: the project was supported by the National Natural Science Foundation of China under Grant No. 6J3433050 and the Natural Science Foundation of Xuzhou Normal University (Key Project) under Grant No. 03XLA04.
Abstract: In this paper the entanglement of pure 3-qubit states is discussed. The local unitary (LU) polynomial invariants that are closely related to the canonical forms are constructed, and the relations among the coefficients of the canonical forms are given. Then the stochastic local operations and classical communication (SLOCC) classification of the states is discussed on the basis of the canonical forms, and the symmetric canonical form of the states without 3-tangle is discussed. Finally, we give the relation between the LU polynomial invariants and the SLOCC classification.
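The 3-tangle that separates the SLOCC classes mentioned above can be computed directly from the amplitude tensor via Cayley's hyperdeterminant, tau = 4|d1 - 2*d2 + 4*d3|. The sketch below follows the standard Coffman-Kundu-Wootters construction rather than anything specific to this paper.

```python
import numpy as np

def three_tangle(amplitudes):
    # Residual entanglement (3-tangle) of a pure 3-qubit state,
    # tau = 4 |d1 - 2*d2 + 4*d3|, the modulus of Cayley's
    # hyperdeterminant of the amplitude tensor a[i, j, k].
    a = np.asarray(amplitudes, dtype=complex).reshape(2, 2, 2)
    d1 = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
        + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    d2 = (a[0,0,0]*a[1,1,1]*a[0,1,1]*a[1,0,0]
        + a[0,0,0]*a[1,1,1]*a[1,0,1]*a[0,1,0]
        + a[0,0,0]*a[1,1,1]*a[1,1,0]*a[0,0,1]
        + a[0,1,1]*a[1,0,0]*a[1,0,1]*a[0,1,0]
        + a[0,1,1]*a[1,0,0]*a[1,1,0]*a[0,0,1]
        + a[1,0,1]*a[0,1,0]*a[1,1,0]*a[0,0,1])
    d3 = (a[0,0,0]*a[1,1,0]*a[1,0,1]*a[0,1,1]
        + a[1,1,1]*a[0,0,1]*a[0,1,0]*a[1,0,0])
    return 4.0 * abs(d1 - 2*d2 + 4*d3)
```

The GHZ state (|000> + |111>)/sqrt(2) has 3-tangle 1, while the W state (|001> + |010> + |100>)/sqrt(3) has 3-tangle 0, which is exactly the invariant that distinguishes the two genuinely tripartite SLOCC classes.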
Funding: granted by the National Natural Science Foundation of China (Grant No. 41720104009), the China Geology Survey Project (Grant No. DD20160023-01), and the Foundation of MLR (Grant No. 201511022).
Abstract: The Dingqing ophiolite is located in the eastern segment of the Bangong-Nujiang suture zone. This suture zone trends W–E, parallel to the Yarlung–Zangbo suture zone, and is a strategic area for exploring chromite deposits in China. The Dingqing ophiolite is distributed in a near SE-NW direction. According to its spatial distribution, the Dingqing ophiolite is subdivided into two massifs, the East and the West massifs, and covers an area of nearly 600 km2. The ophiolite is composed of peridotite, pyroxenite, gabbro, diabase, basalt, plagiogranite, and chert (Fig. 1). Peridotite is the main lithology, covering about 90% of the total area of the Dingqing ophiolite, and is dominated by harzburgite with small amounts of dunite. The Dingqing harzburgite displays different textures, such as massive, taxitic, oriented, and spherulitic textures (Fig. 2d-i). These four types of harzburgite occur in both the East and West massifs, especially in the Laraka area of the eastern part of the East massif. Dunites have different occurrences in the field outcrops, such as lenticular or strip-shaped, thin-shell, and agglomerate varieties (Fig. 2a-c). On the basis of detailed field work, we have discovered 83 chromitite bodies, including 27 in the East massif and 56 in the West massif. According to the occurrence scale and quantity of the chromitite bodies, we have identified four prospecting areas, namely Laraka, Latanguo, Langda, and Nazona. Chromitites in the Dingqing ophiolite show different textures, including massive, disseminated, veined, and disseminated-banded textures (Fig. 3). On the basis of the Cr# (= Cr/(Cr+Al)×100) of chromite, we have classified the Dingqing chromitites into high-Cr, medium-high-Cr, medium-Cr, and low-Cr types (Figs. 4, 5). Among them, the Cr# of the low-Cr chromitite is extremely low, ranging from 9.23 to 14.01 with an average of 11.89, and its TiO2 content is 0.00% to 0.04% with an average of 0.01%, which may represent a new type of chromitite. These different types of chromitites host different assemblages of mineral inclusions. The inclusions in high-Cr chromitite are mainly clinopyroxene with a small amount of olivine; those in medium-high-Cr chromitite are mainly amphibole with small amounts of clinopyroxene and phlogopite; low-Cr chromitite rarely develops mineral inclusions, although micron-sized clinopyroxene inclusions are common in olivine, the gangue mineral. These different types of chromitite bodies correspond to some extent with their field occurrence and may also constrain their genesis; this will be further developed in follow-up work.
Abstract: To explore the potential of conventional image processing techniques in the classification of cervical cancer cells, in this work a co-occurrence histogram method was employed for image feature extraction and an ensemble classifier was developed by combining the base classifiers, namely the artificial neural network (ANN), random forest (RF), and support vector machine (SVM), for image classification. The segmented pap-smear cell image dataset was constructed by the k-means clustering technique and used to evaluate the performance of the ensemble classifier formed by combining the base classifiers considered above. The result was also compared with that achieved by the individual base classifiers, as well as with classifiers trained on color, texture, and shape features. The maximum average classification accuracy of 93.44% was obtained when the ensemble classifier was applied and trained with co-occurrence histogram features, which indicates that the ensemble classifier trained with co-occurrence histogram features is more suitable and advantageous for the classification of cervical cancer cells.
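The combination of ANN, RF, and SVM outputs can be as simple as hard voting over predicted labels. The abstract does not specify the combination rule, so the majority-vote sketch below is only one plausible reading of how the ensemble might aggregate its base classifiers.

```python
import numpy as np

def majority_vote(*predictions):
    # Hard-voting ensemble: each base classifier (e.g. ANN, RF, SVM)
    # contributes one integer label per sample; the most frequent
    # label wins, with ties resolved toward the smallest label.
    P = np.stack(predictions)                 # (n_classifiers, n_samples)
    n_classes = P.max() + 1
    votes = np.apply_along_axis(np.bincount, 0, P, minlength=n_classes)
    return votes.argmax(0)                    # (n_samples,)
```

For probabilistic base classifiers, averaging class probabilities ("soft voting") is a common alternative that weights confident votes more heavily.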
Abstract: In the Saharan domain, the Tarfaya-Laayoune coastal basin developed on a stable passive margin, where sedimentation increases asymmetrically from east to west, reaching a sediment stack of about 14 kilometers. The morphology of the studied area corresponds to a vast plateau (hamada) with occasional major reliefs. A remote sensing approach has therefore been applied to find the best methods for reliable lithological mapping. Two supervised machine-learning classification methods (Artificial Neural Network and Spectral Information Divergence) were evaluated to select the most accurate classification for our lithofacies mapping. The latest geological maps and RGB images were used for pseudo-color groups to identify important areas and to collect the ROIs that serve as training samples for the classifications. The results obtained showed a clear distinction between the various formation units, and the ANN classification of the studied area was very close to the field reality. The ANN method is more accurate, with an overall accuracy of 92.56% and a Kappa coefficient of 0.9143.
Abstract: Urban tree species provide various essential ecosystem services in cities, such as regulating urban temperatures, reducing noise, capturing carbon, and mitigating the urban heat island effect. The quality of these services is influenced by species diversity, tree health, and the distribution and composition of trees. Traditionally, data on urban trees has been collected through field surveys and manual interpretation of remote sensing images. In this study, we evaluated the effectiveness of multispectral airborne laser scanning (ALS) data in classifying 24 common urban roadside tree species in Espoo, Finland. Tree crown structure information, intensity features, and spectral data were used for classification. Eight different machine learning algorithms were tested, with the extra trees (ET) algorithm performing the best, achieving an overall accuracy of 71.7% using multispectral LiDAR data. This result highlights that integrating structural and spectral information within a single framework can improve classification accuracy. Future research will focus on identifying the most important features for species classification and developing algorithms with greater efficiency and accuracy.
Funding: supported in part by the Six Talent Peaks Project in Jiangsu Province under Grant 013040315, in part by the China Textile Industry Federation Science and Technology Guidance Project under Grant 2017107, in part by the National Natural Science Foundation of China under Grant 31570714, and in part by the China Scholarship Council under Grant 202108320290.
Abstract: The cleanliness of seed cotton plays a critical role in the pre-treatment of cotton textiles, and the removal of impurities during the harvesting process directly determines the quality and market value of cotton textiles. By fusing band-combination optimization with deep learning, this study aims to achieve more efficient and accurate detection of film impurities in seed cotton on the production line. Applying hyperspectral imaging and a one-dimensional deep learning algorithm, we detect and classify impurities in seed cotton after harvest. The main categories detected include pure cotton, conveyor belt, film covering seed cotton, and film adhered to the conveyor belt. The proposed method achieves an impurity detection rate of 99.698%. To further establish the feasibility and practical application potential of this strategy, we compare our results against existing mainstream methods. In addition, the model shows excellent recognition performance on pseudo-color images of real samples. With a processing time of 11.764 μs per pixel on experimental data, it meets the speed requirements of real production lines while maintaining accuracy. This strategy provides an accurate and efficient method for removing impurities during cotton processing.
Funding: supported by the Research Grant of Kwangwoon University in 2024.
Abstract: Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-known tool for medical diagnosis, employing the classification of images to reveal coronary artery disease (CAD). The automatic classification of SPECT images has achieved near-optimal accuracy when using convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is done by a U-Net architecture that ensures effective denoising. Attenuation correction is implemented by a convolutional neural network model that can remove the attenuation affecting the feature-extraction process of classification. Finally, a novel multi-scale diluted convolution (MSDC) network is proposed. It merges the features extracted at different scales and makes the model learn features more efficiently. Three scales of filters of size 3×3 are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models, and the proposed method ensures better classification with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
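Merging features extracted at several scales, as the MSDC network does, can be illustrated in one dimension: one convolution per dilation rate, concatenated into a single feature vector. The paper uses 3×3 2-D filters inside a CNN; this NumPy sketch only conveys the multi-scale idea, and the function names and dilation rates are illustrative assumptions.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    # "Same"-padded 1-D cross-correlation with a dilated 3-tap kernel:
    # output[i] = w[0]*x[i-d] + w[1]*x[i] + w[2]*x[i+d].
    pad = dilation * (len(w) // 2)
    xp = np.pad(x, pad)
    return np.array([sum(w[k] * xp[i + k * dilation] for k in range(len(w)))
                     for i in range(len(x))])

def msdc_block(x, kernels, dilations=(1, 2, 4)):
    # Multi-scale block: apply one dilated convolution per scale and
    # concatenate the responses so later layers see all scales at once.
    return np.concatenate([dilated_conv1d(x, w, d)
                           for w, d in zip(kernels, dilations)])
```

Increasing the dilation rate widens the receptive field without adding parameters, which is why such blocks capture context at several scales cheaply.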
Abstract: Diagnosing cardiac diseases relies heavily on electrocardiogram (ECG) analysis, but detecting myocardial infarction-related arrhythmias remains challenging due to irregular heartbeats and signal variations. Despite advancements in machine learning, achieving both high accuracy and low computational cost for arrhythmia classification remains a critical issue. Computer-aided diagnosis systems can play a key role in early detection, reducing mortality rates associated with cardiac disorders. This study proposes a fully automated approach for ECG arrhythmia classification using deep learning and machine learning techniques to improve diagnostic accuracy while minimizing processing time. The methodology consists of three stages: 1) preprocessing, where ECG signals undergo noise reduction and feature extraction; 2) feature identification, where deep convolutional neural network (CNN) blocks, combined with data augmentation and transfer learning, extract key parameters; 3) classification, where a hybrid CNN-SVM model is employed for arrhythmia recognition. CNN-extracted features were fed into a binary support vector machine (SVM) classifier, and model performance was assessed using five-fold cross-validation. Experimental findings demonstrated that the CNN2 model achieved 85.52% accuracy, while the hybrid CNN2-SVM approach significantly improved accuracy to 97.33%, outperforming conventional methods. This model enhances classification efficiency while reducing computational complexity. The proposed approach bridges the gap between accuracy and processing speed in ECG arrhythmia classification, offering a promising solution for real-time clinical applications. Its superior performance compared to nonlinear classifiers highlights its potential for improving automated cardiac diagnosis.
Abstract: In the era of precision medicine, the classification of diabetes mellitus has evolved beyond the traditional categories. Various classification methods now account for a multitude of factors, including variations in specific genes, the type of β-cell impairment, the degree of insulin resistance, and clinical characteristics of metabolic profiles. Improved classification methods enable healthcare providers to formulate blood glucose management strategies more precisely. Applying these updated classification systems will assist clinicians in further optimising treatment plans, including targeted drug therapies, personalized dietary advice, and specific exercise plans. Ultimately, this will facilitate stricter blood glucose control, minimize the risks of hypoglycaemia and hyperglycaemia, and reduce long-term complications associated with diabetes.
Abstract: In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient’s anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the Deep AlexNet50 model for MRI classification with discriminative learning methods. Learning proceeds in three stages: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and trained with an augmented dataset; and in the third stage, AlexNet50 is trained with the augmented dataset. This method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University dataset (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers, such as Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW, have been used to compare the performance of the learning process. The HWBA-dataset registers the maximum classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
Funding: supported by the National Key Research and Development Program of China, No. 2023YFA1009500.
Abstract: With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. Classification methods based on GNNs can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertices by the arrival order of packets and establishes jump-order edges between vertices by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packets but also strengthens the relationship between client or server packets. Based on FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structural information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic: the encrypted-stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1 score of 94.54%.
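The FMG edge construction described above can be sketched from a packet-direction sequence alone. The exact jump-order rule is not fully specified in the abstract; the sketch below assumes an edge from the last packet of a burst to the first packet of each later burst with the same direction, which is only one plausible reading.

```python
def fmg_edges(directions):
    # Build Flow Mapping Graph edges from a packet-direction sequence
    # (+1 = client->server, -1 = server->client): sequential edges
    # follow arrival order; jump-order edges connect packets of the
    # same direction that sit in different bursts.
    seq = [(i, i + 1) for i in range(len(directions) - 1)]
    # Split the sequence into bursts of consecutive same-direction packets.
    bursts, cur = [], [0]
    for i in range(1, len(directions)):
        if directions[i] == directions[i - 1]:
            cur.append(i)
        else:
            bursts.append(cur)
            cur = [i]
    bursts.append(cur)
    jump = []
    for b in range(len(bursts) - 1):
        for c in range(b + 1, len(bursts)):
            if directions[bursts[b][0]] == directions[bursts[c][0]]:
                # Connect the last packet of the earlier burst to the
                # first packet of the later same-direction burst.
                jump.append((bursts[b][-1], bursts[c][0]))
    return seq, jump
```

The resulting vertex/edge lists would then be handed to a GCN-style graph classifier, with per-packet attributes (size, timing) as node features.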