Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
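As a rough illustration of the pipeline summarized above, the sketch below converts an audio clip into a log-scaled Mel-spectrogram with librosa and classifies it with a small CNN. The synthetic waveform, layer sizes, and two-class output are assumptions made for illustration, not the authors' configuration.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 22050
waveform = np.random.default_rng(0).normal(size=sr * 5).astype(np.float32)  # stand-in for a 5 s field recording
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)                               # log-scaled Mel-spectrogram

class FireSoundCNN(nn.Module):
    def __init__(self, n_classes=2):                                        # fire vs. non-fire (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                                                    # x: (batch, 1, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))

x = torch.tensor(mel_db).unsqueeze(0).unsqueeze(0)                          # add batch and channel dimensions
logits = FireSoundCNN()(x)
```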
This systematic review aims to comprehensively examine and compare deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification. Excluded were non-open-access publications, books, and non-English articles. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but considered during full-text screening based on dataset diversity, validation methods, and availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, Hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned over 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption post-2023. Many studies lacked external validation and were evaluated only on a few benchmark datasets, raising concerns about generalizability and dataset bias. Few studies addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to lack of validation, interpretability concerns, and real-world deployment barriers.
Honeycombing Lung (HCL) is a chronic lung condition marked by advanced fibrosis, resulting in enlarged air spaces with thick fibrotic walls, which are visible on Computed Tomography (CT) scans. Differentiating between normal lung tissue, honeycombing lungs, and Ground Glass Opacity (GGO) in CT images is often challenging for radiologists and may lead to misinterpretations. Although earlier studies have proposed models to detect and classify HCL, many faced limitations such as high computational demands, lower accuracy, and difficulty distinguishing between HCL and GGO. CT images are highly effective for lung classification due to their high resolution, 3D visualization, and sensitivity to tissue density variations. This study introduces Honeycombing Lungs Network (HCL Net), a novel classification algorithm inspired by ResNet50V2 and enhanced to overcome the shortcomings of previous approaches. HCL Net incorporates additional residual blocks, refined preprocessing techniques, and selective parameter tuning to improve classification performance. The dataset, sourced from the University Malaya Medical Centre (UMMC) and verified by expert radiologists, consists of CT images of normal, honeycombing, and GGO lungs. Experimental evaluations across five assessments demonstrated that HCL Net achieved an outstanding classification accuracy of approximately 99.97%. It also recorded strong performance in other metrics, achieving 93% precision, 100% sensitivity, 89% specificity, and an AUC-ROC score of 97%. Comparative analysis with baseline feature engineering methods confirmed the superior efficacy of HCL Net. The model significantly reduces misclassification, particularly between honeycombing and GGO lungs, enhancing diagnostic precision and reliability in lung image analysis.
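For readers unfamiliar with the building block that HCL Net is said to add on top of a ResNet50V2-style backbone, the sketch below shows a generic pre-activation residual block. The channel count and placement are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """Pre-activation (ResNet-V2 style) residual block: BN-ReLU-Conv twice, plus identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
        )

    def forward(self, x):
        return x + self.block(x)   # identity shortcut around the convolutional stack

out = PreActResidualBlock(64)(torch.randn(1, 64, 32, 32))   # shape is preserved: (1, 64, 32, 32)
```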
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates that EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between overly shallow and overly deep configurations while avoiding over-smoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
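A hedged sketch of the graph stage only: superpixel nodes carrying 1280-dimensional EfficientNet-B0 embeddings are classified with a stack of GAT layers and a log-softmax head, as the abstract outlines. The hidden width, number of attention heads, and mean pooling are assumptions; the SLIC and feature-extraction steps are omitted here.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv, global_mean_pool

class LesionGAT(torch.nn.Module):
    def __init__(self, in_dim=1280, hidden=128, n_classes=7, n_layers=5):
        super().__init__()
        dims = [in_dim] + [hidden] * (n_layers - 1)
        self.convs = torch.nn.ModuleList(
            [GATConv(d, hidden, heads=4, concat=False) for d in dims]
        )
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        # x: (num_superpixels, 1280); edge_index: (2, num_edges); batch: graph id per node
        for conv in self.convs:                     # multi-hop attention aggregation
            x = F.elu(conv(x, edge_index))
        g = global_mean_pool(x, batch)              # one vector per lesion graph
        return F.log_softmax(self.head(g), dim=-1)  # seven-class log-probabilities
```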
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a Hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis by processing MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. Confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
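A minimal sketch of the CLAHE preprocessing step named above, using OpenCV; the clip limit, tile grid, and the synthetic stand-in slice are assumed values, and the ESRGAN super-resolution stage is not shown.

```python
import cv2
import numpy as np

slice_8bit = (np.random.default_rng(0).random((256, 256)) * 255).astype(np.uint8)  # stand-in MRI slice
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # assumed clip limit and tile grid
enhanced = clahe.apply(slice_8bit)                            # contrast-limited local histogram equalization
```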
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an innovative unsupervised attack method for graph classification, which operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastic augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model’s performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets: NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1, to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves the highest accuracy recovery of 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
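A schematic illustration of the final perturbation step only: given a score for every candidate edge modification, the K highest-scoring undirected entries of an adjacency matrix are flipped. How the scores are produced (the contrastive loss and the implicit estimator) is the paper's contribution and is simply assumed as an input here.

```python
import torch

def flip_topk_edges(adj, scores, k=3):
    """adj, scores: (n, n) tensors; flips the k highest-scoring undirected edge slots."""
    n = adj.size(0)
    triu = torch.triu(torch.ones(n, n), diagonal=1).bool()          # consider each pair once
    flat = scores.masked_fill(~triu, float("-inf")).flatten()
    topk = flat.topk(k).indices
    rows = torch.div(topk, n, rounding_mode="floor")
    cols = topk % n
    adj = adj.clone()
    adj[rows, cols] = 1 - adj[rows, cols]   # add the edge if absent, remove it if present
    adj[cols, rows] = adj[rows, cols]       # keep the graph undirected
    return adj

# Usage: perturbed = flip_topk_edges(torch.zeros(5, 5), torch.rand(5, 5), k=3)
```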
Urban tree species provide various essential ecosystem services in cities, such as regulating urban temperatures, reducing noise, capturing carbon, and mitigating the urban heat island effect. The quality of these services is influenced by species diversity, tree health, and the distribution and the composition of trees. Traditionally, data on urban trees has been collected through field surveys and manual interpretation of remote sensing images. In this study, we evaluated the effectiveness of multispectral airborne laser scanning (ALS) data in classifying 24 common urban roadside tree species in Espoo, Finland. Tree crown structure information, intensity features, and spectral data were used for classification. Eight different machine learning algorithms were tested, with the extra trees (ET) algorithm performing the best, achieving an overall accuracy of 71.7% using multispectral LiDAR data. This result highlights that integrating structural and spectral information within a single framework can improve the classification accuracy. Future research will focus on identifying the most important features for species classification and developing algorithms with greater efficiency and accuracy.
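A minimal sketch of the best-performing step reported above: an extra trees classifier trained on fused crown-structure, intensity, and spectral features. The feature matrix and hyperparameters below are placeholders, not the study's data or settings.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))        # stand-in for fused structural + intensity + spectral features per tree
y = rng.integers(0, 24, size=500)     # 24 roadside species labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = ExtraTreesClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```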
The integration of image analysis through deep learning (DL) into rock classification represents a significant leap forward in geological research. While traditional methods remain invaluable for their expertise and historical context, DL offers a powerful complement by enhancing the speed, objectivity, and precision of the classification process. This research explores the significance of image data augmentation techniques in optimizing the performance of convolutional neural networks (CNNs) for geological image analysis, particularly in the classification of igneous, metamorphic, and sedimentary rock types from rock thin section (RTS) images. This study primarily focuses on classic image augmentation techniques and evaluates their impact on model accuracy and precision. Results demonstrate that augmentation techniques like Equalize significantly enhance the model's classification capabilities, achieving an F1-Score of 0.9869 for igneous rocks, 0.9884 for metamorphic rocks, and 0.9929 for sedimentary rocks, representing improvements over the baseline results. Moreover, the weighted average F1-Score across all classes and techniques is 0.9886, indicating an enhancement. Conversely, methods like Distort lead to decreased accuracy and F1-Score, with an F1-Score of 0.949 for igneous rocks, 0.954 for metamorphic rocks, and 0.9416 for sedimentary rocks, degrading performance relative to the baseline. The study underscores the practicality of image data augmentation in geological image classification and advocates for the adoption of DL methods in this domain for automation and improved results. The findings of this study can benefit various fields, including remote sensing, mineral exploration, and environmental monitoring, by enhancing the accuracy of geological image analysis both for scientific research and industrial applications.
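A brief sketch of the Equalize augmentation highlighted above, applied with Pillow; the random array simply stands in for a rock thin-section photograph.

```python
import numpy as np
from PIL import Image, ImageOps

rgb = (np.random.default_rng(0).random((256, 256, 3)) * 255).astype("uint8")  # stand-in RTS image
img = Image.fromarray(rgb)
equalized = ImageOps.equalize(img)   # the Equalize augmentation: per-channel histogram equalization
```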
Automatic detection of Leukemia or blood cancer is one of the most challenging tasks that need to be addressed in the healthcare system. Analysis of white blood cells (WBCs) in blood or bone marrow microscopic slide images plays a crucial part in early identification and helps medical experts. For Acute Lymphocytic Leukemia (ALL), the most relevant part of the blood or marrow must be analyzed by experts before the disease spreads through the whole body and the condition worsens. Researchers have done a great deal of work in this field, and a few literature reviews have been published that present a comprehensive analysis of various artificial intelligence-based techniques, such as machine and deep learning, for ALL detection. The systematic review in this article was conducted under the PRISMA guidelines and presents the most recent advancements in the field. Studies collected from online databases such as Google Scholar, ScienceDirect, and PubMed were broadly examined, their image segmentation techniques were studied, and the models were categorized as image processing-based, traditional machine and deep learning-based, and advanced deep learning-based. Traditional Convolutional Neural Network (CNN)-based models and the recent CNN advancements used for classifying ALL into its subtypes are then presented. A critical analysis of the existing methods is provided to offer clarity on the current state of the field. Finally, the paper concludes with insights and suggestions for future research, aiming to guide new researchers in the development of advanced automated systems for detecting life-threatening diseases.
The cleanliness of seed cotton plays a critical role in the pre-treatment of cotton textiles, and the removal of impurities during the harvesting process directly determines the quality and market value of cotton textiles. By fusing band combination optimization with deep learning, this study aims to achieve more efficient and accurate detection of film impurities in seed cotton on the production line. By applying hyperspectral imaging and a one-dimensional deep learning algorithm, we detect and classify impurities in seed cotton after harvest. The main categories detected include pure cotton, conveyor belt, film covering seed cotton, and film adhered to the conveyor belt. The proposed method achieves an impurity detection rate of 99.698%. To further ensure the feasibility and practical application potential of this strategy, we compare our results against existing mainstream methods. In addition, the model shows excellent recognition performance on pseudo-color images of real samples. With a processing time of 11.764 μs per pixel on experimental data, it meets the speed requirements of real production lines while maintaining accuracy. This strategy provides an accurate and efficient method for removing impurities during cotton processing.
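An illustrative sketch of a per-pixel one-dimensional CNN of the general kind described above: each hyperspectral pixel is treated as a 1-D spectrum and assigned to one of the four categories named in the abstract. The band count and layer sizes are assumptions, not the study's architecture.

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    def __init__(self, n_bands=224, n_classes=4):   # classes: cotton, belt, film-on-cotton, film-on-belt
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, spectra):                      # spectra: (batch, 1, n_bands)
        return self.net(spectra)

logits = Spectral1DCNN()(torch.randn(8, 1, 224))     # eight stand-in pixels
```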
Diagnosing cardiac diseases relies heavily on electrocardiogram (ECG) analysis, but detecting myocardial infarction-related arrhythmias remains challenging due to irregular heartbeats and signal variations. Despite advancements in machine learning, achieving both high accuracy and low computational cost for arrhythmia classification remains a critical issue. Computer-aided diagnosis systems can play a key role in early detection, reducing mortality rates associated with cardiac disorders. This study proposes a fully automated approach for ECG arrhythmia classification using deep learning and machine learning techniques to improve diagnostic accuracy while minimizing processing time. The methodology consists of three stages: 1) preprocessing, where ECG signals undergo noise reduction and feature extraction; 2) feature identification, where deep convolutional neural network (CNN) blocks, combined with data augmentation and transfer learning, extract key parameters; 3) classification, where a hybrid CNN-SVM model is employed for arrhythmia recognition. CNN-extracted features were fed into a binary support vector machine (SVM) classifier, and model performance was assessed using five-fold cross-validation. Experimental findings demonstrated that the CNN2 model achieved 85.52% accuracy, while the hybrid CNN2-SVM approach significantly improved accuracy to 97.33%, outperforming conventional methods. This model enhances classification efficiency while reducing computational complexity. The proposed approach bridges the gap between accuracy and processing speed in ECG arrhythmia classification, offering a promising solution for real-time clinical applications. Its superior performance compared to nonlinear classifiers highlights its potential for improving automated cardiac diagnosis.
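A hedged sketch of the hybrid step described above: features taken from a CNN are passed to a binary SVM and scored with five-fold cross-validation. The feature dimension and data are synthetic stand-ins, not the paper's CNN2 descriptors.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(1000, 128))   # stand-in for CNN-extracted descriptors
labels = rng.integers(0, 2, size=1000)        # arrhythmia vs. normal (binary)

svm = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(svm, cnn_features, labels, cv=5)   # five-fold CV, as in the study
print("mean CV accuracy:", scores.mean())
```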
In the era of precision medicine, the classification of diabetes mellitus has evolved beyond the traditional categories. Various classification methods now account for a multitude of factors, including variations in specific genes, type of β-cell impairment, degree of insulin resistance, and clinical characteristics of metabolic profiles. Improved classification methods enable healthcare providers to formulate blood glucose management strategies more precisely. Applying these updated classification systems will assist clinicians in further optimising treatment plans, including targeted drug therapies, personalized dietary advice, and specific exercise plans. Ultimately, this will facilitate stricter blood glucose control, minimize the risks of hypoglycaemia and hyperglycaemia, and reduce long-term complications associated with diabetes.
With the widespread use of upper gastrointestinal endoscopy, more and more gastric polyps (GPs) are being detected. Traditional management strategies often rely on histopathologic examination, which can be time-consuming and may not guide immediate clinical decisions. This paper aims to introduce a novel classification system for GPs based on their potential risk of malignant transformation, categorizing them as "good", "bad", and "ugly". A review of the literature and clinical case analysis were conducted to explore the clinical implications, management strategies, and the system's application in endoscopic practice. Good polyps, mainly including fundic gland polyps and inflammatory fibrous polyps, have a low risk of malignancy and typically require minimal or no intervention. Bad polyps, mainly including hyperplastic polyps and adenomas, pose an intermediate risk of malignancy, necessitating closer monitoring or removal. Ugly polyps, mainly including type 3 neuroendocrine tumors and early gastric cancer, indicate a high potential for malignancy and require urgent and comprehensive treatment. The new classification system provides a simplified and practical framework for diagnosing and managing GPs, improving diagnostic accuracy, guiding individualized treatment, and promoting advancements in endoscopic techniques. Despite some challenges, such as the risk of misclassification due to similar endoscopic appearances, this system is essential for the standardized management of GPs. It also lays the foundation for future research into biomarkers and the development of personalized medicine.
Lunar impact glasses have been identified as crucial indicators of geochemical information regarding their source regions. Impact glasses can be categorized as either local or exotic. Those preserving geochemical signatures matching local lithologies (e.g., mare basalts or their single minerals) or regolith bulk soil compositions are classified as “local”. Otherwise, they could be defined as “exotic”. The analysis of exotic glasses provides the opportunity to explore previously unsampled lunar areas. This study focuses on the identification of exotic glasses within the Chang’e-5 (CE-5) soil sample by analyzing the trace elements of 28 impact glasses with distinct major element compositions in comparison with the CE-5 bulk soil. However, the results indicate that 18 of the analyzed glasses exhibit trace element compositions comparable to those of the local CE-5 materials. In particular, some of them could match the local single mineral component in major and trace elements, suggesting a local origin. Therefore, it is recommended that the investigation be expanded from using major elements to including nonvolatile trace elements, with a view to enhancing our understanding of the provenance of lunar impact glasses. To achieve a more accurate identification of exotic glasses within the CE-5 soil sample, a novel classification plot of Mg# versus La is proposed. The remaining 10 glasses, which exhibit diverse trace element variations, were identified as exotic. A comparative analysis of their chemical characteristics with remote sensing data indicates that they may have originated from the Aristarchus, Mairan, Sharp, or Pythagoras craters. This study elucidates the classification and possible provenance of exotic materials within the CE-5 soil sample, thereby providing constraints for the enhanced identification of local and exotic components at the CE-5 landing site.
Preservation of crops depends on early and accurate detection of pests, as they cause several diseases that decrease crop production and quality. Several deep-learning techniques have been applied to overcome the issue of pest detection on crops. We have developed the YOLOCSP-PEST model for pest localization and classification. With the Cross Stage Partial Network (CSPNET) backbone, the proposed model is a modified version of You Only Look Once Version 7 (YOLOv7) that is intended primarily for pest localization and classification. Our proposed model gives exceptionally good results under conditions that are very challenging for comparable models, especially where the luminance and orientation of the images vary. It helps farmers working on their crops in distant areas to detect any infestation quickly and accurately, which supports both the quality and the quantity of the production yield. The model has been trained and tested on two datasets, namely the IP102 dataset and a local crop dataset, on both of which it has shown exceptional results. It gave us a mean average precision (mAP) of 88.40% along with a precision of 85.55% and a recall of 84.25% on the IP102 dataset, while giving a mAP of 97.18% on the local dataset along with a recall of 94.88% and a precision of 97.50%. These findings demonstrate that the proposed model is very effective in detecting real-life scenarios and can help improve both the quality and the quantity of the crop yield.
With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based classification methods can handle encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertexes by the arrival order of packets and establishes jump-order edges between vertexes by connecting packets in different bursts with the same direction. It not only reflects the time characteristics of the packets but also strengthens the relationship between client or server packets. Based on FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structure information of the top vertex in FMG. The TMC-GCN model is used to classify the encrypted traffic. The encrypted-stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the accuracy rate of the TMC-GCN model is 96.13%, the recall rate is 95.04%, and the F1 rate is 94.54%.
In this study, eight different varieties of maize seeds were used as the research objects. Eighty-one types of combined preprocessing were applied to the original spectra. Through comparison, Savitzky-Golay (SG)-multivariate scattering correction (MSC)-maximum-minimum normalization (MN) was identified as the optimal preprocessing technique. The competitive adaptive reweighted sampling (CARS), successive projections algorithm (SPA), and their combined methods were employed to extract feature wavelengths. Classification models based on back propagation (BP), support vector machine (SVM), random forest (RF), and partial least squares (PLS) were established using full-band data and feature wavelengths. Among all models, the (CARS-SPA)-BP model achieved the highest accuracy rate of 98.44%. This study offers novel insights and methodologies for the rapid and accurate identification of corn seeds as well as other crop seeds.
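A hedged sketch of the reported SG-MSC-MN preprocessing chain on a spectra matrix (rows are samples, columns are wavelengths). The Savitzky-Golay window and polynomial order, the synthetic data, and the use of the standard multiplicative scatter correction formulation for the MSC step are assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

def msc(spectra):
    ref = spectra.mean(axis=0)                      # mean spectrum as the reference
    out = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)    # fit s = slope * ref + intercept
        out[i] = (s - intercept) / slope            # remove additive and multiplicative scatter
    return out

def preprocess(spectra, window=11, polyorder=2):
    sg = savgol_filter(spectra, window, polyorder, axis=1)   # Savitzky-Golay smoothing
    corrected = msc(sg)                                      # scatter correction
    mins = corrected.min(axis=1, keepdims=True)
    maxs = corrected.max(axis=1, keepdims=True)
    return (corrected - mins) / (maxs - mins)                # maximum-minimum normalization to [0, 1]

spectra = np.random.default_rng(0).normal(size=(8, 200)) + 10   # stand-in spectra for 8 seeds
print(preprocess(spectra).shape)
```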
Purpose: Interdisciplinary research has become a critical approach to addressing complex societal, economic, technological, and environmental challenges, driving innovation and integrating scientific knowledge. While interdisciplinarity indicators are widely used to evaluate research performance, the impact of classification granularity on these assessments remains underexplored. Design/methodology/approach: This study investigates how different levels of classification granularity (macro, meso, and micro) affect the evaluation of interdisciplinarity in research institutes. Using a dataset of 262 institutes from four major German non-university organizations (FHG, HGF, MPG, WGL) from 2018 to 2022, we examine inconsistencies in interdisciplinarity across levels, analyze ranking changes, and explore the influence of institutional fields and research focus (applied vs. basic). Findings: Our findings reveal significant inconsistencies in interdisciplinarity across classification levels, with rankings varying substantially. Notably, the Fraunhofer Society (FHG), which performs well at the macro level, experiences significant ranking declines at the meso and micro levels. Normalizing interdisciplinarity by research field confirmed that these declines persist. The research focus of institutes, whether applied, basic, or mixed, does not significantly explain the observed ranking dynamics. Research limitations: This study has only considered the publication-based dimension of institutional interdisciplinarity and has not explored other aspects. Practical implications: The findings provide insights for policymakers, research managers, and scholars to better interpret interdisciplinarity metrics and support interdisciplinary research effectively. Originality/value: This study underscores the critical role of classification granularity in interdisciplinarity assessment and emphasizes the need for standardized approaches to ensure robust and fair evaluations.
Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-known estimation tool for medical diagnosis, employing the classification of images to show conditions in coronary artery disease (CAD). The automatic classification of SPECT images for different techniques has achieved near-optimal accuracy when using convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is performed by a U-Net architecture that ensures effective noise removal. Attenuation correction is implemented by a convolutional neural network model that removes the attenuation affecting the feature extraction process of classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed. It merges the features extracted at different scales and makes the model learn the features more efficiently. Three scales of filters with size 3×3 are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models, and the proposed method ensures better classification with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
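An illustrative sketch of a multi-scale dilated-convolution block in the spirit of the MSDC idea described above: three 3×3 branches with different dilation rates whose outputs are merged by a 1×1 convolution. The dilation rates and channel counts are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)   # merge the three scales

    def forward(self, x):
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats))

y = MultiScaleDilatedBlock(1, 32)(torch.randn(2, 1, 64, 64))   # spatial size is preserved
```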
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient’s anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the Deep AlexNet50 model for MRI classification with discriminative learning methods. There are three stages of learning: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and the model is trained with an augmented dataset; and in the third stage, AlexNet50 is trained further with the augmented dataset. This method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University (SMU-dataset), and the National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers, such as Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW, have been used to compare the performance of the learning process. The HWBA-dataset registers the maximum classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
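A hedged sketch of the staged idea described above: after an initial training pass, early layers of the backbone are frozen and only the remaining parameters continue training on augmented data. The torchvision AlexNet is used here only as a stand-in for the paper's "AlexNet50" model, and the split point and optimizer settings are illustrative assumptions; weights=None keeps the sketch self-contained (pretrained weights would normally be loaded).

```python
import torch
from torchvision import models

model = models.alexnet(weights=None)           # stand-in backbone for the staged training idea
for param in model.features.parameters():      # stage 2: freeze the convolutional feature layers
    param.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# ...continue training only the unfrozen layers on the augmented dataset...
```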
文摘The cleanliness of seed cotton plays a critical role in the pre-treatment of cotton textiles,and the removal of impurity during the harvesting process directly determines the quality and market value of cotton textiles.By fusing band combination optimization with deep learning,this study aims to achieve more efficient and accurate detection of film impurities in seed cotton on the production line.By applying hyperspectral imaging and a one-dimensional deep learning algorithm,we detect and classify impurities in seed cotton after harvest.The main categories detected include pure cotton,conveyor belt,film covering seed cotton,and film adhered to the conveyor belt.The proposed method achieves an impurity detection rate of 99.698%.To further ensure the feasibility and practical application potential of this strategy,we compare our results against existing mainstream methods.In addition,the model shows excellent recognition performance on pseudo-color images of real samples.With a processing time of 11.764μs per pixel from experimental data,it shows a much improved speed requirement while maintaining the accuracy of real production lines.This strategy provides an accurate and efficient method for removing impurities during cotton processing.
文摘Diagnosing cardiac diseases relies heavily on electrocardiogram(ECG)analysis,but detecting myocardial infarction-related arrhythmias remains challenging due to irregular heartbeats and signal variations.Despite advancements in machine learning,achieving both high accuracy and low computational cost for arrhythmia classification remains a critical issue.Computer-aided diagnosis systems can play a key role in early detection,reducing mortality rates associated with cardiac disorders.This study proposes a fully automated approach for ECG arrhythmia classification using deep learning and machine learning techniques to improve diagnostic accuracy while minimizing processing time.The methodology consists of three stages:1)preprocessing,where ECG signals undergo noise reduction and feature extraction;2)feature Identification,where deep convolutional neural network(CNN)blocks,combined with data augmentation and transfer learning,extract key parameters;3)classification,where a hybrid CNN-SVM model is employed for arrhythmia recognition.CNN-extracted features were fed into a binary support vector machine(SVM)classifier,and model performance was assessed using five-fold cross-validation.Experimental findings demonstrated that the CNN2 model achieved 85.52%accuracy,while the hybrid CNN2-SVM approach significantly improved accuracy to 97.33%,outperforming conventional methods.This model enhances classification efficiency while reducing computational complexity.The proposed approach bridges the gap between accuracy and processing speed in ECG arrhythmia classification,offering a promising solution for real-time clinical applications.Its superior performance compared to nonlinear classifiers highlights its potential for improving automated cardiac diagnosis.
文摘In the era of precision medicine,the classification of diabetes mellitus has evolved beyond the traditional categories.Various classification methods now account for a multitude of factors,including variations in specific genes,type ofβ-cell impairment,degree of insulin resistance,and clinical characteristics of metabolic profiles.Improved classification methods enable healthcare providers to formulate blood glucose management strategies more precisely.Applying these updated classification systems,will assist clinicians in further optimising treatment plans,including targeted drug therapies,personalized dietary advice,and specific exercise plans.Ultimately,this will facilitate stricter blood glucose control,minimize the risks of hypoglycaemia and hyperglycaemia,and reduce long-term complications associated with diabetes.
Abstract: With the widespread use of upper gastrointestinal endoscopy, more and more gastric polyps (GPs) are being detected. Traditional management strategies often rely on histopathologic examination, which can be time-consuming and may not guide immediate clinical decisions. This paper introduces a novel classification system for GPs based on their potential risk of malignant transformation, categorizing them as "good", "bad", and "ugly". A review of the literature and clinical case analysis were conducted to explore the clinical implications, management strategies, and the system's application in endoscopic practice. Good polyps, mainly including fundic gland polyps and inflammatory fibrous polyps, have a low risk of malignancy and typically require minimal or no intervention. Bad polyps, mainly including hyperplastic polyps and adenomas, pose an intermediate risk of malignancy, necessitating closer monitoring or removal. Ugly polyps, mainly including type 3 neuroendocrine tumors and early gastric cancer, indicate a high potential for malignancy and require urgent and comprehensive treatment. The new classification system provides a simplified and practical framework for diagnosing and managing GPs, improving diagnostic accuracy, guiding individualized treatment, and promoting advancements in endoscopic techniques. Despite some challenges, such as the risk of misclassification due to similar endoscopic appearances, this system is essential for the standardized management of GPs. It also lays the foundation for future research into biomarkers and the development of personalized medicine.
Funding: Funded by the National Natural Science Foundation of China (Grant Nos. 42241103 and 62227901) and the Key Research Program of the Institute of Geology and Geophysics, Chinese Academy of Sciences (Grant Nos. IGGCAS-202101 and IGGCAS-202401).
Abstract: Lunar impact glasses have been identified as crucial indicators of geochemical information regarding their source regions. Impact glasses can be categorized as either local or exotic. Those preserving geochemical signatures matching local lithologies (e.g., mare basalts or their single minerals) or regolith bulk soil compositions are classified as "local"; otherwise, they are defined as "exotic". The analysis of exotic glasses provides an opportunity to explore previously unsampled lunar areas. This study focuses on the identification of exotic glasses within the Chang'e-5 (CE-5) soil sample by analyzing the trace elements of 28 impact glasses with distinct major element compositions in comparison with the CE-5 bulk soil. The results indicate that 18 of the analyzed glasses exhibit trace element compositions comparable to those of local CE-5 materials. In particular, some of them match the local single-mineral component in major and trace elements, suggesting a local origin. Therefore, it is recommended that the investigation be expanded from major elements to include nonvolatile trace elements, with a view to enhancing our understanding of the provenance of lunar impact glasses. To achieve a more accurate identification of exotic glasses within the CE-5 soil sample, a novel classification plot of Mg# versus La is proposed. The remaining 10 glasses, which exhibit diverse trace element variations, were identified as exotic. A comparative analysis of their chemical characteristics with remote sensing data indicates that they may have originated from the Aristarchus, Mairan, Sharp, or Pythagoras craters. This study elucidates the classification and possible provenance of exotic materials within the CE-5 soil sample, thereby providing constraints for the enhanced identification of local and exotic components at the CE-5 landing site.
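As a rough illustration of an Mg#-versus-La discrimination, the sketch below computes Mg# from MgO and FeO concentrations using the standard molar ratio 100·Mg/(Mg+Fe) and flags glasses falling outside an assumed local field. The oxide values and the field boundaries are placeholders, not the thresholds derived in the study:

# Minimal sketch: compute Mg# from oxide wt% and pair it with La (ppm) for a
# local-vs-exotic screening. Threshold values below are illustrative only.
M_MGO, M_FEO = 40.304, 71.844        # molar masses (g/mol)

def mg_number(mgo_wt, feo_wt):
    mg, fe = mgo_wt / M_MGO, feo_wt / M_FEO
    return 100.0 * mg / (mg + fe)

glasses = [
    {"name": "g01", "MgO": 9.5, "FeO": 22.0, "La": 12.0},   # hypothetical analyses
    {"name": "g02", "MgO": 14.0, "FeO": 10.0, "La": 35.0},
]

# Hypothetical local CE-5 field (placeholder bounds, not the paper's values).
LOCAL_MG_RANGE, LOCAL_LA_MAX = (30.0, 50.0), 25.0

for g in glasses:
    mg_no = mg_number(g["MgO"], g["FeO"])
    local = LOCAL_MG_RANGE[0] <= mg_no <= LOCAL_MG_RANGE[1] and g["La"] <= LOCAL_LA_MAX
    print(g["name"], round(mg_no, 1), "local" if local else "exotic")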
Funding: Supported by King Saud University, Riyadh, Saudi Arabia, through the Researchers Supporting Project under Grant RSPD2025R697.
Abstract: Crop preservation depends on early and accurate detection of pests, which cause diseases that reduce crop production and quality. Several deep-learning techniques have been applied to the problem of pest detection on crops. We have developed the YOLOCSP-PEST model for pest localization and classification. Built on the Cross Stage Partial Network (CSPNET) backbone, the proposed model is a modified version of You Only Look Once Version 7 (YOLOv7) intended primarily for pest localization and classification. It performs well under conditions that are challenging for comparable models, particularly poor luminance and varying image orientation. It helps farmers in remote areas detect infestations on their crops quickly and accurately, supporting both the quality and quantity of the production yield. The model was trained and tested on two datasets, the IP102 dataset and a local crop dataset, and showed strong results on both. It achieved a mean average precision (mAP) of 88.40%, a precision of 85.55%, and a recall of 84.25% on the IP102 dataset, and a mAP of 97.18%, a precision of 97.50%, and a recall of 94.88% on the local dataset. These findings demonstrate that the proposed model is effective in real-life scenarios and can help improve both the quality and quantity of crop yields.
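To make the reported detection metrics concrete, the snippet below sketches how a predicted box is matched against a ground-truth box via intersection-over-union (IoU), the criterion underlying mAP, precision, and recall in detection benchmarks such as IP102. The boxes and the 0.5 threshold are illustrative, not values from the study:

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a true positive when its IoU with a ground-truth box
# exceeds the chosen threshold (0.5 here); precision and recall then follow
# from counting true/false positives and missed ground truths over the dataset.
pred, gt = (50, 60, 150, 160), (55, 65, 145, 155)
print("IoU = %.2f -> %s" % (iou(pred, gt), "TP" if iou(pred, gt) >= 0.5 else "FP"))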
Funding: Supported by the National Key Research and Development Program of China (No. 2023YFA1009500).
Abstract: With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. Graph neural network (GNN)-based methods can handle encrypted traffic classification well; however, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology for graph convolutional network (GCN)-based classification, called the Flow Mapping Graph (FMG). FMG establishes sequential edges between vertices according to the arrival order of packets, and establishes jump-order edges between vertices by connecting packets of the same direction in different bursts. It not only reflects the temporal characteristics of packets but also strengthens the relationship between client or server packets. Based on the FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the characteristics and structural information of the top vertices in the FMG. The TMC-GCN model is used to classify the encrypted traffic: the encrypted-flow classification problem is transformed into a graph classification problem, which can effectively handle data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets (CICIOT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp), the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1-score of 94.54%.
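The following is a minimal sketch of how a graph in the spirit of the FMG could be assembled from a packet sequence: sequential edges follow arrival order, and jump-order edges link each packet to the next same-direction packet in a later burst. The burst segmentation, the toy packet list, and the exact linking rule are simplified assumptions, not the authors' construction:

import networkx as nx

# Each packet: (direction, burst_id); direction +1 = client->server, -1 = reverse.
packets = [(+1, 0), (+1, 0), (-1, 1), (-1, 1), (+1, 2), (-1, 3), (+1, 4)]

G = nx.DiGraph()
G.add_nodes_from(range(len(packets)))

# Sequential edges: consecutive packets by arrival order.
for i in range(len(packets) - 1):
    G.add_edge(i, i + 1, kind="sequential")

# Jump-order edges: link each packet to the next same-direction packet
# that lies in a later burst.
for i, (d_i, b_i) in enumerate(packets):
    for j in range(i + 1, len(packets)):
        d_j, b_j = packets[j]
        if d_j == d_i and b_j > b_i:
            G.add_edge(i, j, kind="jump")
            break

print(sorted(G.edges(data="kind")))

The resulting per-flow graphs are what a GCN would then classify, turning flow classification into graph classification as described above.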
Funding: Supported by the Science and Technology Development Plan Project of the Jilin Provincial Department of Science and Technology (No. 20220203112S) and the Jilin Provincial Department of Education Science and Technology Research Project (No. JJKH20210039KJ).
Abstract: In this study, eight different varieties of maize seeds were used as the research objects, and 81 combinations of preprocessing methods were applied to the original spectra. Through comparison, Savitzky-Golay (SG) smoothing combined with multivariate scattering correction (MSC) and maximum-minimum normalization (MN) was identified as the optimal preprocessing technique. The competitive adaptive reweighted sampling (CARS) method, the successive projections algorithm (SPA), and their combination were employed to extract feature wavelengths. Classification models based on back propagation (BP), support vector machine (SVM), random forest (RF), and partial least squares (PLS) were established using full-band data and feature wavelengths. Among all models, the (CARS-SPA)-BP model achieved the highest accuracy of 98.44%. This study offers novel insights and methodologies for the rapid and accurate identification of maize seeds as well as other crop seeds.
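A minimal sketch of the selected preprocessing chain is given below: Savitzky-Golay smoothing, scatter correction by regressing each spectrum on the mean spectrum (the usual way MSC is implemented), and min-max normalization. The window size, polynomial order, and synthetic spectra are illustrative assumptions:

import numpy as np
from scipy.signal import savgol_filter

def sg_msc_mn(spectra, window=11, polyorder=2):
    """Savitzky-Golay -> MSC (regression on mean spectrum) -> min-max normalization."""
    sg = savgol_filter(spectra, window_length=window, polyorder=polyorder, axis=1)
    ref = sg.mean(axis=0)                         # reference spectrum for MSC
    msc = np.empty_like(sg)
    for i, s in enumerate(sg):
        b, a = np.polyfit(ref, s, 1)              # fit s ~ a + b * ref
        msc[i] = (s - a) / b
    mn = (msc - msc.min(axis=1, keepdims=True)) / (
        msc.max(axis=1, keepdims=True) - msc.min(axis=1, keepdims=True))
    return mn

spectra = np.random.rand(8, 200)                  # 8 synthetic spectra, 200 bands
print(sg_msc_mn(spectra).shape)                   # (8, 200), values scaled to [0, 1]

Feature-wavelength selection (CARS, SPA, or both) would then be applied to the preprocessed matrix before fitting the BP, SVM, RF, or PLS classifiers.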
Abstract: Purpose: Interdisciplinary research has become a critical approach to addressing complex societal, economic, technological, and environmental challenges, driving innovation and integrating scientific knowledge. While interdisciplinarity indicators are widely used to evaluate research performance, the impact of classification granularity on these assessments remains underexplored. Design/methodology/approach: This study investigates how different levels of classification granularity (macro, meso, and micro) affect the evaluation of interdisciplinarity in research institutes. Using a dataset of 262 institutes from four major German non-university organizations (FHG, HGF, MPG, WGL) covering 2018 to 2022, we examine inconsistencies in interdisciplinarity across levels, analyze ranking changes, and explore the influence of institutional fields and research focus (applied vs. basic). Findings: Our findings reveal significant inconsistencies in interdisciplinarity across classification levels, with rankings varying substantially. Notably, the Fraunhofer Society (FHG), which performs well at the macro level, experiences significant ranking declines at the meso and micro levels. Normalizing interdisciplinarity by research field confirmed that these declines persist. The research focus of institutes, whether applied, basic, or mixed, does not significantly explain the observed ranking dynamics. Research limitations: This study considered only the publication-based dimension of institutional interdisciplinarity and did not explore other aspects. Practical implications: The findings provide insights for policymakers, research managers, and scholars to better interpret interdisciplinarity metrics and support interdisciplinary research effectively. Originality/value: This study underscores the critical role of classification granularity in interdisciplinarity assessment and emphasizes the need for standardized approaches to ensure robust and fair evaluations.
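For context, interdisciplinarity is often quantified with diversity indicators such as the Rao-Stirling index, D = Σ_{i≠j} p_i p_j d_ij, where p_i is the share of output in discipline i and d_ij is the distance between disciplines. The abstract does not state which indicator the study uses, so the sketch below is only an illustration of how the number of categories (granularity) enters such a measure:

import numpy as np

def rao_stirling(proportions, distance):
    """Rao-Stirling diversity: sum over i != j of p_i * p_j * d_ij."""
    p = np.asarray(proportions, dtype=float)
    d = np.asarray(distance, dtype=float)
    return float(p @ d @ p)  # diagonal of d is zero, so i == j terms vanish

# Illustrative example: the same institute profiled at two granularities.
p_macro = [0.6, 0.4]                       # two broad fields
d_macro = [[0.0, 1.0], [1.0, 0.0]]
p_micro = [0.3, 0.3, 0.2, 0.2]             # four narrower fields
d_micro = 1.0 - np.eye(4)                  # all distinct fields equally distant

print(rao_stirling(p_macro, d_macro))      # 0.48
print(rao_stirling(p_micro, d_micro))      # 0.74 -> granularity changes the score

The toy numbers show why the same institute can rank differently at macro, meso, and micro levels: the score depends on how finely the classification slices its output.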
Funding: Supported by the Research Grant of Kwangwoon University in 2024.
Abstract: Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a well-established diagnostic tool in which image classification is used to assess coronary artery disease (CAD). Automatic classification of SPECT images has achieved near-optimal accuracy with convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is performed by a U-Net architecture that ensures effective noise removal. Attenuation correction is implemented by a convolutional neural network model that removes the attenuation artifacts which would otherwise degrade feature extraction for classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed; it merges features extracted at different scales, allowing the model to learn them more efficiently. Three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture produces high-quality images with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models and achieves better classification, with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
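A minimal sketch of a multi-scale dilated convolution block of the kind described, with three 3×3 branches at different dilation rates merged by concatenation, is shown below. The dilation rates (1, 2, 3), channel counts, and input size are assumptions for illustration, not the paper's exact configuration:

import torch
import torch.nn as nn

class MSDCBlock(nn.Module):
    """Three parallel 3x3 convolutions at different dilation rates, merged by concatenation."""
    def __init__(self, in_ch=1, branch_ch=16):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, dilation=d, padding=d)
            for d in (1, 2, 3)        # assumed dilation rates
        ])
        self.fuse = nn.Conv2d(3 * branch_ch, branch_ch, kernel_size=1)

    def forward(self, x):
        merged = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(merged)

x = torch.randn(2, 1, 64, 64)          # e.g., denoised, attenuation-corrected SPECT slices
print(MSDCBlock()(x).shape)            # torch.Size([2, 16, 64, 64])

Because padding matches the dilation rate, all branches keep the spatial size, so their feature maps can be concatenated directly before the 1×1 fusion convolution.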
Abstract: In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient's anatomical and physiological structures, and it is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists interpret these images manually, which can be labor-intensive and time-consuming given the vast amount of data. To address this challenge, machine learning and deep learning approaches can be used to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the deep AlexNet50 model for MRI classification with discriminative learning methods. Learning proceeds in three stages: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and training continues on an augmented dataset; and in the third stage, the full AlexNet50 is fine-tuned on the augmented dataset. The method was evaluated on three publicly available MRI classification datasets: the Harvard whole brain atlas (HWBA) dataset, the School of Biomedical Engineering of Southern Medical University (SMU) dataset, and the National Institute of Neuroscience and Hospitals brain MRI (NINS) dataset. Various hyperparameter optimizers, including Adam, stochastic gradient descent (SGD), root mean square propagation (RMSProp), Adamax, and AdamW, were used to compare the performance of the learning process. The HWBA dataset yields the best classification performance. We evaluated the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
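The three-stage discriminative learning procedure can be pictured with the staged fine-tuning sketch below, which uses torchvision's AlexNet as a stand-in for the AlexNet50 model (whose exact architecture is not given in the abstract). The frozen layers, learning rates, class count, and augmentation choices are illustrative assumptions:

import torch.nn as nn
import torch.optim as optim
from torchvision import models, transforms

model = models.alexnet(weights=None)                     # stand-in backbone
model.classifier[6] = nn.Linear(4096, 3)                 # e.g., 3 MRI classes (assumed)

augment = transforms.Compose([                           # augmentation for stages 2-3
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
])

# Stage 1: train all layers on the original dataset.
opt1 = optim.Adam(model.parameters(), lr=1e-4)

# Stage 2: freeze the convolutional feature extractor and train only the
# classifier head on the augmented dataset.
for p in model.features.parameters():
    p.requires_grad = False
opt2 = optim.Adam(model.classifier.parameters(), lr=1e-4)

# Stage 3: unfreeze everything and fine-tune the whole network on the
# augmented dataset, typically with a smaller learning rate.
for p in model.parameters():
    p.requires_grad = True
opt3 = optim.Adam(model.parameters(), lr=1e-5)

The training loops themselves are omitted; the point is the progression from full training, to a frozen backbone with augmented data, to full fine-tuning, which is how such discriminative-learning schedules are usually realized.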