Background: Accurate classification of brain tumors from magnetic resonance imaging (MRI) is essential for clinical decision-making but remains challenging due to tumor heterogeneity. Existing approaches often focus solely on classification or treat segmentation and classification as separate tasks, limiting overall performance and interpretability. Methods: This study proposes an end-to-end automated framework that integrates optimized tumor localization with multiclass classification. An optimized segmentation model is first employed to generate tumor masks, which are then overlaid on MRI scans to produce attention-enhanced inputs. These inputs are subsequently used to train a convolutional neural network (CNN) classifier. Experiments were conducted on a public dataset comprising 4,237 MRI scans across four categories: normal, glioma, meningioma, and pituitary tumors. Results: Three widely used segmentation models were systematically evaluated, with an optimized U-Net achieving the best performance (accuracy = 0.9939, Dice = 0.8893). Segmentation-guided classification consistently improved performance across six CNN architectures, with the most notable gains observed in heterogeneous tumor types such as glioma and meningioma. Among the classifiers, EfficientNet-V2 achieved the highest performance, with an accuracy of 0.9835, precision of 0.9858, recall of 0.9804, and F1-score of 0.9828. The framework was further validated on an independent external dataset, demonstrating consistent performance and robustness across diverse MRI sources. Conclusion: The proposed framework demonstrates strong potential for multiclass brain tumor classification by effectively combining segmentation and classification. This segmentation-driven approach not only enhances predictive accuracy but also improves interpretability, making it more suitable for clinical applications.
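The mask-overlay step described above can be illustrated with a minimal numpy sketch. The specific overlay scheme (attenuating background pixels by a fixed factor) and the 0.3 weight are assumptions for illustration, not details from the paper:

```python
import numpy as np

def attention_enhanced_input(scan, mask, background_weight=0.3):
    """Overlay a binary tumor mask on an MRI slice.

    Pixels inside the mask keep full intensity; background pixels are
    attenuated so the downstream classifier's attention is drawn to the
    tumor region. The 0.3 attenuation factor is illustrative only.
    """
    scan = scan.astype(np.float32)
    mask = mask.astype(bool)
    out = scan * background_weight
    out[mask] = scan[mask]
    return out

# Toy 4x4 slice with a 2x2 "tumor" region.
scan = np.full((4, 4), 100.0, dtype=np.float32)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
enhanced = attention_enhanced_input(scan, mask)
```

The enhanced array would then be fed to the CNN classifier in place of the raw scan.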
In recent decades, the proliferation of email communication has markedly escalated, resulting in a concomitant surge in spam emails that congest networks and present security risks. This study introduces an innovative spam detection method utilizing the Horse Herd Optimization Algorithm (HHOA), designed for binary classification within a multi-objective framework. The method proficiently identifies essential features, minimizing redundancy and improving classification precision. The proposed HHOA attained an impressive accuracy of 97.21% on the Kaggle email dataset, with a precision of 94.30%, recall of 90.50%, and F1-score of 92.80%. Compared to conventional techniques such as Support Vector Machine (93.89% accuracy), Random Forest (96.14% accuracy), and K-Nearest Neighbours (92.08% accuracy), HHOA exhibited enhanced performance with reduced computational complexity. The proposed method also demonstrated enhanced feature-selection efficiency, decreasing the number of selected features while maintaining high classification accuracy. The results underscore the efficacy of HHOA in spam identification and indicate its potential for further application in practical email filtering systems.
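A wrapper-style feature-selection objective of the kind HHOA optimizes can be sketched as a fitness function over binary feature masks. The trade-off weight and the validity handling below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def fitness(mask, evaluate_accuracy, sparsity_weight=0.1):
    """Score a binary feature mask: reward accuracy, penalize feature count.

    `evaluate_accuracy` is expected to train/evaluate a classifier on the
    selected columns and return accuracy in [0, 1]; the 0.1 trade-off
    weight is an illustrative choice, not taken from the HHOA paper.
    """
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():          # selecting no features is invalid
        return 0.0
    acc = evaluate_accuracy(mask)
    return acc - sparsity_weight * mask.mean()
```

An optimizer such as HHOA would then search the space of masks for the highest fitness, jointly maximizing accuracy and minimizing the selected-feature ratio.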
Visual diagnosis of skin cancer is challenging due to subtle inter-class similarities, variations in skin texture, the presence of hair, and inconsistent illumination. Deep learning models have shown promise in assisting early detection, yet their performance is often limited by the severe class imbalance present in dermoscopic datasets. This paper proposes CANNSkin, a skin cancer classification framework that integrates a convolutional autoencoder with latent-space oversampling to address this imbalance. The autoencoder is trained to reconstruct lesion images, and its latent embeddings are used as features for classification. To enhance minority-class representation, the Synthetic Minority Oversampling Technique (SMOTE) is applied directly to the latent vectors before classifier training. The encoder and classifier are first trained independently and later fine-tuned end to end. On the HAM10000 dataset, CANNSkin achieves an accuracy of 93.01%, a macro-F1 of 88.54%, and an ROC-AUC of 98.44%, demonstrating strong robustness across ten test subsets. Evaluation on the more complex ISIC 2019 dataset further confirms the model's effectiveness, where CANNSkin achieves 94.27% accuracy, 93.95% precision, 94.09% recall, and 99.02% F1-score, supported by high reconstruction fidelity (PSNR 35.03 dB, SSIM 0.86). These results demonstrate the effectiveness of the proposed latent-space balancing and fine-tuned representation learning as a new benchmark for robust and accurate skin cancer classification across heterogeneous datasets.
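The latent-space oversampling idea can be sketched with a minimal SMOTE-style interpolation written directly in numpy. This is a simplified stand-in for a full SMOTE implementation (e.g., imbalanced-learn's), operating on autoencoder embeddings as the abstract describes:

```python
import numpy as np

def smote_latent(z_min, n_new, k=5, rng=None):
    """Minimal SMOTE on latent vectors: interpolate between a minority
    embedding and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    z_min = np.asarray(z_min, dtype=float)
    n = z_min.shape[0]
    k = min(k, n - 1)
    # Pairwise distances between minority embeddings.
    d = np.linalg.norm(z_min[:, None] - z_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    out = []
    for _ in range(n_new):
        i = rng.integers(n)
        j = nbrs[i, rng.integers(k)]
        lam = rng.random()          # interpolation factor in [0, 1)
        out.append(z_min[i] + lam * (z_min[j] - z_min[i]))
    return np.stack(out)
```

The synthetic vectors are appended to the minority class before classifier training; because each new point lies on a segment between two real embeddings, it stays inside the minority manifold's local neighbourhood.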
Classifying job offers into occupational categories is a fundamental task in human resource information systems, as it improves and streamlines indexing, search, and matching between openings and job seekers. Comprehensive occupational databases such as O*NET or ESCO provide detailed taxonomies of interrelated positions that can be leveraged to align the textual content of postings with occupational categories, thereby facilitating standardization, cross-system interoperability, and access to metadata for each occupation (e.g., tasks, knowledge, skills, and abilities). In this work, we explore the effectiveness of fine-tuning existing language models (LMs) to classify job offers with occupational descriptors from O*NET. This enables a more precise assessment of candidate suitability by identifying the specific knowledge and skills required for each position, and helps automate recruitment processes by mitigating human bias and subjectivity in candidate selection. We evaluate three representative BERT-like models: BERT, RoBERTa, and DeBERTa. BERT serves as the baseline encoder-only architecture; RoBERTa incorporates advances in pretraining objectives and data scale; and DeBERTa introduces architectural improvements through disentangled attention mechanisms. The best performance was achieved with the DeBERTa model, although the other models also produced strong results, and no statistically significant differences were observed across models. We also find that these models typically reach optimal performance after only a few training epochs, and that training with smaller, balanced datasets is effective. Consequently, comparable results can be obtained with models that require fewer computational resources and less training time, facilitating deployment and practical use.
Distributed Denial-of-Service (DDoS) attacks pose severe threats to Industrial Control Networks (ICNs), where service disruption can cause significant economic losses and operational risks. Existing signature-based methods are ineffective against novel attacks, and traditional machine learning models struggle to capture the complex temporal dependencies and dynamic traffic patterns inherent in ICN environments. To address these challenges, this study proposes a deep feature-driven hybrid framework that integrates a Transformer, BiLSTM, and KNN to achieve accurate and robust DDoS detection. The Transformer component extracts global temporal dependencies from network traffic flows, while the BiLSTM captures fine-grained sequential dynamics. The learned embeddings are then classified using an instance-based KNN layer, enhancing decision-boundary precision. This cascaded architecture balances feature abstraction and locality preservation, improving both generalization and robustness. The proposed approach was evaluated on a newly collected real-time ICN traffic dataset and further validated on the public CIC-IDS2017 and Edge-IIoT datasets to demonstrate generalization. Comprehensive metrics including accuracy, precision, recall, F1-score, ROC-AUC, PR-AUC, false positive rate (FPR), and detection latency were employed. Results show that the hybrid framework achieves 98.42% accuracy with an ROC-AUC of 0.992 and an FPR below 1%, outperforming baseline machine learning and deep learning models. Robustness experiments under Gaussian noise perturbations confirmed stable performance with less than 2% accuracy degradation. Moreover, detection latency remained below 2.1 ms per sample, indicating suitability for real-time ICS deployment. In summary, the proposed hybrid temporal learning and instance-based classification model offers a scalable and effective solution for DDoS detection in industrial control environments. By combining global contextual modeling, sequential learning, and instance-based refinement, the framework demonstrates strong adaptability across datasets and resilience against noise, providing practical utility for safeguarding critical infrastructure.
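The instance-based final stage of such a cascade can be sketched as a plain Euclidean majority-vote KNN over learned embeddings (here in numpy; the actual framework would feed it Transformer/BiLSTM features, and its distance metric and k are assumptions):

```python
import numpy as np

def knn_predict(train_emb, train_labels, query_emb, k=3):
    """Instance-based classification over learned embeddings:
    Euclidean k-nearest-neighbour majority vote."""
    train_emb = np.asarray(train_emb, dtype=float)
    query_emb = np.asarray(query_emb, dtype=float)
    labels = np.asarray(train_labels)
    preds = []
    for q in query_emb:
        d = np.linalg.norm(train_emb - q, axis=1)
        nearest = labels[np.argsort(d)[:k]]     # labels of k closest points
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])   # majority vote
    return np.array(preds)
```

Classifying in embedding space rather than raw feature space is what lets the deep front-end handle abstraction while the KNN head preserves locality around known attack instances.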
Through tracing the background and customary usage of fine-grained sedimentary rock classification and terminology, and comparing current “sedimentary petrology” textbooks and monographs, this paper proposes a classification scheme for fine-grained sedimentary rocks and clarifies related terminology. The comprehensive analysis indicates that the classification of clastic rocks, volcanic clastic rocks, chemical rocks, and biogenic (carbonate) rocks is unified, and the definitions of terms such as lamination, bedding, and beds are consistent. However, there is disagreement on the definition of “mud”: European and American scholars commonly use the term “mud” to include silt and clay (particle size less than 0.0625 mm), whereas Chinese scholars equate “mud” with “clay” (particle size less than 0.0039 mm or less than 0.01 mm). Combined with the discussion of terms such as sedimentary structures (bedding, lamination, and lamellation), shale, mudstone, mudrocks/argillaceous rocks, and mud shale, it is recommended to use “fine-grained sedimentary rocks” as the general term for all sedimentary rocks composed of fine-grained materials with particle size less than 0.0625 mm, including claystone/mudrocks and siltstone. Claystone/mudrocks are further classified into argillaceous (or clayey) mudstone/shale, calcareous mudstone/shale, siliceous mudstone/shale, silty mudstone/shale, and silt-containing mudstone/shale. Argillaceous (or clayey) mudstone/shale requires a content of clay minerals or clay-sized particles exceeding 50%; the other mudstones/shales require a content of fine particles (particle size less than 0.0625 mm) exceeding 50%. The commonly used term “shale” should not include siltstone. It is necessary to establish a reasonable, standardized, and applicable classification scheme for fine-grained sedimentary rocks in the future. Integrated shale microfacies research at the thin-section scale should be carried out and combined with well-logging data interpretation and seismic attribute analysis, so that a geological model of lithology/lithofacies can be iteratively upgraded to accurately determine sweet layers, locate target layers, and evaluate favorable areas.
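The grain-size boundaries under discussion can be made concrete with a small helper. It encodes only the two cutoffs cited in the text (0.0625 mm fine-grained boundary, 0.0039 mm clay/silt boundary in the European/American usage); it is a sketch of that usage, not a full classification scheme:

```python
def grain_size_class(d_mm):
    """Map a particle diameter (mm) to the size classes discussed above,
    using the 0.0625 mm fine-grained boundary and the 0.0039 mm
    clay/silt boundary."""
    if d_mm < 0.0039:
        return "clay"
    if d_mm < 0.0625:
        return "silt"
    return "sand or coarser"
```

Under the recommended terminology, anything returning "clay" or "silt" here falls within "fine-grained sedimentary rocks" when lithified.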
With the recent increase in data volume and diversity, traditional text representation techniques struggle to capture context, particularly in environments with sparse data. To address these challenges, this study proposes a new model, the Masked Joint Representation Model (MJRM). MJRM approximates the original hypothesis by leveraging multiple elements in a limited context, and dynamically adapts to changes in data characteristics through three main components. First, masking-based representation learning, termed selective dynamic masking, integrates topic modeling and sentiment clustering to generate and train multiple instances across different data subsets, whose predictions are then aggregated with optimized weights; this design alleviates sparsity, suppresses noise, and preserves contextual structure. Second, regularization-based improvements are applied. Third, techniques for addressing sparse data are used to perform final inference. As a result, MJRM improves performance by up to 4% compared with existing AI techniques. In our experiments, we analyzed the contribution of each factor, demonstrating that masking, dynamic learning, and the aggregation of multiple instances complement each other to improve performance. This shows that a masking-based multi-learning strategy is effective for context-aware sparse text classification and remains useful even in challenging situations such as data shortage or shifts in data distribution. We expect the approach to extend to diverse fields such as sentiment analysis, spam filtering, and domain-specific document classification.
Container transportation is pivotal in global trade due to its efficiency, safety, and cost-effectiveness. However, structural defects, particularly in grapple slots, can result in cargo damage, financial loss, and elevated safety risks, including container drops during lifting operations. Timely and accurate inspection before and after transit is therefore essential. Traditional inspection methods rely heavily on manual observation of internal and external surfaces, which is time-consuming, resource-intensive, and prone to subjective error. Container roofs pose additional challenges due to limited visibility, while grapple slots are especially vulnerable to wear from frequent use. This study proposes a two-stage automated detection framework targeting defects in container roof grapple slots. In the first stage, YOLOv7 is employed to localize grapple-slot regions with high precision. In the second stage, ResNet50 classifies the extracted slots as either intact or defective. The results from both stages are integrated into a human-machine interface for real-time visualization and user verification. Experimental evaluations demonstrate that YOLOv7 achieves a 99% detection rate at 100 frames per second (FPS), while ResNet50 attains 87% classification accuracy at 34 FPS. Compared with state-of-the-art methods, the proposed system offers significant improvements in speed, reliability, and usability, enabling efficient defect identification and visual reconfirmation via the interface.
Evaluating the adversarial robustness of classification algorithms is a crucial domain in machine learning. However, current methods lack measurable and interpretable metrics. To address this issue, this paper introduces a visual evaluation index named the confidence centroid skewing quadrilateral, which is built on a classification confidence-based confusion matrix. It offers a quantitative and visual comparison of adversarial robustness among different classification algorithms and enhances the intuitiveness and interpretability of attack impacts. We first conduct a validity test and sensitivity analysis of the method, and then demonstrate its effectiveness through experiments on five classification algorithms, namely artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), convolutional neural network (CNN), and Transformer, against three adversarial attacks: the fast gradient sign method (FGSM), DeepFool, and projected gradient descent (PGD).
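One plausible reading of a "classification confidence-based confusion matrix" is a matrix whose cells accumulate predicted-class probability instead of raw counts, so that low-confidence predictions contribute less. The sketch below illustrates that interpretation only; it is not the paper's exact definition, and the quadrilateral index built on top of it is omitted:

```python
import numpy as np

def confidence_confusion_matrix(y_true, proba):
    """Confusion matrix weighted by classification confidence:
    cell (t, p) accumulates the predicted-class probability of each
    sample with true label t and predicted label p."""
    proba = np.asarray(proba, dtype=float)
    n_cls = proba.shape[1]
    cm = np.zeros((n_cls, n_cls))
    y_pred = proba.argmax(axis=1)
    for t, p, row in zip(y_true, y_pred, proba):
        cm[t, p] += row[p]
    return cm
```

Comparing such matrices before and after an attack exposes not just flipped labels but how much confidence the attack stripped away.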
Climate change and anthropogenic activities have profoundly affected coastal systems, making geomorphological research a critical focus for coastal protection and sustainable development. In this study, a comprehensive classification of beach states around Hainan Island is conducted for the first time by utilizing the Ω-RTR model and geological control modes. Six distinct beach states ranging from dissipative to reflective are identified: barred dissipative or non-barred dissipative beaches (BD or NBD), barred beaches (B), low-tide terrace or low-tide bar with rip (LTTR or LTBR), and the reflective state (R). Among these, the BD and B types are predominant on Hainan Island. Notably, the beach states are subject to multiple factors, such as hydrodynamic forcings, geomorphic features, and underlying substrates, and exhibit remarkable spatiotemporal variability. During extreme events, hydrodynamic forcings affect beach states more substantially than geological and geomorphic features do, leading to a more homogeneous distribution of beach states. Under normal circumstances, beach states are predominantly controlled by geological and geomorphic features, which have a pronounced influence on beach morphology and stability. For example, hard substrates underpin wide, stable dissipative beaches, whereas softer substrates lead to narrower, erosion-prone beaches. Three geological control modes are identified: gently sloping hard substrates with dissipative beaches, moderately sloping hard substrates with seasonally variable reflective beaches, and steeply sloping soft substrates with dynamic sandbar-dominated beaches. These findings highlight the necessity of integrating geological settings in tandem with hydrodynamic forcings into coastal management practices. A dual-mode strategy is proposed: maintaining geomorphic self-organization on hard-substrate coasts under normal conditions, and implementing hybrid engineering-ecological measures (e.g., artificial sand replenishment and vegetation restoration) on erosion-prone soft substrates.
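The two dimensionless parameters behind the Ω-RTR model can be sketched directly. Ω is the dimensionless fall velocity (Dean parameter) Ω = Hb / (ws·T) and RTR is the relative tide range (tide range / Hb); the <1 and >6 reflective/dissipative cutoffs follow the classic Wright-Short scheme, while the study's Ω-RTR variant refines the intermediate band, which is collapsed here:

```python
def beach_state(hb_m, ws_m_s, t_s, tide_range_m):
    """Classify beach state from breaker height Hb (m), sediment fall
    velocity ws (m/s), wave period T (s), and tide range (m)."""
    omega = hb_m / (ws_m_s * t_s)     # dimensionless fall velocity
    rtr = tide_range_m / hb_m         # relative tide range
    if omega < 1:
        state = "reflective"
    elif omega > 6:
        state = "dissipative"
    else:
        state = "intermediate (e.g., barred, low-tide terrace)"
    return omega, rtr, state
```

A high RTR at a given Ω shifts intermediate states toward tide-dominated forms such as the low-tide terrace, which is where the second axis of the Ω-RTR diagram comes in.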
Accurate soil classification is essential for pavement design; however, the traditional American Association of State Highway and Transportation Officials (AASHTO) classification system relies on extensive laboratory testing and subjective judgment. This study presents an artificial intelligence (AI) enhanced framework for AASHTO soil classification. A synthetic dataset of 349,015 samples was generated using parameter ranges for the five AASHTO input variables to support model development. Four machine learning models were trained, analyzed, and compared; among them, the random forest (RF) consistently achieved the highest accuracy (100%) in predicting AASHTO soil groups. Feature importance analysis indicates that the percent passing the No. 200 sieve is the most influential factor. The models also remain reliable under partial input loss, though accuracy is most sensitive to the absence of percent passing the No. 200 sieve, dropping to 85.8%, while omitting any other variable keeps accuracy at 93.1% or above. Prediction uncertainty analysis using Monte Carlo simulations characterizes model performance within a 95% confidence interval. Overall, the proposed AI models can accurately and efficiently predict AASHTO soil groups, even from incomplete datasets, for geotechnical engineering applications.
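The rule structure the models learn can be glimpsed in a simplified fragment of the AASHTO system. The sketch below covers only the standard silt-clay groups A-4 to A-7 via the usual liquid-limit/plasticity-index cutoffs; the full system (granular subgroups, group index) is deliberately omitted, so treat this as an illustrative fragment rather than a complete classifier:

```python
def aashto_fine_grained(pass_200, liquid_limit, plasticity_index):
    """Simplified AASHTO grouping for silt-clay soils
    (more than 35% passing the No. 200 sieve)."""
    if pass_200 <= 35:
        # Granular soils need gradation and other tests to subdivide.
        return "granular (A-1/A-2/A-3; needs further tests)"
    if plasticity_index <= 10:
        return "A-4" if liquid_limit <= 40 else "A-5"
    return "A-6" if liquid_limit <= 40 else "A-7"
```

The dominance of the No. 200 percentage in the paper's feature-importance analysis matches its role here: it is the first split separating granular from silt-clay materials.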
With the evolution of next-generation network technologies, the complexity of network management has significantly increased, and the means of network attack have diversified, bringing new challenges to network traffic classification. This paper presents a general AI-driven network traffic classification workflow and elaborates a traffic data and feature engineering framework. Most importantly, it analyzes the concept and causes of data distribution shifts in network traffic, proposing detection methods and countermeasures. Experimental results on real traffic collected at different time intervals show that application evolution can induce data distribution shifts, which in turn lead to a noticeable degradation in traffic classification performance. Comparative drift detection experiments further confirm that such shifts are more evident over long-term intervals, while short-term traffic remains relatively stable. These findings demonstrate the necessity of incorporating drift-aware mechanisms into AI-driven network traffic classification systems.
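One simple way to detect the distribution shifts described above is the Population Stability Index over a traffic feature, comparing a reference window against a current one. This is a generic detector for illustration, not the paper's specific method; the ~0.2 alarm threshold is a common rule of thumb:

```python
import numpy as np

def population_stability_index(ref, cur, bins=10):
    """PSI between a reference feature sample and a current one.
    Values above roughly 0.2 are conventionally read as a
    significant distribution shift."""
    edges = np.histogram_bin_edges(ref, bins=bins)
    p = np.histogram(ref, bins=edges)[0] / len(ref)
    q = np.histogram(cur, bins=edges)[0] / len(cur)
    eps = 1e-6                         # avoid log(0) for empty bins
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum((p - q) * np.log(p / q)))
```

Run per feature over sliding time windows, such a statistic flags the long-interval drift the experiments observed while staying quiet on stable short-term traffic.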
Arrhythmias occur frequently in clinical practice, but accurately distinguishing subtle rhythm abnormalities remains an ongoing difficulty for the research community in ECG-based studies. A review of existing work suggests two main contributing factors: the uneven distribution of arrhythmia classes and the limited expressiveness of the features learned by current models. To overcome these limitations, this study proposes a dual-path multimodal framework, termed DM-EHC (Dual-Path Multimodal ECG Heartbeat Classifier), for ECG-based heartbeat classification. The proposed framework links 1D ECG temporal features with 2D time-frequency features; with these dual paths, the model can process more dimensions of feature information. The MIT-BIH arrhythmia database was selected as the baseline dataset for the experiments. Experimental results show that the proposed method outperforms single-modality baselines and performs better for certain specific types of arrhythmias, achieving mean precision, recall, and F1-score of 95.14%, 92.26%, and 93.65%, respectively. These results indicate that the framework is robust and has potential value in automated arrhythmia classification.
Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrains. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally impacted by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
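The spectrogram front-end of such a pipeline can be sketched with a Hann-windowed short-time FFT in plain numpy. A Mel filter bank (omitted here) would be applied on top of this log-magnitude spectrogram to obtain the Mel-spectrograms fed to the CNN; the frame and hop sizes are illustrative choices:

```python
import numpy as np

def log_spectrogram(signal, n_fft=256, hop=128):
    """Log-magnitude spectrogram via a Hann-windowed short-time FFT."""
    win = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * win
              for i in range(0, len(signal) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)) ** 2
    return 10 * np.log10(spec + 1e-10)   # dB scale; eps avoids log(0)

# 1 s of a 440 Hz tone sampled at 8 kHz.
sr = 8000
t = np.arange(sr) / sr
S = log_spectrogram(np.sin(2 * np.pi * 440 * t))
```

Each spectrogram row is one time frame; the frequency of the tone shows up as a peak near bin 440·n_fft/sr, which is the kind of spectral signature (here a pure tone, for fire sounds the crackle band) the classifier learns from.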
Objective: To develop a dual-branch deep learning framework for accurate multi-label classification of fundus diseases, addressing two key limitations of existing automated diagnostic methods: insufficient complementary feature extraction and inadequate cross-modal feature fusion. Methods: The fundus multi-label classification dataset with 12 disease categories (FMLC-12) was constructed by integrating complementary samples from the Ocular Disease Intelligent Recognition (ODIR) dataset and the Retinal Fundus Multi-Disease Image Dataset (RFMiD), yielding 6,936 fundus images across 12 retinal pathology categories; the framework was validated on both FMLC-12 and ODIR. Inspired by the holistic multi-regional assessment principle of the Five Wheels theory in traditional Chinese medicine (TCM) ophthalmology, the dual-branch multi-label network (DBMNet) was developed as a novel framework integrating complementary visual feature extraction with pathological correlation modeling. The architecture employs a TransNeXt backbone within a dual-branch design: one branch processes red-green-blue (RGB) images to capture color-dependent features such as vascular patterns and lesion morphology, while the other processes grayscale-converted images to enhance subtle textural details and contrast variations. A feature interaction module (FIM) integrates the multi-scale features from both branches. Comprehensive ablation studies were conducted to evaluate the contributions of the dual-branch architecture and the FIM. The performance of DBMNet was compared against four state-of-the-art methods, including an EfficientNet ensemble, a transfer learning-based convolutional neural network (CNN), BFENet, and EyeDeep-Net, using mean average precision (mAP), F1-score, and Cohen's kappa coefficient. Results: The dual-branch architecture improved mAP from 34.41% to 44.24% over the single-branch TransNeXt baseline, and the addition of the FIM further boosted mAP to 49.85%, a total gain of 15.44 percentage points. On FMLC-12, DBMNet achieved an mAP of 49.85%, a Cohen's kappa coefficient of 62.14%, and an F1-score of 70.21%. Compared with BFENet (mAP: 45.42%, kappa: 46.64%, F1-score: 71.34%), DBMNet outperformed it by 4.43 percentage points in mAP and 15.50 percentage points in kappa, while BFENet achieved a marginally higher F1-score. On ODIR, DBMNet achieved an F1-score of 85.50%, comparable to state-of-the-art methods. Conclusion: DBMNet effectively integrates RGB and grayscale visual modalities through a dual-branch architecture, significantly improving multi-label fundus disease classification. The framework not only addresses the insufficient feature fusion of existing methods but also balances detection across common and rare diseases, providing a promising and clinically applicable pathway toward standardized, intelligent fundus disease classification.
This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and the remaining challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification; non-open-access publications, books, and non-English articles were excluded. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and the availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned more than 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption after 2023. Many studies lacked external validation and were evaluated on only a few benchmark datasets, raising concerns about generalizability and dataset bias, and few addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited by the lack of validation, interpretability concerns, and real-world deployment barriers.
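The Dice score used for benchmarking segmentation throughout the reviewed studies is the overlap ratio 2|A∩B| / (|A| + |B|); a minimal implementation makes the metric concrete (the small epsilon guarding the empty-mask case is a common convention, not from any one reviewed paper):

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |true|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)
```

A Dice above 0.90, the level the top hybrid models report, means the predicted and reference tumor masks share over 90% of their combined area in this harmonic sense.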
Legal case classification involves the categorization of legal documents into predefined categories, which facilitates legal information retrieval and case management. However, real-world legal datasets often suffer from class imbalance due to the uneven distribution of case types across legal domains. This leads to biased model performance: high accuracy for overrepresented categories and underperformance for minority classes. To address this issue, we propose a data augmentation method that selectively masks unimportant terms within a document while preserving key terms from the perspective of the legal domain. This approach enhances data diversity and improves the generalization capability of conventional models. Our experiments demonstrate consistent improvements from the proposed augmentation strategy in accuracy and F1-score across all models, validating its effectiveness for legal case classification.
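The selective-masking idea can be sketched in a few lines: tokens on a domain key-term list are never replaced, so augmented documents keep their discriminative legal vocabulary. The 15% default rate and the "[MASK]" placeholder are illustrative choices, not the paper's settings:

```python
import random

def selective_mask(tokens, key_terms, mask_prob=0.15, rng=None):
    """Mask a fraction of non-key tokens, preserving domain key terms."""
    rng = rng or random.Random()
    key = set(key_terms)
    return [tok if tok in key or rng.random() >= mask_prob else "[MASK]"
            for tok in tokens]
```

Each pass over a document yields a different masked variant, so the same corpus can supply several augmented copies per minority-class example.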
Near-Earth objects are important not only for studying the early formation of the Solar System but also because they pose a serious hazard to humanity when they make close approaches to the Earth. Study of their physical properties can provide useful information on their origin, evolution, and hazard to human beings. However, it remains challenging to investigate small, newly discovered near-Earth objects because of our limited observational window. This investigation seeks to determine the visible colors of near-Earth asteroids (NEAs), perform an initial taxonomic classification based on those colors, and analyze possible correlations between the distribution of taxonomic classes and asteroid size or orbital parameters. Observations were performed in the broadband BVRI Johnson-Cousins photometric system, applied to images from the Yaoan High Precision Telescope and the 1.88 m telescope at the Kottamia Astronomical Observatory. We present new photometric observations of 84 near-Earth asteroids and classify 80 of them taxonomically based on their photometric colors. We find that nearly half (46.3%) of the objects in our sample can be classified as S-complex, 26.3% as C-complex, 6% as D-complex, and 15.0% as X-complex; the remainder belong to the A- or V-types. Additionally, we identify three P-type NEAs in our sample according to the Tholen scheme. The fractional abundance of C/X-complex members with absolute magnitude H ≥ 17.0 is more than twice that of members with H < 17.0. However, the fractions of C- and S-complex members with diameters ≤ 1 km and > 1 km are nearly equal, while X-complex members tend to have sub-kilometer diameters. In our sample, C/D-complex objects predominate among those with a Jovian Tisserand parameter T_J < 3.1; these bodies could have a cometary origin. C- and S-complex members account for a considerable proportion of the asteroids that are potentially hazardous.
Background: Lumbar disc degeneration (LDD) displays considerable heterogeneity in terms of clinical features and pathological changes. However, researchers have not clearly determined whether the transcriptome variations in LDD could be used to identify or interpret the causes of heterogeneity in clinical features. This study aimed to identify the transcriptomic classification of degenerated discs in LDD patients and whether the molecular subtypes of LDD could be accurately predicted using clinical features. Methods: One hundred and twenty-two nucleus pulposus (NP) tissues from 108 patients were consecutively collected for bulk RNA sequencing (RNA-seq). An unsupervised clustering method was employed to analyze the bulk RNA matrix. Differential analysis was performed to characterize the transcriptional signatures and subtype-specific extracellular matrix (ECM) dysregulation. The cell subpopulation states of each subtype were inferred by integrating bulk and single-cell sequencing datasets. Transwell and dual-luciferase reporter gene assays were employed to investigate possible molecular mechanisms involved. Machine learning diagnostic prediction models were developed to correlate molecular classification with clinical features. Results: LDD was classified into 4 subtypes with distinct molecular signatures and ECM remodeling: C1 with collagenesis, C2 with ossification, C3 with low chondrogenesis, and C4 with fibrogenesis. Chond1-3 in C1 dominated disc collagenesis via the activation of the mechanosensors TRPV4 and PIEZO1; NP progenitor cells in C2 exhibited chondrogenic and osteogenic phenotypes; Chond1 in C3 was linked to a disrupted hypoxic microenvironment leading to reduced chondrogenesis; macrophages in C4 played a crucial role in disc fibrogenesis via the secretion of tumor necrosis factor-α (TNF-α). Furthermore, the random forest diagnostic prediction model was proven to have robust performance [area under the receiver operating characteristic (ROC) curve: 0.9312; accuracy: 0.84] in stratifying the molecular subtypes of LDD based on 12 clinical features. Conclusions: Our study delineates 4 distinct molecular subtypes of LDD that can be accurately stratified on the basis of clinical features. The identification of these subtypes would facilitate precise diagnostics and guide the development of personalized treatment strategies for LDD.
With the rapid development of digital culture, a large number of cultural texts are presented in digital, networked form. These texts have significant characteristics such as sparsity, real-time generation, and non-standard expression, which pose serious challenges to traditional classification methods. To cope with these problems, this paper proposes a new ASSC (ALBERT, SVD, Self-Attention and Cross-Entropy)-TextRCNN digital cultural text classification model. Based on the TextRCNN framework, the ALBERT pre-trained language model is introduced to improve the depth and accuracy of semantic embedding. Combined with a dual attention mechanism, the model's ability to capture and model potentially key information in short texts is strengthened. Singular Value Decomposition (SVD) is used to replace the traditional max pooling operation, which effectively reduces the feature loss rate and retains more key semantic information. The cross-entropy loss function is used to optimize the prediction results, making the model more robust in learning the class distribution. The experimental results indicate that, in the digital cultural text classification task, compared to the baseline model, the proposed ASSC-TextRCNN method achieves an 11.85% relative improvement in accuracy and an 11.97% relative increase in F1 score, while the relative error rate decreases by 53.18%. This achievement not only validates the effectiveness and advanced nature of the proposed approach but also offers a novel technical route and methodological underpinnings for the intelligent analysis and dissemination of digital cultural texts. It holds great significance for promoting the in-depth exploration and value realization of digital culture.
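The SVD-for-max-pooling idea above can be sketched outside the full model. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: it compresses a feature map with a truncated SVD so that less information is discarded than by a hard per-channel max (the function name `svd_pool`, the rank `k`, and the random feature map are all illustrative).

```python
import numpy as np

def svd_pool(features, k=2):
    """Compress a (timesteps, channels) feature map by keeping only the
    top-k singular directions, then averaging over time, instead of
    taking a per-channel max."""
    U, s, Vt = np.linalg.svd(features, full_matrices=False)
    # Rank-k reconstruction retains the dominant semantic structure
    # while discarding less than a hard max-pooling step would.
    low_rank = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return low_rank.mean(axis=0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 8))   # stand-in for TextRCNN hidden states
pooled = svd_pool(feats, k=2)
print(pooled.shape)                # (8,)
```

The pooled vector has the same dimensionality a max-pooled vector would, so it can drop into the rest of the pipeline unchanged.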
Abstract: Background: Accurate classification of brain tumors from Magnetic Resonance Imaging (MRI) is essential for clinical decision-making but remains challenging due to tumor heterogeneity. Existing approaches often focus solely on classification or treat segmentation and classification as separate tasks, limiting overall performance and interpretability. Methods: This study proposes an end-to-end automated framework that integrates optimized tumor localization with multiclass classification. An optimized segmentation model is first employed to generate tumor masks, which are then overlaid on MRI scans to produce attention-enhanced inputs. These inputs are subsequently used to train a convolutional neural network (CNN) classifier. Experiments were conducted on a public dataset comprising 4,237 MRI scans across four categories: normal, glioma, meningioma, and pituitary tumors. Results: Three widely used segmentation models were systematically evaluated, with an optimized U-Net achieving the best performance (accuracy = 0.9939, Dice = 0.8893). Segmentation-guided classification consistently improved performance across six CNN architectures, with the most notable gains observed in heterogeneous tumor types such as glioma and meningioma. Among the classifiers, EfficientNet-V2 achieved the highest performance, with an accuracy of 0.9835, precision of 0.9858, recall of 0.9804, and F1-score of 0.9828. The framework was further validated on an independent external dataset, demonstrating consistent performance and robustness across diverse MRI sources. Conclusion: The proposed framework demonstrates strong potential for multiclass brain tumor classification by effectively combining segmentation and classification. This segmentation-driven approach not only enhances predictive accuracy but also improves interpretability, making it more suitable for clinical applications.
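The mask-overlay step described above can be approximated with a simple blending operation. This is a hedged sketch, not the paper's exact method: the function name `attention_enhance` and the attenuation factor `alpha` are assumptions.

```python
import numpy as np

def attention_enhance(scan, mask, alpha=0.6):
    """Overlay a binary tumor mask on an MRI slice: tumor pixels keep
    full intensity while background is attenuated, steering the CNN's
    attention toward the segmented region."""
    mask = mask.astype(scan.dtype)
    return scan * (alpha + (1.0 - alpha) * mask)

scan = np.random.default_rng(0).random((224, 224))       # normalized slice
mask = np.zeros((224, 224))
mask[80:140, 90:150] = 1                                  # e.g. U-Net output
enhanced = attention_enhance(scan, mask)
```

Inside the mask the image is unchanged; outside it is scaled by `alpha`, so the classifier still sees background context, only de-emphasized.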
Abstract: In recent decades, the proliferation of email communication has markedly escalated, resulting in a concomitant surge in spam emails that congest networks and present security risks. This study introduces an innovative spam detection method utilizing the Horse Herd Optimization Algorithm (HHOA), designed for binary classification within a multi-objective framework. The method proficiently identifies essential features, minimizing redundancy and improving classification precision. The suggested HHOA attained an impressive accuracy of 97.21% on the Kaggle email dataset, with precision of 94.30%, recall of 90.50%, and F1-score of 92.80%. Compared to conventional techniques, such as Support Vector Machine (93.89% accuracy), Random Forest (96.14% accuracy), and K-Nearest Neighbours (92.08% accuracy), HHOA exhibited enhanced performance with reduced computational complexity. The suggested method demonstrated enhanced feature selection efficiency, decreasing the number of selected features while maintaining high classification accuracy. The results underscore the efficacy of HHOA in spam identification and indicate its potential for further applications in practical email filtering systems.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2601).
Abstract: Visual diagnosis of skin cancer is challenging due to subtle inter-class similarities, variations in skin texture, the presence of hair, and inconsistent illumination. Deep learning models have shown promise in assisting early detection, yet their performance is often limited by the severe class imbalance present in dermoscopic datasets. This paper proposes CANNSkin, a skin cancer classification framework that integrates a convolutional autoencoder with latent-space oversampling to address this imbalance. The autoencoder is trained to reconstruct lesion images, and its latent embeddings are used as features for classification. To enhance minority-class representation, the Synthetic Minority Oversampling Technique (SMOTE) is applied directly to the latent vectors before classifier training. The encoder and classifier are first trained independently and later fine-tuned end-to-end. On the HAM10000 dataset, CANNSkin achieves an accuracy of 93.01%, a macro-F1 of 88.54%, and an ROC-AUC of 98.44%, demonstrating strong robustness across ten test subsets. Evaluation on the more complex ISIC 2019 dataset further confirms the model's effectiveness, where CANNSkin achieves 94.27% accuracy, 93.95% precision, 94.09% recall, and 99.02% F1-score, supported by high reconstruction fidelity (PSNR 35.03 dB, SSIM 0.86). These results demonstrate the effectiveness of the proposed latent-space balancing and fine-tuned representation learning as a new benchmark method for robust and accurate skin cancer classification across heterogeneous datasets.
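Applying SMOTE directly to latent vectors can be sketched without the autoencoder. Below is a minimal SMOTE-style interpolation written by hand; the helper `oversample_latent` and its parameters are illustrative, and a real pipeline would typically call an off-the-shelf SMOTE implementation on the encoder's embeddings.

```python
import numpy as np

def oversample_latent(z_min, n_new, k=5, rng=None):
    """Minimal SMOTE-style oversampling in latent space: each synthetic
    embedding lies on a segment between a minority sample and one of its
    k nearest minority-class neighbours."""
    rng = rng if rng is not None else np.random.default_rng(0)
    d = np.linalg.norm(z_min[:, None] - z_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours
    out = []
    for _ in range(n_new):
        i = rng.integers(len(z_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                      # interpolation weight
        out.append(z_min[i] + lam * (z_min[j] - z_min[i]))
    return np.stack(out)

z_minority = np.random.default_rng(1).normal(size=(20, 64))  # encoder outputs
synthetic = oversample_latent(z_minority, n_new=30)
```

The synthetic vectors can then be concatenated with the real embeddings before classifier training, balancing the classes in feature space rather than pixel space.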
Abstract: Classifying job offers into occupational categories is a fundamental task in human resource information systems, as it improves and streamlines indexing, search, and matching between openings and job seekers. Comprehensive occupational databases such as O*NET or ESCO provide detailed taxonomies of interrelated positions that can be leveraged to align the textual content of postings with occupational categories, thereby facilitating standardization, cross-system interoperability, and access to metadata for each occupation (e.g., tasks, knowledge, skills, and abilities). In this work, we explore the effectiveness of fine-tuning existing language models (LMs) to classify job offers with occupational descriptors from O*NET. This enables a more precise assessment of candidate suitability by identifying the specific knowledge and skills required for each position, and helps automate recruitment processes by mitigating human bias and subjectivity in candidate selection. We evaluate three representative BERT-like models: BERT, RoBERTa, and DeBERTa. BERT serves as the baseline encoder-only architecture; RoBERTa incorporates advances in pretraining objectives and data scale; and DeBERTa introduces architectural improvements through disentangled attention mechanisms. The best performance was achieved with the DeBERTa model, although the other models also produced strong results, and no statistically significant differences were observed across models. We also find that these models typically reach optimal performance after only a few training epochs, and that training with smaller, balanced datasets is effective. Consequently, comparable results can be obtained with models that require fewer computational resources and less training time, facilitating deployment and practical use.
Funding: Supported by the Extra High Voltage Power Transmission Company, China Southern Power Grid Co., Ltd.
Abstract: Distributed Denial-of-Service (DDoS) attacks pose severe threats to Industrial Control Networks (ICNs), where service disruption can cause significant economic losses and operational risks. Existing signature-based methods are ineffective against novel attacks, and traditional machine learning models struggle to capture the complex temporal dependencies and dynamic traffic patterns inherent in ICN environments. To address these challenges, this study proposes a deep feature-driven hybrid framework that integrates Transformer, BiLSTM, and KNN components to achieve accurate and robust DDoS detection. The Transformer component extracts global temporal dependencies from network traffic flows, while the BiLSTM captures fine-grained sequential dynamics. The learned embeddings are then classified using an instance-based KNN layer, enhancing decision boundary precision. This cascaded architecture balances feature abstraction and locality preservation, improving both generalization and robustness. The proposed approach was evaluated on a newly collected real-time ICN traffic dataset and further validated on the public CIC-IDS2017 and Edge-IIoT datasets to demonstrate generalization. Comprehensive metrics including accuracy, precision, recall, F1-score, ROC-AUC, PR-AUC, false positive rate (FPR), and detection latency were employed. Results show that the hybrid framework achieves 98.42% accuracy with an ROC-AUC of 0.992 and an FPR below 1%, outperforming baseline machine learning and deep learning models. Robustness experiments under Gaussian noise perturbations confirmed stable performance, with less than 2% accuracy degradation. Moreover, detection latency remained below 2.1 ms per sample, indicating suitability for real-time ICS deployment. In summary, the proposed hybrid temporal learning and instance-based classification model offers a scalable and effective solution for DDoS detection in industrial control environments. By combining global contextual modeling, sequential learning, and instance-based refinement, the framework demonstrates strong adaptability across datasets and resilience to noise, providing practical utility for safeguarding critical infrastructure.
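The final instance-based stage can be illustrated in isolation. Assuming the Transformer-BiLSTM stack has already produced flow embeddings (simulated here with two random clusters), a hand-rolled KNN vote over those embeddings might look like:

```python
import numpy as np

def knn_predict(emb_train, y_train, emb_query, k=3):
    """Instance-based classification on learned flow embeddings: label
    each query by majority vote among its k nearest training embeddings."""
    d = np.linalg.norm(emb_train[None, :, :] - emb_query[:, None, :], axis=-1)
    votes = y_train[np.argsort(d, axis=1)[:, :k]]
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.3, size=(30, 16))   # stand-ins for embeddings
attack = rng.normal(2.0, 0.3, size=(30, 16))   # produced by the deep stack
emb_train = np.vstack([benign, attack])
y_train = np.array([0] * 30 + [1] * 30)
pred = knn_predict(emb_train, y_train, attack[:5] + 0.01)
```

Because KNN decides from concrete neighbouring instances rather than a learned boundary, it preserves the locality of the embedding space, which is the rationale the abstract gives for the cascaded design.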
Funding: Supported by the Integrated Project of the National Natural Science Foundation and Enterprise Innovation Development Joint Foundation (U24B6004).
Abstract: Through tracing the background and customary usage of the classification and terminology of fine-grained sedimentary rocks, and comparing current “sedimentary petrology” textbooks and monographs, this paper proposes a classification scheme for fine-grained sedimentary rocks and clarifies related terminology. The comprehensive analysis indicates that the classification of clastic rocks, volcanic clastic rocks, chemical rocks, and biogenic (carbonate) rocks is unified, and the definitions of terms such as lamination, bedding and beds are consistent. However, there is disagreement on the definition of “mud”: European and American scholars commonly use the term “mud” to include silt and clay (particle size less than 0.0625 mm), whereas Chinese scholars equate “mud” with “clay” (particle size less than 0.0039 mm or less than 0.01 mm). Combined with the discussion of terms such as sedimentary structures (bedding, lamination and lamellation), shale, mudstone, mudrocks/argillaceous rocks and mud shale, it is recommended to use “fine-grained sedimentary rocks” as the general term for all sedimentary rocks composed of fine-grained materials with particle size less than 0.0625 mm, including claystone/mudrocks and siltstone. Claystone/mudrocks are further classified into argillaceous (or clayey) mudstone/shale, calcareous mudstone/shale, siliceous mudstone/shale, silty mudstone/shale and silt-containing mudstone/shale. Argillaceous (or clayey) mudstone/shale requires a content of clay minerals or clay-sized particles exceeding 50%; the other mudstones/shales require a content of the relevant fine particles (particle size less than 0.0625 mm) exceeding 50%. The commonly used term “shale” should not include siltstone. It is necessary to establish a reasonable, standardized, and applicable classification scheme for fine-grained sedimentary rocks in the future. Integrated shale microfacies research at the thin-section scale should be carried out and combined with well-logging data interpretation and seismic attribute analysis, so that a geological model of lithology/lithofacies can be iteratively upgraded to accurately determine sweet layers, locate target layers, and evaluate favorable areas.
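The recommended grain-size boundaries translate directly into a small classifier. The function below encodes only the thresholds stated above (0.0625 mm for fine-grained material, 0.0039 mm for clay-sized particles); the function name and output labels are illustrative.

```python
def classify_fine_grained(d_mm):
    """Bucket a particle diameter (mm) following the recommended scheme:
    everything below 0.0625 mm counts as fine-grained; within that range,
    particles below 0.0039 mm are clay-sized and the rest silt-sized."""
    if d_mm >= 0.0625:
        return "not fine-grained (sand or coarser)"
    return "clay-sized" if d_mm < 0.0039 else "silt-sized"

print(classify_fine_grained(0.01))    # silt-sized
print(classify_fine_grained(0.001))   # clay-sized
```

Note the scheme deliberately keys the outer boundary on 0.0625 mm, the European/American "mud" cutoff, rather than the 0.0039 mm clay cutoff.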
Funding: Supported by SungKyunKwan University and the BK21 FOUR (Graduate School Innovation) program funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).
Abstract: With the recent increase in data volume and diversity, traditional text representation techniques struggle to capture context, particularly in environments with sparse data. To address these challenges, this study proposes a new model, the Masked Joint Representation Model (MJRM). MJRM approximates the original hypothesis by leveraging multiple elements in a limited context. It dynamically adapts to changes in characteristics based on the data distribution through three main components. First, masking-based representation learning, termed selective dynamic masking, integrates topic modeling and sentiment clustering to generate and train multiple instances across different data subsets, whose predictions are then aggregated with optimized weights. This design alleviates sparsity, suppresses noise, and preserves contextual structure. Second, regularization-based improvements are applied. Third, techniques for addressing sparse data are used to perform final inference. As a result, MJRM improves performance by up to 4% compared to existing AI techniques. In our experiments, we analyzed the contribution of each factor, demonstrating that masking, dynamic learning, and aggregating multiple instances complement each other to improve performance. This shows that a masking-based multi-learning strategy is effective for context-aware sparse text classification and can be useful even in challenging situations such as data shortage or shifts in data distribution. We expect that the approach can be extended to diverse fields such as sentiment analysis, spam filtering, and domain-specific document classification.
Abstract: Container transportation is pivotal in global trade due to its efficiency, safety, and cost-effectiveness. However, structural defects, particularly in grapple slots, can result in cargo damage, financial loss, and elevated safety risks, including container drops during lifting operations. Timely and accurate inspection before and after transit is therefore essential. Traditional inspection methods rely heavily on manual observation of internal and external surfaces, which is time-consuming, resource-intensive, and prone to subjective errors. Container roofs pose additional challenges due to limited visibility, while grapple slots are especially vulnerable to wear from frequent use. This study proposes a two-stage automated detection framework targeting defects in container roof grapple slots. In the first stage, YOLOv7 is employed to localize grapple slot regions with high precision. In the second stage, ResNet50 classifies the extracted slots as either intact or defective. The results from both stages are integrated into a human-machine interface for real-time visualization and user verification. Experimental evaluations demonstrate that YOLOv7 achieves a 99% detection rate at 100 frames per second (FPS), while ResNet50 attains 87% classification accuracy at 34 FPS. Compared to state-of-the-art methods, the proposed system offers significant improvements in speed, reliability, and usability, enabling efficient defect identification and visual reconfirmation via the interface.
Abstract: Evaluating the adversarial robustness of classification algorithms is a crucial area of machine learning. However, current methods lack measurable and interpretable metrics. To address this issue, this paper introduces a visual evaluation index named the confidence centroid skewing quadrilateral, which is based on a classification confidence-based confusion matrix. It offers a quantitative and visual comparison of the adversarial robustness of different classification algorithms and enhances the intuitiveness and interpretability of attack impacts. We first conduct a validity test and sensitivity analysis of the method, and then demonstrate its effectiveness through experiments on five classification algorithms, including an artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), convolutional neural network (CNN) and Transformer, against three adversarial attacks: the fast gradient sign method (FGSM), DeepFool, and the projected gradient descent (PGD) attack.
Funding: The Science and Technology Basic Investigation Program of China, No. 2022FY202404.
Abstract: Climate change and anthropogenic activities have profoundly affected coastal systems, making geomorphological research a critical focus for coastal protection and sustainable development. In this study, a comprehensive classification of beach states around Hainan Island is conducted for the first time by utilizing the Ω-RTR model and geological control modes. Six distinct classic beach states ranging from dissipative to reflective are identified: barred dissipative or non-barred dissipative beaches (BD or NBD), barred beaches (B), low-tide terrace or low-tide bar with rip (LTTR or LTBR), and the reflective state (R). Among these, the BD and B types are predominant on Hainan Island. Notably, the beach states are subject to multiple factors, such as hydrodynamic forcings, geomorphic features and underlying substrates, and exhibit remarkable spatiotemporal variability. During extreme events, hydrodynamic forcings affect beach states more substantially than geological and geomorphic features do, leading to a more homogeneous distribution of beach states. Under normal circumstances, beach states are predominantly controlled by geological and geomorphic features. Coastal geological and geomorphic features have a pronounced influence on beach morphology and stability: for example, hard substrates underpin wide and stable dissipative beaches, whereas softer substrates lead to narrower, erosion-prone beaches. Three geological control modes are identified, namely, gently sloping hard substrates with dissipative beaches, moderately sloping hard substrates with seasonally variable reflective beaches, and steeply sloping soft substrates with dynamic sandbar-dominated beaches. These findings highlight the necessity of integrating geological settings in tandem with hydrodynamic forcings into coastal management practices. A dual-mode strategy is proposed: maintaining geomorphic self-organization on hard-substrate coasts under normal conditions and implementing hybrid engineering-ecological measures (e.g., artificial sand replenishment and vegetation restoration) on erosion-prone soft substrates.
Abstract: Accurate soil classification is essential for pavement design; however, the traditional American Association of State Highway and Transportation Officials (AASHTO) classification system relies on extensive laboratory testing and subjective judgment. This study presents an artificial intelligence (AI) enhanced framework for AASHTO soil classification. A synthetic dataset of 349,015 samples was generated using parameter ranges for five AASHTO input variables to support model development. Four machine learning models were trained, analyzed, and compared, with the random forest (RF) consistently achieving the highest accuracy of 100% in predicting AASHTO soil groups. Feature importance analysis indicates that the percentage passing the No. 200 sieve is the most influential factor. The models also remain reliable under partial input loss, although accuracy is most sensitive to the absence of the percentage passing the No. 200 sieve, dropping to 85.8%, while omitting any other variable maintains accuracies of at least 93.1%. Prediction uncertainty assessed with Monte Carlo simulations shows model performance within a 95% confidence interval. Overall, the proposed AI models can accurately and efficiently predict AASHTO soil groups from incomplete datasets for geotechnical engineering.
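The headline finding, that the percentage passing the No. 200 sieve dominates feature importance, is easy to reproduce in miniature. The toy example below uses synthetic stand-in data and scikit-learn's random forest; the feature ranges and labeling rule are assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for the five AASHTO inputs; column 0 plays the role of
# percent passing the No. 200 sieve, made (hypothetically) decisive here.
X = rng.uniform(0, 100, size=(400, 5))
y = (X[:, 0] > 50).astype(int)          # illustrative fine/coarse split

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.feature_importances_.argmax())  # 0: the sieve feature dominates
```

When the label truly hinges on one input, the forest's impurity-based importances concentrate on it, mirroring the sensitivity the study reports when that variable is missing.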
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20220607009.
Abstract: With the evolution of next-generation network technologies, the complexity of network management has significantly increased, and the means of network attacks have diversified, bringing new challenges to network traffic classification. This paper presents a general AI-driven network traffic classification workflow and elaborates on a traffic data and feature engineering framework. Most importantly, it analyzes the concept and causes of data distribution shifts in network traffic, proposing detection methods and countermeasures. Experimental results on real traffic collected at different time intervals show that application evolution can induce data distribution shifts, which in turn lead to a noticeable degradation in traffic classification performance. Comparative drift detection experiments further confirm that such shifts are more evident over long-term intervals, while short-term traffic remains relatively stable. These findings demonstrate the necessity of incorporating drift-aware mechanisms into AI-driven network traffic classification systems.
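A drift check of the kind described can be sketched with a two-sample Kolmogorov-Smirnov statistic computed by hand: treating one traffic feature at two collection times as the two samples, a large statistic flags a distribution shift. The feature values below are simulated, and the specific detector the paper uses is not reproduced here.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of two samples of one traffic feature."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(0)
old = rng.normal(0.0, 1.0, 2000)    # e.g. a flow feature, early interval
new = rng.normal(0.8, 1.0, 2000)    # same feature after app evolution
same = rng.normal(0.0, 1.0, 2000)   # control: no shift
print(ks_statistic(old, new) > ks_statistic(old, same))  # True
```

Thresholding the statistic (or its p-value) per feature gives a simple drift-aware trigger for retraining the classifier.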
Funding: Supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. IITP-2026-2020-0-01741) and the research fund of Hanyang University (HY-2025-1110).
Abstract: Arrhythmias are a frequently occurring phenomenon in clinical practice, but accurately distinguishing subtle rhythm abnormalities remains an ongoing difficulty for the research community in ECG-based studies. A review of existing studies suggests two main contributing factors: the uneven distribution of arrhythmia classes and the limited expressiveness of the features learned by current models. To overcome these limitations, this study proposes a dual-path multimodal framework, termed DM-EHC (Dual-Path Multimodal ECG Heartbeat Classifier), for ECG-based heartbeat classification. The proposed framework links 1D ECG temporal features with 2D time-frequency features; by setting up these dual paths, the model can process more dimensions of feature information. The MIT-BIH arrhythmia database was selected as the baseline dataset for the experiments. Experimental results show that the proposed method outperforms single-modality baselines and performs better on certain specific types of arrhythmias. The model achieved mean precision, recall, and F1 score of 95.14%, 92.26%, and 93.65%, respectively. These results indicate that the framework is robust and has potential value in automated arrhythmia classification.
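The second path's 2D time-frequency input can be sketched with a plain short-time Fourier transform; the paper's exact transform and parameters are not specified here, so the window and hop sizes are assumptions.

```python
import numpy as np

def log_spectrogram(x, win=64, hop=32):
    """Second path of a dual-path setup: turn a 1D ECG beat into a 2D
    time-frequency map via a windowed short-time Fourier transform."""
    frames = np.stack([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.log1p(spec)           # log-compress the power spectrum

t = np.linspace(0, 1, 360)          # one beat at 360 Hz (MIT-BIH rate)
beat = (np.sin(2 * np.pi * 8 * t)
        + 0.1 * np.random.default_rng(0).normal(size=360))
tf_map = log_spectrogram(beat)      # 2D input for the image branch
print(tf_map.shape)                 # (10, 33)
```

The raw `beat` array feeds the 1D temporal branch while `tf_map` feeds the 2D branch, giving the classifier both views of the same heartbeat.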
Funding: Funded by the Directorate of Research and Community Service, Directorate General of Research and Development, Ministry of Higher Education, Science and Technology, in accordance with the Implementation Contract for the Operational Assistance Program for State Universities, Research Program Number: 109/C3/DT.05.00/PL/2025.
Abstract: Sudden wildfires cause significant global ecological damage. While satellite imagery has advanced early fire detection and mitigation, image-based systems face limitations including high false alarm rates, visual obstructions, and substantial computational demands, especially in complex forest terrain. To address these challenges, this study proposes a novel forest fire detection model utilizing audio classification and machine learning. We developed an audio-based pipeline using real-world environmental sound recordings. Sounds were converted into Mel-spectrograms and classified via a Convolutional Neural Network (CNN), enabling the capture of distinctive fire acoustic signatures (e.g., crackling, roaring) that are minimally affected by visual or weather conditions. Internet of Things (IoT) sound sensors were crucial for generating complex environmental parameters to optimize feature extraction. The CNN model achieved high performance in stratified 5-fold cross-validation (92.4% ± 1.6 accuracy, 91.2% ± 1.8 F1-score) and on test data (94.93% accuracy, 93.04% F1-score), with 98.44% precision and 88.32% recall, demonstrating reliability across environmental conditions. These results indicate that the audio-based approach not only improves detection reliability but also markedly reduces computational overhead compared to traditional image-based methods. The findings suggest that acoustic sensing integrated with machine learning offers a powerful, low-cost, and efficient solution for real-time forest fire monitoring in complex, dynamic environments.
Funding: Natural Science Foundation of Hunan Province (2025JJ90031); Key Research and Development Program of Hunan Province of China (23A0273); Hunan Provincial Administration of Traditional Chinese Medicine (A2023048).
Abstract: Objective: To develop a dual-branch deep learning framework for accurate multi-label classification of fundus diseases, addressing the key limitations of insufficient complementary feature extraction and inadequate cross-modal feature fusion in existing automated diagnostic methods. Methods: The fundus multi-label classification dataset with 12 disease categories (FMLC-12) was constructed by integrating complementary samples from Ocular Disease Intelligent Recognition (ODIR) and the Retinal Fundus Multi-Disease Image Dataset (RFMiD), yielding 6,936 fundus images across 12 retinal pathology categories, and the framework was validated on both FMLC-12 and ODIR. Inspired by the holistic multi-regional assessment principle of the Five Wheels theory in traditional Chinese medicine (TCM) ophthalmology, the dual-branch multi-label network (DBMNet) was developed as a novel framework integrating complementary visual feature extraction with pathological correlation modeling. The architecture employs a TransNeXt backbone within a dual-branch design: one branch processes red-green-blue (RGB) images to capture color-dependent features, such as vascular patterns and lesion morphology, while the other processes grayscale-converted images to enhance subtle textural details and contrast variations. A feature interaction module (FIM) effectively integrates the multi-scale features from both branches. Comprehensive ablation studies were conducted to evaluate the contributions of the dual-branch architecture and the FIM. The performance of DBMNet was compared against four state-of-the-art methods, including EfficientNet Ensemble, a transfer-learning-based convolutional neural network (CNN), BFENet, and EyeDeep-Net, using mean average precision (mAP), F1-score, and Cohen's kappa coefficient. Results: The dual-branch architecture improved mAP over the single-branch TransNeXt baseline, increasing it from 34.41% to 44.24%, and the addition of the FIM further boosted mAP to 49.85%, a total gain of 15.44 percentage points. On FMLC-12, DBMNet achieved an mAP of 49.85%, a Cohen's kappa coefficient of 62.14%, and an F1-score of 70.21%. Compared with BFENet (mAP: 45.42%, kappa: 46.64%, F1-score: 71.34%), DBMNet outperformed it by 4.43 percentage points in mAP and 15.50 percentage points in kappa, while BFENet achieved a marginally higher F1-score. On ODIR, DBMNet achieved an F1-score of 85.50%, comparable to state-of-the-art methods. Conclusion: DBMNet effectively integrates RGB and grayscale visual modalities through a dual-branch architecture, significantly improving multi-label fundus disease classification. The framework not only addresses the insufficient feature fusion of existing methods but also balances detection across both common and rare diseases, providing a promising and clinically applicable pathway for standardized, intelligent fundus disease classification.
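The dual-branch input preparation can be sketched simply. The grayscale conversion below uses standard BT.601 luma weights as an assumption (the paper does not specify its conversion), and the channel replication is one common way to feed a grayscale image to a backbone that expects three channels.

```python
import numpy as np

def dual_branch_inputs(rgb):
    """Prepare DBMNet-style inputs: the RGB image for the color branch,
    and a grayscale conversion (BT.601 weights) that emphasizes texture
    and contrast for the second branch."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Replicate to 3 channels so a shared backbone can consume it.
    return rgb, np.repeat(gray[..., None], 3, axis=-1)

img = np.random.default_rng(0).random((224, 224, 3))
rgb_in, gray_in = dual_branch_inputs(img)
```

Both tensors share the same spatial shape, so the two branches can use identical backbones whose multi-scale features are later fused by an interaction module.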
Abstract: This systematic review comprehensively examines and compares deep learning methods for brain tumor segmentation and classification using MRI and other imaging modalities, focusing on recent trends from 2022 to 2025. The primary objective is to evaluate methodological advancements, model performance, dataset usage, and existing challenges in developing clinically robust AI systems. We included peer-reviewed journal articles and high-impact conference papers published between 2022 and 2025, written in English, that proposed or evaluated deep learning methods for brain tumor segmentation and/or classification; non-open-access publications, books, and non-English articles were excluded. A structured search was conducted across Scopus, Google Scholar, Wiley, and Taylor & Francis, with the last search performed in August 2025. Risk of bias was not formally quantified but was considered during full-text screening based on dataset diversity, validation methods, and the availability of performance metrics. We used narrative synthesis and tabular benchmarking to compare performance metrics (e.g., accuracy, Dice score) across model types (CNN, Transformer, hybrid), imaging modalities, and datasets. A total of 49 studies were included (43 journal articles and 6 conference papers). These studies spanned more than 9 public datasets (e.g., BraTS, Figshare, REMBRANDT, MOLAB) and utilized a range of imaging modalities, predominantly MRI. Hybrid models, especially ResViT and UNetFormer, consistently achieved high performance, with classification accuracy exceeding 98% and segmentation Dice scores above 0.90 across multiple studies. Transformers and hybrid architectures showed increasing adoption after 2023. Many studies lacked external validation and were evaluated on only a few benchmark datasets, raising concerns about generalizability and dataset bias, and few addressed clinical interpretability or uncertainty quantification. Despite promising results, particularly for hybrid deep learning models, widespread clinical adoption remains limited due to the lack of validation, interpretability concerns, and real-world deployment barriers.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [RS-2021-II211341, Artificial Intelligence Graduate School Program (Chung-Ang University)], and by the Chung-Ang University Graduate Research Scholarship in 2024.
Abstract: Legal case classification involves the categorization of legal documents into predefined categories, which facilitates legal information retrieval and case management. However, real-world legal datasets often suffer from class imbalance due to the uneven distribution of case types across legal domains. This leads to biased model performance, in the form of high accuracy for overrepresented categories and underperformance for minority classes. To address this issue, in this study we propose a data augmentation method that selectively masks unimportant terms within a document while preserving key terms from the perspective of the legal domain. This approach enhances data diversity and improves the generalization capability of conventional models. Our experiments demonstrate consistent improvements achieved by the proposed augmentation strategy in terms of accuracy and F1 score across all models, validating the effectiveness of the proposed method for legal case classification.
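The selective-masking idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the key-term list, mask rate, and `[MASK]` token are all illustrative assumptions.

```python
import random

def mask_augment(tokens, key_terms, mask_rate=0.15, mask_token="[MASK]", seed=None):
    """Randomly mask a fraction of non-key tokens, leaving domain key
    terms (e.g., legal vocabulary) untouched, to create augmented
    training variants of a document."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if tok not in key_terms and rng.random() < mask_rate:
            out.append(mask_token)
        else:
            out.append(tok)  # key terms are always preserved
    return out

# Toy example: augment a short legal sentence while keeping key legal terms.
key_terms = {"plaintiff", "negligence", "damages"}
tokens = "the plaintiff alleged negligence and sought damages in court".split()
augmented = mask_augment(tokens, key_terms, mask_rate=0.3, seed=0)
```

Because masking is restricted to non-key tokens, every augmented copy keeps the legally salient vocabulary intact while varying the surrounding context, which is what drives the diversity gain.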
Funding: Funded by the China National Space Administration (KJSP2023020105); supported by the National Key R&D Program of China (Grant No. 2023YFA1608100), the NSFC (Grant No. 62227901), and the Minor Planet Foundation; and supported by the Egyptian Science, Technology & Innovation Funding Authority (STDF) under Grant No. 48102.
Abstract: Near-Earth objects are important not only for studying the early formation of the Solar System, but also because they pose a serious hazard to humanity when they make close approaches to the Earth. Study of their physical properties can provide useful information on their origin, evolution, and the hazard they pose. However, it remains challenging to investigate small, newly discovered near-Earth objects because of our limited observational window. This investigation seeks to determine the visible colors of near-Earth asteroids (NEAs), perform an initial taxonomic classification based on visible colors, and analyze possible correlations between the distribution of taxonomic classes and asteroid size or orbital parameters. Observations were performed in the broadband BVRI Johnson–Cousins photometric system, applied to images from the Yaoan High Precision Telescope and the 1.88 m telescope at the Kottamia Astronomical Observatory. We present new photometric observations of 84 near-Earth asteroids and classify 80 of them taxonomically based on their photometric colors. We find that nearly half (46.3%) of the objects in our sample can be classified as S-complex, 26.3% as C-complex, 6% as D-complex, and 15.0% as X-complex; the remainder belong to the A- or V-types. Additionally, we identify three P-type NEAs in our sample according to the Tholen scheme. The fractional abundances of the C/X-complex members with absolute magnitude H ≥ 17.0 were more than twice as large as those with H < 17.0. However, the fractions of C- and S-complex members with diameters ≤ 1 km and > 1 km are nearly equal, while X-complex members tend to have sub-kilometer diameters. In our sample, the C/D-complex objects are predominant among those with a Jovian Tisserand parameter of T_J < 3.1. These bodies could have a cometary origin. C- and S-complex members account for a considerable proportion of the asteroids that are potentially hazardous.
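The color-based classification step rests on standard broadband color indices (B−V, V−R, V−I) computed from the calibrated magnitudes. The sketch below illustrates the idea; the complex boundaries used here are placeholder values for illustration only, not the Tholen-scheme boundaries applied in the study.

```python
def color_indices(b_mag, v_mag, r_mag, i_mag):
    """Return the standard broadband color indices from BVRI magnitudes."""
    return {"B-V": b_mag - v_mag, "V-R": v_mag - r_mag, "V-I": v_mag - i_mag}

def coarse_complex(bv, vr):
    """Very rough complex assignment from (B-V, V-R) colors.
    The thresholds below are illustrative placeholders, NOT the
    actual Tholen boundaries used in the paper."""
    if bv > 0.80 and vr > 0.43:
        return "S-complex"   # redder, silicate-like colors
    if bv < 0.75 and vr < 0.40:
        return "C-complex"   # neutral/flat, carbonaceous-like colors
    return "X-complex"       # intermediate colors

# Toy example: one asteroid's BVRI magnitudes (hypothetical values).
idx = color_indices(16.42, 15.60, 15.15, 14.78)
cls = coarse_complex(idx["B-V"], idx["V-R"])
```

In practice the classification compares the full color set against published mean colors of each taxonomic complex rather than fixed two-color cuts, but the index computation is exactly the magnitude differences shown here.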
Funding: Supported by the National Natural Science Foundation of China (32270887, 82272507, 32200654, 82430079, and 82472519), the National Key Research and Development Program of China (2022YFA1103202), the Chongqing High-End Medical Talents Program for the Middle-aged and Young (YXGD202408), the Army Scientific and Technological Innovation Talents Prioritized Support Program (2023-124), the Natural Science Foundation of Chongqing (CSTB2023NSCQ-ZDJO008), the Postdoctoral Innovative Talent Support Program (BX20220397), the Open Project of the State Key Laboratory of Trauma, Burns and Combined Injury (SFLKF202201), the Project for Enhancing Innovation of Army Medical University (2023XJS39), and the Talent Innovation Training Program at the Army Medical Center (ZXZYTSYS09).
Abstract: Background: Lumbar disc degeneration (LDD) displays considerable heterogeneity in terms of clinical features and pathological changes. However, researchers have not clearly determined whether transcriptome variations in LDD could be used to identify or interpret the causes of heterogeneity in clinical features. This study aimed to identify a transcriptomic classification of degenerated discs in LDD patients and to determine whether the molecular subtypes of LDD could be accurately predicted from clinical features. Methods: One hundred and twenty-two nucleus pulposus (NP) tissues from 108 patients were consecutively collected for bulk RNA sequencing (RNA-seq). An unsupervised clustering method was employed to analyze the bulk RNA matrix. Differential analysis was performed to characterize the transcriptional signatures and subtype-specific extracellular matrix (ECM) dysregulation. The cell subpopulation states of each subtype were inferred by integrating bulk and single-cell sequencing datasets. Transwell and dual-luciferase reporter gene assays were employed to investigate the possible molecular mechanisms involved. Machine learning diagnostic prediction models were developed to correlate the molecular classification with clinical features. Results: LDD was classified into 4 subtypes with distinct molecular signatures and ECM remodeling: C1 with collagenesis, C2 with ossification, C3 with low chondrogenesis, and C4 with fibrogenesis. Chond1-3 in C1 dominated disc collagenesis via activation of the mechanosensors TRPV4 and PIEZO1; NP progenitor cells in C2 exhibited chondrogenic and osteogenic phenotypes; Chond1 in C3 was linked to a disrupted hypoxic microenvironment leading to reduced chondrogenesis; macrophages in C4 played a crucial role in disc fibrogenesis via the secretion of tumor necrosis factor-α (TNF-α). Furthermore, the random forest diagnostic prediction model was shown to have robust performance [area under the receiver operating characteristic (ROC) curve: 0.9312; accuracy: 0.84] in
stratifying the molecular subtypes of LDD based on 12 clinical features. Conclusions: Our study delineates 4 distinct molecular subtypes of LDD that can be accurately stratified on the basis of clinical features. The identification of these subtypes would facilitate precise diagnostics and guide the development of personalized treatment strategies for LDD.
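The reported area under the ROC curve for such a classifier can be computed directly from predicted scores via the rank-sum (Mann–Whitney) formulation. A minimal pure-Python sketch, with toy labels and scores standing in for the model's actual outputs:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen
    negative, with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 positives and 3 negatives with one mis-ranked pair... pairs.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(labels, scores)  # 8 of 9 positive-negative pairs ranked correctly
```

For the multiclass subtype problem in the abstract, this binary AUC would be applied one-vs-rest per subtype and averaged, which is the usual way a single AUC is reported for a 4-class stratifier.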
Funding: Funded by the China National Innovation and Entrepreneurship Project Fund Innovation Training Program (202410451009).
Abstract: With the rapid development of digital culture, a large number of cultural texts are presented in digital and networked form. These texts exhibit significant characteristics such as sparsity, real-time generation, and non-standard expression, which pose serious challenges to traditional classification methods. To cope with these problems, this paper proposes a new ASSC (ALBERT, SVD, Self-Attention and Cross-Entropy)-TextRCNN digital cultural text classification model. Built on the TextRCNN framework, the ALBERT pre-trained language model is introduced to improve the depth and accuracy of semantic embedding. Combined with a dual attention mechanism, the model's ability to capture and model latent key information in short texts is strengthened. Singular Value Decomposition (SVD) is used to replace the traditional max pooling operation, which effectively reduces the feature loss rate and retains more key semantic information. The cross-entropy loss function is used to optimize the prediction results, making the model more robust in learning the class distribution. The experimental results indicate that, in the digital cultural text classification task, compared with the baseline model, the proposed ASSC-TextRCNN method achieves an 11.85% relative improvement in accuracy and an 11.97% relative increase in F1 score, while the relative error rate decreases by 53.18%. These results not only validate the effectiveness of the proposed approach but also offer a novel technical route and methodological underpinnings for the intelligent analysis and dissemination of digital cultural texts, which is of great significance for the in-depth exploration and value realization of digital culture.
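The idea of replacing max pooling with an SVD-based summary can be illustrated as follows. This is a sketch of the concept under assumed dimensions (12 tokens, 32-dim features, rank 4), not the paper's exact layer:

```python
import numpy as np

def svd_pool(features, k=4):
    """Pool a (seq_len, hidden) feature matrix via its top-k singular
    components. Unlike max pooling, which keeps one extreme value per
    channel, this retains a low-rank summary of the whole sequence,
    reducing feature loss."""
    # Economy-size SVD: features = U @ diag(s) @ Vt
    u, s, vt = np.linalg.svd(features, full_matrices=False)
    # Weight the top-k right singular vectors by their singular values
    # to obtain a fixed-size (k, hidden) summary, then flatten it.
    pooled = (s[:k, None] * vt[:k]).reshape(-1)
    return pooled

rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 32))   # e.g., 12 tokens with 32-dim features
vec = svd_pool(feats, k=4)          # fixed-length pooled vector
```

Because the output size depends only on `k` and the hidden width, the pooled vector stays fixed-length regardless of sequence length, just as a max-pooled vector would, while preserving the dominant directions of variation across tokens.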