In response to the scarcity of infrared aircraft samples and the tendency of traditional deep learning to overfit, a few-shot infrared aircraft classification method based on cross-correlation networks is proposed. This method combines two core modules: a simple parameter-free self-attention and a cross-attention. By analyzing the self-correlation and cross-correlation between support images and query images, it achieves effective classification of infrared aircraft under few-shot conditions. The proposed cross-correlation network integrates these two modules and is trained in an end-to-end manner. The simple parameter-free self-attention extracts the internal structure of each image, while the cross-attention computes the cross-correlation between images, further extracting and fusing features across images. Compared with existing few-shot infrared target classification models, this model focuses on the geometric structure and thermal texture information of infrared images by modeling the semantic relevance between the features of the support set and the query set, thus attending better to the target objects. Experimental results show that this method outperforms existing infrared aircraft classification methods in various classification tasks, with the largest improvement in classification accuracy exceeding 3%. In addition, ablation and comparative experiments further confirm the effectiveness of the method.
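As an illustration of the cross-correlation idea, the following minimal PyTorch sketch computes scaled dot-product cross-attention between a query feature map and a support feature map; the dimensions and layout are assumptions for illustration, not the paper's exact parameter-free module.

```python
import torch

def cross_attention(query_feat, support_feat):
    """Re-weight query features by their correlation with a support image.

    query_feat, support_feat: (C, H, W) feature maps from a shared backbone.
    """
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, h * w).t()                 # (HW, C) queries
    k = support_feat.reshape(c, h * w).t()               # (HW, C) keys
    v = k                                                # values come from the support map
    attn = torch.softmax(q @ k.t() / c ** 0.5, dim=-1)   # (HW, HW) cross-correlation map
    fused = attn @ v                                     # aggregate support context per query position
    return fused.t().reshape(c, h, w)

# Toy usage with two 64-channel 5x5 feature maps.
q_map, s_map = torch.randn(64, 5, 5), torch.randn(64, 5, 5)
print(cross_attention(q_map, s_map).shape)               # torch.Size([64, 5, 5])
```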
The existing few-shot learning (FSL) approaches based on metric learning usually pay little attention to the distinct contributions of individual features, and the importance of each sample is often ignored when obtaining the class representation, which limits model performance. The similarity metric itself also deserves attention. Therefore, a few-shot learning approach called MWNet, based on multi-attention fusion and weighted class representation (WCR), is proposed in this paper. First, a multi-attention fusion module is introduced into the model to highlight the valuable parts of a feature and reduce the interference of irrelevant content. Then, when obtaining the class representation, a weight is assigned to each support-set sample, and the weighted class representation is used to better express the class. Moreover, a mutual similarity metric is used to obtain a more accurate similarity relationship for each representation. Experiments show that the proposed approach performs well in few-shot image classification and is competitive with related state-of-the-art techniques.
Target recognition based on deep learning relies on a large quantity of samples, but in some specific remote sensing scenes, samples are very rare. Few-shot learning can obtain high-performance target classification models using only a few samples, but most existing research addresses natural scenes. Therefore, this paper proposes a metric-based few-shot classification technique for remote sensing. First, we constructed a dataset (RSD-FSC) for few-shot classification in remote sensing, containing sample slices of 21 typical target classes from remote sensing images. Second, based on metric learning, a k-nearest neighbor classification network is proposed to find multiple training samples similar to a testing target; the similarity between the testing target and these similar samples is then used to classify it. Finally, 5-way 1-shot, 5-way 5-shot and 5-way 10-shot experiments are conducted to improve the generalization of the model on few-shot classification tasks. The experimental results show that, for newly emerged classes with few samples, when the number of training samples is 1, 5 and 10, the average target recognition accuracy reaches 59.134%, 82.553% and 87.796%, respectively. This demonstrates that the proposed method can handle few-shot classification in remote sensing images and performs better than other few-shot classification methods.
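A similarity-weighted k-nearest-neighbor episode classifier of the kind described above can be sketched as follows; the backbone embeddings are simulated with random tensors, and the value of k and the voting scheme are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def knn_episode_classify(support_emb, support_labels, query_emb, k=3):
    """Classify query embeddings by cosine similarity to their k nearest support samples.

    support_emb: (N_s, D), support_labels: (N_s,), query_emb: (N_q, D).
    """
    s = F.normalize(support_emb, dim=1)
    q = F.normalize(query_emb, dim=1)
    sims = q @ s.t()                              # (N_q, N_s) cosine similarities
    topk_sims, topk_idx = sims.topk(k, dim=1)
    preds = []
    for row_sims, row_idx in zip(topk_sims, topk_idx):
        votes = {}
        for sim, idx in zip(row_sims, row_idx):
            lbl = int(support_labels[idx])
            votes[lbl] = votes.get(lbl, 0.0) + float(sim)   # similarity-weighted vote
        preds.append(max(votes, key=votes.get))
    return torch.tensor(preds)

# 5-way 5-shot toy episode with 128-d embeddings from any backbone.
support = torch.randn(25, 128)
labels = torch.arange(5).repeat_interleave(5)
queries = torch.randn(10, 128)
print(knn_episode_classify(support, labels, queries, k=3))
```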
In recent decades, the proliferation of email communication has markedly escalated, resulting in a concomitant surge in spam emails that congest networks and present security risks. This study introduces an innovative spam detection method utilizing the Horse Herd Optimization Algorithm (HHOA), designed for binary classification within a multi-objective framework. The method proficiently identifies essential features, minimizing redundancy and improving classification precision. The suggested HHOA attained an accuracy of 97.21% on the Kaggle email dataset, with a precision of 94.30%, recall of 90.50%, and F1-score of 92.80%. Compared to conventional techniques, such as Support Vector Machine (93.89% accuracy), Random Forest (96.14% accuracy), and K-Nearest Neighbours (92.08% accuracy), HHOA exhibited enhanced performance with reduced computational complexity. The suggested method demonstrated enhanced feature selection efficiency, decreasing the number of selected features while maintaining high classification accuracy. The results underscore the efficacy of HHOA in spam identification and indicate its potential for further application in practical email filtering systems.
Visual diagnosis of skin cancer is challenging due to subtle inter-class similarities, variations in skin texture, the presence of hair, and inconsistent illumination. Deep learning models have shown promise in assisting early detection, yet their performance is often limited by the severe class imbalance present in dermoscopic datasets. This paper proposes CANNSkin, a skin cancer classification framework that integrates a convolutional autoencoder with latent-space oversampling to address this imbalance. The autoencoder is trained to reconstruct lesion images, and its latent embeddings are used as features for classification. To enhance minority-class representation, the Synthetic Minority Oversampling Technique (SMOTE) is applied directly to the latent vectors before classifier training. The encoder and classifier are first trained independently and later fine-tuned end-to-end. On the HAM10000 dataset, CANNSkin achieves an accuracy of 93.01%, a macro-F1 of 88.54%, and an ROC-AUC of 98.44%, demonstrating strong robustness across ten test subsets. Evaluation on the more complex ISIC 2019 dataset further confirms the model's effectiveness, where CANNSkin achieves 94.27% accuracy, 93.95% precision, 94.09% recall, and 99.02% F1-score, supported by high reconstruction fidelity (PSNR 35.03 dB, SSIM 0.86). These results demonstrate the effectiveness of our proposed latent-space balancing and fine-tuned representation learning as a new benchmark method for robust and accurate skin cancer classification across heterogeneous datasets.
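The latent-space balancing step can be illustrated with imbalanced-learn's SMOTE applied to encoder embeddings; in the sketch below, random vectors and a logistic-regression head stand in for the trained autoencoder and the paper's classifier.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression

# latents: (N, D) encoder embeddings of the training images; y: class labels.
# Random placeholders are used here in place of a trained autoencoder's output.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 64))
y = rng.choice(7, size=1000, p=[0.55, 0.2, 0.1, 0.06, 0.04, 0.03, 0.02])  # imbalanced labels

# Oversample minority classes directly in the latent space, then train the classifier head.
latents_bal, y_bal = SMOTE(random_state=0).fit_resample(latents, y)
clf = LogisticRegression(max_iter=1000).fit(latents_bal, y_bal)
print(np.bincount(y), "->", np.bincount(y_bal))   # class counts before and after balancing
```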
Background: Accurate classification of brain tumors from Magnetic Resonance Imaging (MRI) is essential for clinical decision-making but remains challenging due to tumor heterogeneity. Existing approaches often focus solely on classification or treat segmentation and classification as separate tasks, limiting overall performance and interpretability. Methods: This study proposes an end-to-end automated framework that integrates optimized tumor localization with multiclass classification. An optimized segmentation model is first employed to generate tumor masks, which are then overlaid on MRI scans to produce attention-enhanced inputs. These inputs are subsequently used to train a convolutional neural network (CNN) classifier. Experiments were conducted on a public dataset comprising 4,237 MRI scans across four categories: normal, glioma, meningioma, and pituitary tumors. Results: Three widely used segmentation models were systematically evaluated, with an optimized U-Net achieving the best performance (accuracy = 0.9939, Dice = 0.8893). Segmentation-guided classification consistently improved performance across six CNN architectures, with the most notable gains observed in heterogeneous tumor types such as glioma and meningioma. Among the classifiers, EfficientNet-V2 achieved the highest performance, with an accuracy of 0.9835, precision of 0.9858, recall of 0.9804, and F1-score of 0.9828. The framework was further validated on an independent external dataset, demonstrating consistent performance and robustness across diverse MRI sources. Conclusion: The proposed framework demonstrates strong potential for multiclass brain tumor classification by effectively combining segmentation and classification. This segmentation-driven approach not only enhances predictive accuracy but also improves interpretability, making it more suitable for clinical applications.
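A plausible way to realize the mask overlay is to attenuate non-tumor pixels so the classifier's input emphasizes the lesion; the specific weighting below is an assumption for illustration, not necessarily the paper's exact overlay scheme.

```python
import numpy as np

def attention_enhanced_input(mri_slice, tumor_mask, background_weight=0.3):
    """Overlay a binary segmentation mask on an MRI slice.

    Pixels inside the predicted tumor mask keep full intensity while the background
    is attenuated, steering the downstream classifier toward the lesion region.
    mri_slice: (H, W) float array in [0, 1]; tumor_mask: (H, W) binary array.
    """
    weights = np.where(tumor_mask > 0, 1.0, background_weight)
    return mri_slice * weights

# Toy example: a 4x4 scan with a 2x2 "tumor" region.
scan = np.full((4, 4), 0.8)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
print(attention_enhanced_input(scan, mask))
```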
Arrhythmias are a frequently occurring phenomenon in clinical practice, but accurately distinguishing subtle rhythm abnormalities remains an ongoing difficulty for the research community in ECG-based studies. From a review of existing studies, two main factors appear to contribute to this problem: the uneven distribution of arrhythmia classes and the limited expressiveness of the features learned by current models. To overcome these limitations, this study proposes a dual-path multimodal framework, termed DM-EHC (Dual-Path Multimodal ECG Heartbeat Classifier), for ECG-based heartbeat classification. The proposed framework links 1D ECG temporal features with 2D time-frequency features. With these dual paths, the model can process more dimensions of feature information. The MIT-BIH arrhythmia database was selected as the baseline dataset for the experiments. Experimental results show that the proposed method outperforms single modalities and performs better for certain specific types of arrhythmias. The model achieved a mean precision, recall, and F1 score of 95.14%, 92.26%, and 93.65%, respectively. These results indicate that the framework is robust and has potential value in automated arrhythmia classification.
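A minimal PyTorch skeleton of the dual-path idea, with a 1D branch over the raw beat and a 2D branch over its time-frequency image, might look like the following; the layer sizes, pooling choices, and class count are placeholders rather than the DM-EHC architecture.

```python
import torch
import torch.nn as nn

class DualPathHeartbeatClassifier(nn.Module):
    """Two branches: 1D convolution over the raw beat, 2D convolution over its spectrogram."""

    def __init__(self, n_classes=5):
        super().__init__()
        self.branch_1d = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())          # -> (B, 16)
        self.branch_2d = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> (B, 16)
        self.head = nn.Linear(32, n_classes)

    def forward(self, beat_1d, spectrogram_2d):
        fused = torch.cat([self.branch_1d(beat_1d), self.branch_2d(spectrogram_2d)], dim=1)
        return self.head(fused)

model = DualPathHeartbeatClassifier()
logits = model(torch.randn(8, 1, 180), torch.randn(8, 1, 32, 32))  # batch of 8 beats
print(logits.shape)  # torch.Size([8, 5])
```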
Near-Earth objects are important not only in studying the early formation of the Solar System, but also because they pose a serious hazard to humanity when they make close approaches to the Earth. Study of their physical properties can provide useful information on their origin, evolution, and hazard to human beings. However, it remains challenging to investigate small, newly discovered near-Earth objects because of our limited observational window. This investigation seeks to determine the visible colors of near-Earth asteroids (NEAs), perform an initial taxonomic classification based on visible colors, and analyze possible correlations between the distribution of taxonomic classes and asteroid size or orbital parameters. Observations were performed in the broadband BVRI Johnson-Cousins photometric system, applied to images from the Yaoan High Precision Telescope and the 1.88 m telescope at the Kottamia Astronomical Observatory. We present new photometric observations of 84 near-Earth asteroids, and classify 80 of them taxonomically, based on their photometric colors. We find that nearly half (46.3%) of the objects in our sample can be classified as S-complex, 26.3% as C-complex, 6% as D-complex, and 15.0% as X-complex; the remaining belong to the A- or V-types. Additionally, we identify three P-type NEAs in our sample, according to the Tholen scheme. The fractional abundances of the C/X-complex members with absolute magnitude H >= 17.0 were more than twice as large as those with H < 17.0. However, the fractions of C- and S-complex members with diameters <= 1 km and > 1 km are nearly equal, while X-complex members tend to have sub-kilometer diameters. In our sample, the C/D-complex objects are predominant among those with a Jovian Tisserand parameter of T_J < 3.1. These bodies could have a cometary origin. C- and S-complex members account for a considerable proportion of the asteroids that are potentially hazardous.
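The Jovian Tisserand parameter behind the T_J < 3.1 criterion follows the standard formula T_J = a_J/a + 2*cos(i)*sqrt((a/a_J)*(1 - e^2)); the orbit values in the example below are illustrative only.

```python
import math

def tisserand_jupiter(a_au, e, i_deg, a_jupiter=5.204):
    """Tisserand parameter with respect to Jupiter.

    a_au: semi-major axis in AU, e: eccentricity, i_deg: inclination in degrees.
    """
    i = math.radians(i_deg)
    return a_jupiter / a_au + 2.0 * math.cos(i) * math.sqrt((a_au / a_jupiter) * (1.0 - e * e))

# Example: an eccentric NEA-like orbit; T_J < 3.1 is the threshold discussed above
# for a possible cometary origin.
print(round(tisserand_jupiter(a_au=2.6, e=0.75, i_deg=12.0), 3))   # ~2.92, below 3.1
```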
Graph Neural Networks (GNNs) have proven highly effective for graph classification across diverse fields such as social networks, bioinformatics, and finance, due to their capability to learn complex graph structures. However, despite their success, GNNs remain vulnerable to adversarial attacks that can significantly degrade their classification accuracy. Existing adversarial attack strategies primarily rely on label information to guide the attacks, which limits their applicability in scenarios where such information is scarce or unavailable. This paper introduces an unsupervised attack method for graph classification that operates without relying on label information, thereby enhancing its applicability in a broad range of scenarios. Specifically, our method first leverages a graph contrastive learning loss to learn high-quality graph embeddings by comparing different stochastically augmented views of the graphs. To effectively perturb the graphs, we then introduce an implicit estimator that measures the impact of various modifications on graph structures. The proposed strategy identifies and flips the edges with the top-K highest scores, determined by the estimator, to maximize the degradation of the model's performance. In addition, to defend against such attacks, we propose a lightweight regularization-based defense mechanism that is specifically tailored to mitigate the structural perturbations introduced by our attack strategy. It enhances model robustness by enforcing embedding consistency and edge-level smoothness during training. We conduct experiments on six public TU graph classification datasets: NCI1, NCI109, Mutagenicity, ENZYMES, COLLAB, and DBLP_v1, to evaluate the effectiveness of our attack and defense strategies. Under an attack budget of 3, the maximum reduction in model accuracy reaches 6.67% on the Graph Convolutional Network (GCN) and 11.67% on the Graph Attention Network (GAT) across different datasets, indicating that our unsupervised method induces degradation comparable to state-of-the-art supervised attacks. Meanwhile, our defense achieves an accuracy recovery of up to 3.89% (GCN) and 5.00% (GAT), demonstrating improved robustness against structural perturbations.
In the context of rural revitalization and the development of smart agriculture, image classification technology based on deep learning has emerged as a crucial tool for digital monitoring and intelligent prevention and control of agricultural diseases. This paper provides a systematic review of the evolutionary development of algorithms within this field. Addressing challenges such as domain drift and limited global awareness in classical convolutional neural networks (CNNs) applied to complex agricultural environments, the paper focuses on the latest advancements in vision transformers (ViT) and their hybrid architectures to enhance cross-domain robustness and fine-grained recognition capabilities. In response to the challenges posed by scarce long-tail data and limited edge computing power in real-world scenarios, the paper explores solutions related to few-shot learning and ultra-lightweight network deployment. Finally, a forward-looking analysis is presented on the application paradigms of multimodal feature fusion, vision-based large models, and explainable artificial intelligence (AI) within smart plant protection. This analysis aims to offer theoretical insights for the development of efficient and transparent intelligent diagnostic systems for agricultural diseases, thereby supporting the advancement of digital agriculture and the construction of a robust agricultural nation.
With the recent increase in data volume and diversity, traditional text representation techniques are struggling to capture context, particularly in environments with sparse data. To address these challenges, this study proposes a new model, the Masked Joint Representation Model (MJRM). MJRM approximates the original hypothesis by leveraging multiple elements in a limited context. It dynamically adapts to changes in characteristics based on the data distribution through three main components. First, masking-based representation learning, termed selective dynamic masking, integrates topic modeling and sentiment clustering to generate and train multiple instances across different data subsets, whose predictions are then aggregated with optimized weights. This design alleviates sparsity, suppresses noise, and preserves contextual structure. Second, regularization-based improvements are applied. Third, techniques for addressing sparse data are used to perform final inference. As a result, MJRM improves performance by up to 4% compared to existing AI techniques. In our experiments, we analyzed the contribution of each factor, demonstrating that masking, dynamic learning, and aggregating multiple instances complement each other to improve performance. This shows that a masking-based multi-learning strategy is effective for context-aware sparse text classification and can remain useful even in challenging situations such as data shortage or shifts in data distribution. We expect that the approach can be extended to diverse fields such as sentiment analysis, spam filtering, and domain-specific document classification.
Skin diseases affect millions worldwide. Early detection is key to preventing disfigurement, lifelong disability, or death. Dermoscopic images acquired in primary-care settings show high intra-class visual similarity and severe class imbalance, and occasional imaging artifacts can create ambiguity for state-of-the-art convolutional neural networks (CNNs). We frame skin lesion recognition as graph-based reasoning and, to ensure fair evaluation and avoid data leakage, adopt a strict lesion-level partitioning strategy. Each image is first over-segmented using SLIC (Simple Linear Iterative Clustering) to produce perceptually homogeneous superpixels. These superpixels form the nodes of a region-adjacency graph whose edges encode spatial continuity. Node attributes are 1280-dimensional embeddings extracted with a lightweight yet expressive EfficientNet-B0 backbone, providing strong representational power at modest computational cost. The resulting graphs are processed by a five-layer Graph Attention Network (GAT) that learns to weight inter-node relationships dynamically and aggregates multi-hop context before classifying lesions into seven classes with a log-softmax output. Extensive experiments on the DermaMNIST benchmark show that the proposed pipeline achieves 88.35% accuracy and 98.04% AUC, outperforming contemporary CNNs, AutoML approaches, and alternative graph neural networks. An ablation study indicates that EfficientNet-B0 produces superior node descriptors compared with ResNet-18 and DenseNet, and that roughly five GAT layers strike a good balance between overly shallow and overly deep models while avoiding oversmoothing. The method requires no data augmentation or external metadata, making it a drop-in upgrade for clinical computer-aided diagnosis systems.
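The superpixel-graph construction can be sketched with scikit-image's SLIC; in the sketch below, mean RGB colors stand in for the EfficientNet-B0 node embeddings, and the segment count and compactness are illustrative assumptions.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=75):
    """Over-segment an image with SLIC and build a region-adjacency edge list.

    Nodes are superpixels; an edge links two superpixels that share a pixel boundary.
    Node features here are mean RGB colors (the paper uses CNN embeddings instead).
    """
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    # Relabel so node ids are consecutive 0..n-1 regardless of how many segments SLIC kept.
    _, labels = np.unique(labels, return_inverse=True)
    labels = labels.reshape(image.shape[:2])
    n_nodes = labels.max() + 1
    feats = np.array([image[labels == i].mean(axis=0) for i in range(n_nodes)])
    edges = set()
    # Horizontally and vertically adjacent pixels with different labels define graph edges.
    for dy, dx in ((0, 1), (1, 0)):
        a = labels[:labels.shape[0] - dy, :labels.shape[1] - dx]
        b = labels[dy:, dx:]
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                edges.add((min(u, v), max(u, v)))
    return feats, sorted(edges)

image = np.random.rand(64, 64, 3)                 # stand-in for a dermoscopy image
feats, edges = superpixel_graph(image)
print(feats.shape, len(edges))
```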
Classifying job offers into occupational categories is a fundamental task in human resource information systems, as it improves and streamlines indexing, search, and matching between openings and job seekers. Comprehensive occupational databases such as O*NET or ESCO provide detailed taxonomies of interrelated positions that can be leveraged to align the textual content of postings with occupational categories, thereby facilitating standardization, cross-system interoperability, and access to metadata for each occupation (e.g., tasks, knowledge, skills, and abilities). In this work, we explore the effectiveness of fine-tuning existing language models (LMs) to classify job offers with occupational descriptors from O*NET. This enables a more precise assessment of candidate suitability by identifying the specific knowledge and skills required for each position, and helps automate recruitment processes by mitigating human bias and subjectivity in candidate selection. We evaluate three representative BERT-like models: BERT, RoBERTa, and DeBERTa. BERT serves as the baseline encoder-only architecture; RoBERTa incorporates advances in pretraining objectives and data scale; and DeBERTa introduces architectural improvements through disentangled attention mechanisms. The best performance was achieved with the DeBERTa model, although the other models also produced strong results, and no statistically significant differences were observed across models. We also find that these models typically reach optimal performance after only a few training epochs, and that training with smaller, balanced datasets is effective. Consequently, comparable results can be obtained with models that require fewer computational resources and less training time, facilitating deployment and practical use.
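A minimal Hugging Face fine-tuning sketch for this kind of job-offer classification is shown below; the two-example corpus, the label mapping, and the hyperparameters are placeholders, and the model name can be swapped among BERT, RoBERTa, and DeBERTa checkpoints for comparison.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Tiny illustrative corpus; real training would use job postings labeled with O*NET codes.
data = Dataset.from_dict({
    "text": ["Develop backend services in Java", "Prepare monthly financial statements"],
    "label": [0, 1],   # e.g., 0 = software developer, 1 = accountant (hypothetical mapping)
})

model_name = "microsoft/deberta-v3-base"   # or "bert-base-uncased" / "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="onet-classifier", num_train_epochs=3,
                         per_device_train_batch_size=8, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=tokenized).train()
```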
Container transportation is pivotal in global trade due to its efficiency, safety, and cost-effectiveness. However, structural defects, particularly in grapple slots, can result in cargo damage, financial loss, and elevated safety risks, including container drops during lifting operations. Timely and accurate inspection before and after transit is therefore essential. Traditional inspection methods rely heavily on manual observation of internal and external surfaces, which is time-consuming, resource-intensive, and prone to subjective errors. Container roofs pose additional challenges due to limited visibility, while grapple slots are especially vulnerable to wear from frequent use. This study proposes a two-stage automated detection framework targeting defects in container roof grapple slots. In the first stage, YOLOv7 is employed to localize grapple slot regions with high precision. In the second stage, ResNet50 classifies the extracted slots as either intact or defective. The results from both stages are integrated into a human-machine interface for real-time visualization and user verification. Experimental evaluations demonstrate that YOLOv7 achieves a 99% detection rate at 100 frames per second (FPS), while ResNet50 attains 87% classification accuracy at 34 FPS. Compared to state-of-the-art methods, the proposed system offers significant improvements in speed, reliability, and usability, enabling efficient defect identification and visual reconfirmation via the interface.
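The detect-then-classify structure can be outlined as below, with a torchvision ResNet50 classifying crops returned by the stage-1 detector; the YOLOv7 inference itself is left as an external step, and the binary head here is untrained, so this is only a pipeline sketch rather than the paper's system.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Stage-2 classifier: ResNet50 backbone with a binary (intact / defective) head.
classifier = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 2)   # untrained head in this sketch
classifier.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_slots(roof_image: Image.Image, slot_boxes):
    """Crop each detected grapple-slot box (from the stage-1 detector) and classify it.

    slot_boxes: list of (x1, y1, x2, y2) pixel boxes produced by YOLOv7 in stage 1.
    """
    results = []
    for box in slot_boxes:
        crop = preprocess(roof_image.crop(box)).unsqueeze(0)
        with torch.no_grad():
            pred = classifier(crop).argmax(dim=1).item()
        results.append("defective" if pred == 1 else "intact")
    return results

# Usage: the boxes would come from a YOLOv7 inference call on the roof image, e.g.
# print(classify_slots(Image.open("roof.jpg"), [(120, 40, 260, 180)]))
```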
Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) During the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification. (2) In the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework on the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance the field of point cloud classification in a secure environment.
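The server-side averaging that FDASS builds on is the standard FedAvg step, sketched here; the skip-aggregation and reallocation logic described above is not shown, and the tiny linear model is a stand-in for MRFCM.

```python
import copy
import torch

def federated_average(client_state_dicts, client_weights=None):
    """Weighted average of client model parameters (the FedAvg aggregation step).

    client_state_dicts: state_dicts from locally trained copies of the same model.
    client_weights: optional per-client weights, e.g., proportional to local sample counts.
    """
    n = len(client_state_dicts)
    if client_weights is None:
        client_weights = [1.0 / n] * n
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key].float() for w, sd in zip(client_weights, client_state_dicts))
    return avg

# Toy usage with three clients sharing a tiny linear model.
clients = [torch.nn.Linear(8, 4) for _ in range(3)]
global_state = federated_average([c.state_dict() for c in clients], [0.5, 0.3, 0.2])
global_model = torch.nn.Linear(8, 4)
global_model.load_state_dict(global_state)
```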
Accurate, up-to-date, and timely information about a disaster helps disaster management teams and authorities mount a quick, easy, and cost-effective response, enhancing rescue operations and reducing the potential loss of life, property, and financial assets. Because infrastructure in disaster-affected areas is often damaged, social media is frequently the only way to share and exchange real-time information. 'X' (formerly Twitter) has therefore become a major platform for disseminating real-time information during disaster events or emergencies such as floods and earthquakes. Rapid identification of actionable content is critical for effective humanitarian response; however, the brief and noisy nature of tweets makes automated classification challenging. To tackle this problem, this study proposes a hybrid classification framework that integrates term frequency-inverse document frequency (TF-IDF) features with graph convolutional networks (GCNs) to enhance disaster-related tweet analysis. The proposed model performs three classification tasks: identifying disaster-related tweets (achieving 94.47% accuracy), categorizing disaster types (earthquake, flood, and non-disaster) with 91.78% accuracy, and detecting aid requests such as food, donations, and medical assistance (94.64% accuracy). By combining the statistical strengths of TF-IDF with the relational learning capabilities of GCNs, the model attains high accuracy while maintaining computational efficiency and interpretability. The results demonstrate the framework's strong potential for real-time disaster response, offering valuable insights to support emergency management systems and humanitarian decision-making.
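One way to picture the TF-IDF/GCN combination is a single normalized-adjacency propagation step over a tweet-similarity graph built from TF-IDF vectors, as sketched below; a trainable GCN would add weight matrices and nonlinearities, and the toy tweets and labels are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import kneighbors_graph

tweets = ["flood water rising near the river bank", "earthquake shook the city tonight",
          "need food and medical aid urgently", "great weather for a picnic today"]
labels = [1, 1, 1, 0]   # toy labels: 1 = disaster-related, 0 = not

# 1) Statistical features.
X = TfidfVectorizer().fit_transform(tweets).toarray()

# 2) Build a tweet-similarity graph with self-loops and do one GCN-style propagation step
#    H = D^{-1/2} A D^{-1/2} X, which smooths each tweet's features over its neighbors.
A = kneighbors_graph(X, n_neighbors=2, include_self=True).toarray()
A = np.maximum(A, A.T)                                  # symmetrize
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
H = D_inv_sqrt @ A @ D_inv_sqrt @ X

# 3) Any classifier head can sit on top of the propagated features.
clf = LogisticRegression().fit(H, labels)
print(clf.predict(H))
```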
Evaluating the adversarial robustness of classification algorithms in machine learning is a crucial task. However, current methods lack measurable and interpretable metrics. To address this issue, this paper introduces a visual evaluation index named the confidence centroid skewing quadrilateral, which is based on a classification-confidence-based confusion matrix. It offers a quantitative and visual comparison of the adversarial robustness of different classification algorithms and enhances the intuitiveness and interpretability of attack impacts. We first conduct a validity test and a sensitivity analysis of the method. We then demonstrate its effectiveness through experiments on five classification algorithms, including an artificial neural network (ANN), logistic regression (LR), a support vector machine (SVM), a convolutional neural network (CNN) and a transformer, against three adversarial attacks: the fast gradient sign method (FGSM), DeepFool, and the projected gradient descent (PGD) attack.
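Of the three attacks evaluated, FGSM is the simplest to sketch; the snippet below implements the standard single-step perturbation, with a toy linear model standing in for the classifiers under test.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method: perturb the input in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a small linear classifier on a single 28x28 image.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by epsilon
```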
Accurate soil classification is essential for pavement design; however, the traditional American Association of State Highway and Transportation Officials (AASHTO) classification system relies on extensive laboratory testing and subjective judgment. This study presents an artificial intelligence (AI) enhanced framework for AASHTO soil classification. A synthetic dataset of 349,015 samples was generated using parameter ranges for five AASHTO input variables to support model development. Four machine learning models were trained, analyzed, and compared; the random forest (RF) consistently achieved the highest accuracy, 100%, in predicting AASHTO soil groups. Feature importance analysis indicates that the percent passing the No. 200 sieve is the most influential factor. The models also remain reliable under partial input loss, although accuracy is most sensitive to the absence of the percent passing the No. 200 sieve, dropping to 85.8%, while omitting any other variable maintains an accuracy of at least 93.1%. Prediction uncertainty assessed with Monte Carlo simulations shows model performance within a 95% confidence interval. Overall, the proposed AI models can accurately and efficiently predict AASHTO soil groups, even from incomplete datasets, for geotechnical engineering applications.
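A random-forest classifier over five AASHTO-style inputs can be sketched with scikit-learn as below; the feature set (assumed here to be the percent passing the No. 10, No. 40, and No. 200 sieves plus liquid limit and plasticity index), the synthetic values, and the crude granular/silt-clay label rule are placeholders for the paper's 349,015-sample dataset and full AASHTO group labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = np.column_stack([
    rng.uniform(0, 100, 2000),   # % passing No. 10 sieve
    rng.uniform(0, 100, 2000),   # % passing No. 40 sieve
    rng.uniform(0, 100, 2000),   # % passing No. 200 sieve
    rng.uniform(0, 80, 2000),    # liquid limit
    rng.uniform(0, 40, 2000),    # plasticity index
])
# Crude stand-in label: AASHTO splits granular (0) from silt-clay (1) at 35% passing No. 200.
y = (X[:, 2] > 35).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
print("accuracy:", rf.score(X_te, y_te))
print("feature importances:", rf.feature_importances_.round(3))
```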
With the evolution of next-generation network technologies, the complexity of network management has significantly increased, and the means of network attack have diversified, bringing new challenges to network traffic classification. This paper presents a general AI-driven network traffic classification workflow and elaborates a traffic data and feature engineering framework. Most importantly, it analyzes the concept and causes of data distribution shifts in network traffic, proposing detection methods and countermeasures. Experimental results on real traffic collected at different time intervals show that application evolution can induce data distribution shifts, which in turn lead to a noticeable degradation in traffic classification performance. Comparative drift detection experiments further confirm that such shifts are more evident over long-term intervals, while short-term traffic remains relatively stable. These findings demonstrate the necessity of incorporating drift-aware mechanisms into AI-driven network traffic classification systems.
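A simple way to flag the kind of distribution shift discussed here is a per-feature two-sample Kolmogorov-Smirnov test between traffic captured in different time windows, as sketched below; the feature matrices and significance level are illustrative assumptions rather than the paper's detection method.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference, current, alpha=0.01):
    """Flag per-feature distribution shift between two traffic captures.

    reference, current: (N, D) arrays of flow features from different time windows.
    Returns indices of features whose KS test rejects the 'same distribution' hypothesis.
    """
    drifted = []
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < alpha:
            drifted.append(j)
    return drifted

# Toy example: feature 1 shifts between the two collection periods.
rng = np.random.default_rng(0)
old = rng.normal(0, 1, size=(500, 3))
new = rng.normal(0, 1, size=(500, 3))
new[:, 1] += 0.8        # simulated application evolution
print(detect_feature_drift(old, new))   # expected: [1]
```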
Legal case classification involves the categorization of legal documents into predefined categories, which facilitates legal information retrieval and case management. However, real-world legal datasets often suffer from class imbalance due to the uneven distribution of case types across legal domains. This leads to biased model performance, with high accuracy for overrepresented categories and underperformance for minority classes. To address this issue, we propose a data augmentation method that selectively masks unimportant terms within a document while preserving key terms from the perspective of the legal domain. This approach enhances data diversity and improves the generalization capability of conventional models. Our experiments demonstrate consistent improvements achieved by the proposed augmentation strategy in terms of accuracy and F1 score across all models, validating the effectiveness of the proposed method for legal case classification.
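The selective masking idea can be approximated by masking the lowest-TF-IDF tokens of each document, as in the sketch below; TF-IDF stands in for the paper's legal-domain importance measure, and the keep ratio and toy corpus are illustrative choices.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def mask_unimportant_terms(doc, vectorizer, keep_ratio=0.7, mask_token="[MASK]"):
    """Mask low-TF-IDF tokens so that the most informative terms are preserved.

    Produces an augmented copy of the document in which the least informative
    tokens are replaced by a mask token.
    """
    tokens = doc.split()
    scores = dict(zip(vectorizer.get_feature_names_out(),
                      vectorizer.transform([doc]).toarray()[0]))
    ranked = sorted(tokens, key=lambda t: scores.get(t.lower(), 0.0), reverse=True)
    keep = set(ranked[: int(len(tokens) * keep_ratio)])
    return " ".join(t if t in keep else mask_token for t in tokens)

corpus = ["the defendant breached the lease agreement and owes unpaid rent",
          "the appellant seeks custody of the minor child after the divorce"]
vec = TfidfVectorizer().fit(corpus)
print(mask_unimportant_terms(corpus[0], vec))
```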
Funding: Supported by the National Pre-research Program during the 14th Five-Year Plan (514010405).
Funding: Supported by the National Natural Science Foundation of China (No. 61171131) and the Key R&D Program of Shandong Province (No. YD01033).
Funding: This work was supported in part by the CETC Key Laboratory of Aerospace Information Applications under Grant No. SXX19629X060.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2601).
Funding: Supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. IITP-2026-2020-0-01741) and the research fund of Hanyang University (HY-2025-1110).
Funding: Funded by the China National Space Administration (KJSP2023020105); supported by the National Key R&D Program of China (Grant No. 2023YFA1608100), the NSFC (Grant No. 62227901), and the Minor Planet Foundation; supported by the Egyptian Science, Technology & Innovation Funding Authority (STDF) under Grant No. 48102.
Funding: Funded by the National Key Research and Development Program of China (Grant No. 2024YFE0209000) and the NSFC (Grant No. U23B2019).
Funding: Supported by a School-level Project of Shaoyang Industry Polytechnic College (SKY24A06), the Science and Technology Plan (Special Fund Subsidy) of Shaoyang City (2024PT4070), and a General Research Project of the Hunan Provincial Department of Education in 2025 (25C1457).
Funding: Supported by SungKyunKwan University and the BK21 FOUR (Graduate School Innovation) program funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2025-02-01296).
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2021YFB3101100; in part by the National Natural Science Foundation of China under Grants 42461057, 62272123, and 42371470; in part by the Fundamental Research Program of Shanxi Province under Grant 202303021212164; and in part by the Postgraduate Education Innovation Program of Shanxi Province under Grant 2024KY474.
Abstract: Recently, large-scale deep learning models have been increasingly adopted for point cloud classification. However, these methods typically require collecting extensive datasets from multiple clients, which may lead to privacy leaks. Federated learning provides an effective solution to data leakage by eliminating the need for data transmission, relying instead on the exchange of model parameters. However, the uneven distribution of client data can still affect the model's ability to generalize effectively. To address these challenges, we propose a new framework for point cloud classification called the Federated Dynamic Aggregation Selection Strategy-based Multi-Receptive Field Fusion Classification Framework (FDASS-MRFCF). Specifically, we tackle these challenges with two key innovations: (1) during the client local training phase, we propose a Multi-Receptive Field Fusion Classification Model (MRFCM), which captures local and global structures in point cloud data through dynamic convolution and multi-scale feature fusion, enhancing the robustness of point cloud classification; (2) in the server aggregation phase, we introduce a Federated Dynamic Aggregation Selection Strategy (FDASS), which employs a hybrid strategy to average client model parameters, skip aggregation, or reallocate local models to different clients, thereby balancing global consistency and local diversity. We evaluate our framework on the ModelNet40 and ShapeNetPart benchmarks, demonstrating its effectiveness. The proposed method is expected to significantly advance point cloud classification in privacy-sensitive settings.
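A hedged sketch of the server-side aggregation step: size-weighted parameter averaging in the style of FedAvg, with a per-round switch that can skip aggregation or shuffle local models between clients. The actual FDASS selection criterion is not reproduced here; the `mode` argument is an illustrative stand-in.

```python
# Hedged sketch of hybrid server aggregation (average / skip / reallocate).
import copy
import random

def aggregate(global_model, client_states, client_sizes, mode="average"):
    """client_states: list of state_dicts; client_sizes: sample counts per client."""
    if mode == "skip":
        # Keep the previous global model; clients continue from their own local weights.
        return global_model.state_dict(), client_states
    if mode == "reallocate":
        # Exchange local models across clients to encourage diversity.
        shuffled = client_states[:]
        random.shuffle(shuffled)
        return global_model.state_dict(), shuffled
    # Default: size-weighted averaging of all client parameters.
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
    return avg, [copy.deepcopy(avg) for _ in client_states]
```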
Abstract: Accurate, up-to-date, and timely information about any disaster helps disaster management teams and authorities mount a quick, easy, and cost-effective response, enhancing rescue operations and reducing potential loss of life, financial risk, and property damage. Because infrastructure in disaster-affected areas is often damaged, social media is frequently the only channel for sharing real-time information. 'X' (formerly Twitter) has therefore become a major platform for disseminating real-time information during disaster events and emergencies such as floods and earthquakes. Rapid identification of actionable content is critical for effective humanitarian response; however, the brief and noisy nature of tweets makes automated classification challenging. To tackle this problem, this study proposes a hybrid classification framework that integrates term frequency-inverse document frequency (TF-IDF) features with graph convolutional networks (GCNs) to enhance disaster-related tweet analysis. The proposed model performs three classification tasks: identifying disaster-related tweets (achieving 94.47% accuracy), categorizing disaster types (earthquake, flood, and non-disaster) with 91.78% accuracy, and detecting aid requests such as food, donations, and medical assistance (94.64% accuracy). By combining the statistical strengths of TF-IDF with the relational learning capabilities of GCNs, the model attains high accuracy while maintaining computational efficiency and interpretability. The results demonstrate the framework's strong potential for real-time disaster response, offering valuable insights to support emergency management systems and humanitarian decision-making.
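A minimal sketch of the TF-IDF-plus-GCN idea: TF-IDF vectors serve as node features and a two-layer GCN propagates information over a tweet graph. How the paper builds its graph is not specified here, so edges between the most cosine-similar tweets are an illustrative assumption.

```python
# Hedged sketch: TF-IDF node features fed through a two-layer GCN over a k-NN tweet graph.
import torch
import torch.nn.functional as F
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from torch_geometric.nn import GCNConv

def build_graph(tweets, k=5):
    tfidf = TfidfVectorizer(max_features=5000).fit_transform(tweets)
    x = torch.tensor(tfidf.toarray(), dtype=torch.float)
    sim = cosine_similarity(tfidf)
    edges = []
    for i in range(len(tweets)):
        for j in sim[i].argsort()[-k - 1:]:      # top-k neighbours, excluding self below
            if i != int(j):
                edges.append((i, int(j)))
    edge_index = torch.tensor(edges, dtype=torch.long).t()
    return x, edge_index

class TweetGCN(torch.nn.Module):
    def __init__(self, in_dim, n_classes, hidden=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, n_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return F.log_softmax(self.conv2(x, edge_index), dim=-1)
```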
Abstract: Evaluating the adversarial robustness of classification algorithms is a crucial problem in machine learning. However, current methods lack measurable and interpretable metrics. To address this issue, this paper introduces a visual evaluation index named the confidence centroid skewing quadrilateral, which is based on a classification-confidence confusion matrix. It offers a quantitative and visual comparison of adversarial robustness among different classification algorithms and enhances the intuitiveness and interpretability of attack impacts. We first conduct a validity test and a sensitivity analysis of the method. We then demonstrate its effectiveness through experiments on five classification algorithms, namely artificial neural network (ANN), logistic regression (LR), support vector machine (SVM), convolutional neural network (CNN), and transformer, against three adversarial attacks: the fast gradient sign method (FGSM), DeepFool, and the projected gradient descent (PGD) attack.
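For context, a minimal sketch of the standard FGSM perturbation, the simplest of the three attacks the index is evaluated against, together with a plain before/after confidence comparison. The paper's confidence centroid skewing quadrilateral itself is not reconstructed here.

```python
# Hedged sketch: FGSM attack and a simple confidence-drop probe (not the paper's index).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x under an L-infinity budget of epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def confidence_drop(model, x, y, epsilon=0.03):
    """Compare softmax confidence on the true class before and after the attack."""
    with torch.no_grad():
        clean = F.softmax(model(x), dim=1).gather(1, y.unsqueeze(1))
    adv = fgsm(model, x, y, epsilon)
    with torch.no_grad():
        attacked = F.softmax(model(adv), dim=1).gather(1, y.unsqueeze(1))
    return (clean - attacked).mean().item()
```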
Abstract: Accurate soil classification is essential for pavement design; however, the traditional American Association of State Highway and Transportation Officials (AASHTO) classification system relies on extensive laboratory testing and subjective judgment. This study presents an artificial intelligence (AI) enhanced framework for AASHTO soil classification. A synthetic dataset of 349,015 samples was generated using parameter ranges for the five AASHTO input variables to support model development. Four machine learning models were trained, analyzed, and compared, with the random forest (RF) consistently achieving the highest accuracy of 100% in predicting AASHTO soil groups. Feature importance analysis indicates that percent passing the No. 200 sieve is the most influential factor. The models also remain reliable under partial input loss, although accuracy is most sensitive to the absence of percent passing the No. 200 sieve, dropping to 85.8%, while omission of any other variable maintains accuracy of at least 93.1%. Prediction uncertainty analysis using Monte Carlo simulations shows that model performance remains within a 95% confidence interval. Overall, the proposed AI models can accurately and efficiently predict AASHTO soil groups from incomplete datasets for geotechnical engineering.
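A hedged sketch of the random-forest setup on tabular AASHTO inputs. The feature names below (sieve percentages, liquid limit, plasticity index) are assumptions about the five input variables, and `synthetic_X`/`synthetic_y` stand in for the paper's generated dataset.

```python
# Hedged sketch: RF classifier over synthetic AASHTO inputs, with feature-importance ranking.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

features = ["pass_no10", "pass_no40", "pass_no200", "liquid_limit", "plasticity_index"]

def train_soil_classifier(synthetic_X, synthetic_y):
    """synthetic_X: (n_samples, 5) array; synthetic_y: AASHTO group labels such as 'A-2-4'."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        synthetic_X, synthetic_y, test_size=0.2, random_state=0
    )
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    # Importance ranking indicates which input drives the classification most
    # (the abstract reports percent passing the No. 200 sieve dominates).
    ranked = sorted(zip(features, rf.feature_importances_), key=lambda p: -p[1])
    return rf, rf.score(X_te, y_te), ranked
```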
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20220607009.
Abstract: With the evolution of next-generation network technologies, the complexity of network management has significantly increased and the means of network attack have diversified, bringing new challenges to network traffic classification. This paper presents a general AI-driven network traffic classification workflow and elaborates on a traffic data and feature engineering framework. Most importantly, it analyzes the concept and causes of data distribution shifts in network traffic and proposes detection methods and countermeasures. Experimental results on real traffic collected at different time intervals show that application evolution can induce data distribution shifts, which in turn lead to a noticeable degradation in traffic classification performance. Comparative drift detection experiments further confirm that such shifts are more evident over long-term intervals, while short-term traffic remains relatively stable. These findings demonstrate the necessity of incorporating drift-aware mechanisms into AI-driven network traffic classification systems.
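A minimal sketch of a drift-aware check of the kind this abstract argues for: a per-feature two-sample Kolmogorov-Smirnov test between a reference window and a newly collected window of flow features. The paper's actual detection methods are not reproduced here; this is one common, illustrative choice.

```python
# Hedged sketch: per-feature distribution-shift detection between two traffic windows.
from scipy.stats import ks_2samp

def detect_drift(reference, current, alpha=0.01):
    """reference, current: (n_samples, n_features) arrays of flow-level features."""
    drifted = []
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < alpha:
            drifted.append((j, stat))
    # A non-empty list suggests the traffic classifier should be retrained or recalibrated.
    return drifted
```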
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [RS-2021-II211341, Artificial Intelligence Graduate School Program (Chung-Ang University)], and by the Chung-Ang University Graduate Research Scholarship in 2024.
Abstract: Legal case classification involves the categorization of legal documents into predefined categories, which facilitates legal information retrieval and case management. However, real-world legal datasets often suffer from class imbalance due to the uneven distribution of case types across legal domains. This leads to biased model performance, with high accuracy for overrepresented categories and underperformance for minority classes. To address this issue, we propose a data augmentation method that selectively masks unimportant terms within a document while preserving terms that are key from the perspective of the legal domain. This approach enhances data diversity and improves the generalization capability of conventional models. Our experiments demonstrate consistent improvements from the proposed augmentation strategy in terms of accuracy and F1 score across all models, validating its effectiveness for legal case classification.
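A hedged sketch of selective masking augmentation. Term importance is proxied here by TF-IDF IDF scores, which is an assumption; the paper's legal-domain importance criterion may differ. Low-scoring tokens are replaced with a [MASK] placeholder to create additional training variants.

```python
# Hedged sketch: mask the least informative tokens to generate augmented legal documents.
import random
from sklearn.feature_extraction.text import TfidfVectorizer

def augment(documents, mask_ratio=0.15, seed=0):
    rng = random.Random(seed)
    vec = TfidfVectorizer().fit(documents)
    scores = dict(zip(vec.get_feature_names_out(), vec.idf_))
    augmented = []
    for doc in documents:
        tokens = doc.split()
        # Rank token positions by importance; only low-scoring ones are masking candidates.
        ranked = sorted(range(len(tokens)), key=lambda i: scores.get(tokens[i].lower(), 0.0))
        n_mask = int(len(tokens) * mask_ratio)
        candidates = ranked[: max(n_mask * 2, 1)]
        to_mask = set(rng.sample(candidates, min(n_mask, len(candidates))))
        augmented.append(" ".join("[MASK]" if i in to_mask else t
                                  for i, t in enumerate(tokens)))
    return augmented
```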