The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to using artificial intelligence(AI)techniques(such as machine learning(ML)and deep learning(DL))to build more e...The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to using artificial intelligence(AI)techniques(such as machine learning(ML)and deep learning(DL))to build more efficient and reliable intrusion detection systems(IDSs).However,the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs.Many researchers used data preprocessing techniques such as feature selection and normalization to overcome such issues.While most of these researchers reported the success of these preprocessing techniques on a shallow level,very few studies have been performed on their effects on a wider scale.Furthermore,the performance of an IDS model is subject to not only the utilized preprocessing techniques but also the dataset and the ML/DL algorithm used,which most of the existing studies give little emphasis on.Thus,this study provides an in-depth analysis of feature selection and normalization effects on IDS models built using three IDS datasets:NSL-KDD,UNSW-NB15,and CSE–CIC–IDS2018,and various AI algorithms.A wrapper-based approach,which tends to give superior performance,and min-max normalization methods were used for feature selection and normalization,respectively.Numerous IDS models were implemented using the full and feature-selected copies of the datasets with and without normalization.The models were evaluated using popular evaluation metrics in IDS modeling,intra-and inter-model comparisons were performed between models and with state-of-the-art works.Random forest(RF)models performed better on NSL-KDD and UNSW-NB15 datasets with accuracies of 99.86%and 96.01%,respectively,whereas artificial neural network(ANN)achieved the best accuracy of 95.43%on the CSE–CIC–IDS2018 dataset.The RF models also achieved an excellent performance compared to recent works.The results show that normalization and feature selection positively affect IDS modeling.Furthermore,while feature selection benefits simpler algorithms(such as RF),normalization is more useful for complex algorithms like ANNs and deep neural networks(DNNs),and algorithms such as Naive Bayes are unsuitable for IDS modeling.The study also found that the UNSW-NB15 and CSE–CIC–IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDS than the NSL-KDD dataset.Our findings suggest that prioritizing robust algorithms like RF,alongside complex models such as ANN and DNN,can significantly enhance IDS performance.These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.展开更多
The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats.Intrusion detection systems are cr...The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats.Intrusion detection systems are crucial to network security,playing a pivotal role in safeguarding networks from potential threats.However,in the context of an evolving landscape of sophisticated and elusive attacks,existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts.To address these issues,this paper proposes a real-time network intrusion detection method based on graph neural networks.The proposedmethod leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data.Additionally,a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model’s ability to capture the intricate relationships within the graph structure comprehensively.Furthermore,it uses an integrated graph neural network to address dynamic graphs’structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data.The edge classification problem is effectively transformed into node classification by employing a line graph data representation,which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations.The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets,UNSW-NB15 and NF-ToN-IoT-v2,and results are compared with previous studies in this field.The experimental results demonstrate that our proposed method achieves 99.3%and 99.96%accuracy on the two datasets,respectively,and outperforms the benchmark model in several evaluation metrics.展开更多
The Internet of Things(IoT)is integral to modern infrastructure,enabling connectivity among a wide range of devices from home automation to industrial control systems.With the exponential increase in data generated by...The Internet of Things(IoT)is integral to modern infrastructure,enabling connectivity among a wide range of devices from home automation to industrial control systems.With the exponential increase in data generated by these interconnected devices,robust anomaly detection mechanisms are essential.Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns.This paper presents a novel approach utilizing generative adversarial networks(GANs)for anomaly detection in IoT systems.However,optimizing GANs involves tuning hyper-parameters such as learning rate,batch size,and optimization algorithms,which can be challenging due to the non-convex nature of GAN loss functions.To address this,we propose a five-dimensional Gray wolf optimizer(5DGWO)to optimize GAN hyper-parameters.The 5DGWO introduces two new types of wolves:gamma(γ)for improved exploitation and convergence,and theta(θ)for enhanced exploration and escaping local minima.The proposed system framework comprises four key stages:1)preprocessing,2)generative model training,3)autoencoder(AE)training,and 4)predictive model training.The generative models are utilized to assist the AE training,and the final predictive models(including convolutional neural network(CNN),deep belief network(DBN),recurrent neural network(RNN),random forest(RF),and extreme gradient boosting(XGBoost))are trained using the generated data and AE-encoded features.We evaluated the system on three benchmark datasets:NSL-KDD,UNSW-NB15,and IoT-23.Experiments conducted on diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives.The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics,including accuracy,recall,precision,root mean square error(RMSE),and convergence trend.The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD,UNSW-NB15,and IoT-23 datasets,with values of 0.24,1.10,and 0.09,respectively.Additionally,it attained the highest accuracy,ranging from 94%to 100%.These results suggest a promising direction for future IoT security frameworks,offering a scalable and efficient solution to safeguard against evolving cyber threats.展开更多
The increasing adoption of Industrial Internet of Things(IIoT)systems in smart manufacturing is leading to raise cyberattack numbers and pressing the requirement for intrusion detection systems(IDS)to be effective.How...The increasing adoption of Industrial Internet of Things(IIoT)systems in smart manufacturing is leading to raise cyberattack numbers and pressing the requirement for intrusion detection systems(IDS)to be effective.However,existing datasets for IDS training often lack relevance to modern IIoT environments,limiting their applicability for research and development.To address the latter gap,this paper introduces the HiTar-2024 dataset specifically designed for IIoT systems.As a consequence,that can be used by an IDS to detect imminent threats.Likewise,HiTar-2024 was generated using the AREZZO simulator,which replicates realistic smart manufacturing scenarios.The generated dataset includes five distinct classes:Normal,Probing,Remote to Local(R2L),User to Root(U2R),and Denial of Service(DoS).Furthermore,comprehensive experiments with popular Machine Learning(ML)models using various classifiers,including BayesNet,Logistic,IBK,Multiclass,PART,and J48 demonstrate high accuracy,precision,recall,and F1-scores,exceeding 0.99 across all ML metrics.The latter result is reached thanks to the rigorous applied process to achieve this quite good result,including data pre-processing,features extraction,fixing the class imbalance problem,and using a test option for model robustness.This comprehensive approach emphasizes meticulous dataset construction through a complete dataset generation process,a careful labelling algorithm,and a sophisticated evaluation method,providing valuable insights to reinforce IIoT system security.Finally,the HiTar-2024 dataset is compared with other similar datasets in the literature,considering several factors such as data format,feature extraction tools,number of features,attack categories,number of instances,and ML metrics.展开更多
The Internet of Things(IoT)ecosystem faces growing security challenges because it is projected to have 76.88 billion devices by 2025 and $1.4 trillion market value by 2027,operating in distributed networks with resour...The Internet of Things(IoT)ecosystem faces growing security challenges because it is projected to have 76.88 billion devices by 2025 and $1.4 trillion market value by 2027,operating in distributed networks with resource limitations and diverse system architectures.The current conventional intrusion detection systems(IDS)face scalability problems and trust-related issues,but blockchain-based solutions face limitations because of their low transaction throughput(Bitcoin:7 TPS(Transactions Per Second),Ethereum:15-30 TPS)and high latency.The research introduces MBID(Multi-Tier Blockchain Intrusion Detection)as a groundbreaking Multi-Tier Blockchain Intrusion Detection System with AI-Enhanced Detection,which solves the problems in huge IoT networks.The MBID system uses a four-tier architecture that includes device,edge,fog,and cloud layers with blockchain implementations and Physics-Informed Neural Networks(PINNs)for edge-based anomaly detection and a dual consensus mechanism that uses Honesty-based Distributed Proof-of-Authority(HDPoA)and Delegated Proof of Stake(DPoS).The system achieves scalability and efficiency through the combination of dynamic sharding and Interplanetary File System(IPFS)integration.Experimental evaluations demonstrate exceptional performance,achieving a detection accuracy of 99.84%,an ultra-low false positive rate of 0.01% with a False Negative Rate of 0.15%,and a near-instantaneous edge detection latency of 0.40 ms.The system demonstrated an aggregate throughput of 214.57 TPS in a 3-shard configuration,providing a clear,evidence-based path for horizontally scaling to support overmillions of devices with exceeding throughput.The proposed architecture represents a significant advancement in blockchain-based security for IoT networks,effectively balancing the trade-offs between scalability,security,and decentralization.展开更多
The Internet of Medical Things(IoMT)is transforming healthcare by enabling real-time data collection,analysis,and personalized treatment through interconnected devices such as sensors and wearables.The integration of ...The Internet of Medical Things(IoMT)is transforming healthcare by enabling real-time data collection,analysis,and personalized treatment through interconnected devices such as sensors and wearables.The integration of Digital Twins(DTs),the virtual replicas of physical components and processes,has also been found to be a game changer for the ever-evolving IoMT.However,these advancements in the healthcare domain come with significant cybersecurity challenges,exposing it to malicious attacks and several security threats.Intrusion Detection Systems(IDSs)serve as a critical defense mechanism,yet traditional IDS approaches often struggle with the complexity and scale of IoMT networks.With this context,this paper follows a systematic approach to analyze the existing literature and highlight the current trends and challenges related to IDS in the IoMT domain.We leveraged techniques like bibliographic and keyword analysis to collect 832 research works published from 2007 to 2025,aligned with the theme“Digital Twins and IDS in IoMT.”It was found that by simulating device behaviours and network interactions in IoMT,DTs not only provide a proactive platform for early threat detection,but also offer a scalable and adaptive approach to mitigating evolving security threats in IoMT.Overall,this review provides a closer look into the role of IDS and DT in securing IoMT systems and sheds light on the possible research directions for developers and the research community.展开更多
The growing incidence of cyberattacks necessitates a robust and effective Intrusion Detection Systems(IDS)for enhanced network security.While conventional IDSs can be unsuitable for detecting different and emerging at...The growing incidence of cyberattacks necessitates a robust and effective Intrusion Detection Systems(IDS)for enhanced network security.While conventional IDSs can be unsuitable for detecting different and emerging attacks,there is a demand for better techniques to improve detection reliability.This study introduces a new method,the Deep Adaptive Multi-Layer Attention Network(DAMLAN),to boost the result of intrusion detection on network data.Due to its multi-scale attention mechanisms and graph features,DAMLAN aims to address both known and unknown intrusions.The real-world NSL-KDD dataset,a popular choice among IDS researchers,is used to assess the proposed model.There are 67,343 normal samples and 58,630 intrusion attacks in the training set,12,833 normal samples,and 9711 intrusion attacks in the test set.Thus,the proposed DAMLAN method is more effective than the standard models due to the consideration of patterns by the attention layers.The experimental performance of the proposed model demonstrates that it achieves 99.26%training accuracy and 90.68%testing accuracy,with precision reaching 98.54%on the training set and 96.64%on the testing set.The recall and F1 scores again support the model with training set values of 99.90%and 99.21%and testing set values of 86.65%and 91.37%.These results provide a strong basis for the claims made regarding the model’s potential to identify intrusion attacks and affirm its relatively strong overall performance,irrespective of type.Future work would employ more attempts to extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.展开更多
The era of big data brings new challenges for information network systems(INS),simultaneously offering unprecedented opportunities for advancing intelligent intrusion detection systems.In this work,we propose a data-d...The era of big data brings new challenges for information network systems(INS),simultaneously offering unprecedented opportunities for advancing intelligent intrusion detection systems.In this work,we propose a data-driven intrusion detection system for Distributed Denial of Service(DDoS)attack detection.The system focuses on intrusion detection from a big data perceptive.As intelligent information processing methods,big data and artificial intelligence have been widely used in information systems.The INS system is an important information system in cyberspace.In advanced INS systems,the network architectures have become more complex.And the smart devices in INS systems collect a large scale of network data.How to improve the performance of a complex intrusion detection system with big data and artificial intelligence is a big challenge.To address the problem,we design a novel intrusion detection system(IDS)from a big data perspective.The IDS system uses tensors to represent large-scale and complex multi-source network data in a unified tensor.Then,a novel tensor decomposition(TD)method is developed to complete big data mining.The TD method seamlessly collaborates with the XGBoost(eXtreme Gradient Boosting)method to complete the intrusion detection.To verify the proposed IDS system,a series of experiments is conducted on two real network datasets.The results revealed that the proposed IDS system attained an impressive accuracy rate over 98%.Additionally,by altering the scale of the datasets,the proposed IDS system still maintains excellent detection performance,which demonstrates the proposed IDS system’s robustness.展开更多
Healthcare networks prove to be an urgent issue in terms of intrusion detection due to the critical consequences of cyber threats and the extreme sensitivity of medical information.The proposed Auto-Stack ID in the st...Healthcare networks prove to be an urgent issue in terms of intrusion detection due to the critical consequences of cyber threats and the extreme sensitivity of medical information.The proposed Auto-Stack ID in the study is a stacked ensemble of encoder-enhanced auctions that can be used to improve intrusion detection in healthcare networks.TheWUSTL-EHMS 2020 dataset trains and evaluates themodel,constituting an imbalanced class distribution(87.46% normal traffic and 12.53% intrusion attacks).To address this imbalance,the study balances the effect of training Bias through Stratified K-fold cross-validation(K=5),so that each class is represented similarly on training and validation splits.Second,the Auto-Stack ID method combines many base classifiers such as TabNet,LightGBM,Gaussian Naive Bayes,Histogram-Based Gradient Boosting(HGB),and Logistic Regression.We apply a two-stage training process based on the first stage,where we have base classifiers that predict out-of-fold(OOF)predictions,which we use as inputs for the second-stage meta-learner XGBoost.The meta-learner learns to refine predictions to capture complicated interactions between base models,thus improving detection accuracy without introducing bias,overfitting,or requiring domain knowledge of the meta-data.In addition,the auto-stack ID model got 98.41% accuracy and 93.45%F1 score,better than individual classifiers.It can identify intrusions due to its 90.55% recall and 96.53% precision with minimal false positives.These findings identify its suitability in ensuring healthcare networks’security through ensemble learning.Ongoing efforts will be deployed in real time to improve response to evolving threats.展开更多
The emergence of Generative Adversarial Network(GAN)techniques has garnered significant attention from the research community for the development of Intrusion Detection Systems(IDS).However,conventional GAN-based IDS ...The emergence of Generative Adversarial Network(GAN)techniques has garnered significant attention from the research community for the development of Intrusion Detection Systems(IDS).However,conventional GAN-based IDS models face several challenges,including training instability,high computational costs,and system failures.To address these limitations,we propose a Hybrid Wasserstein GAN and Autoencoder Model(WGAN-AE)for intrusion detection.The proposed framework leverages the stability of WGAN and the feature extraction capabilities of the Autoencoder Model.The model was trained and evaluated using two recent benchmark datasets,5GNIDD and IDSIoT2024.When trained on the 5GNIDD dataset,the model achieved an average area under the precisionrecall curve is 99.8%using five-fold cross-validation and demonstrated a high detection accuracy of 97.35%when tested on independent test data.Additionally,the model is well-suited for deployment on resource-limited Internetof-Things(IoT)devices due to its ability to detect attacks within microseconds and its small memory footprint of 60.24 kB.Similarly,when trained on the IDSIoT2024 dataset,the model achieved an average PR-AUC of 94.09%and an attack detection accuracy of 97.35%on independent test data,with a memory requirement of 61.84 kB.Extensive simulation results demonstrate that the proposed hybrid model effectively addresses the shortcomings of traditional GAN-based IDS approaches in terms of detection accuracy,computational efficiency,and applicability to real-world IoT environments.展开更多
The offshore Tanzanian Basin contains numerous igneous intrusions emplaced at various stratigraphic levels.Previous studies indicate these intrusions have impacted petroleum systems,affecting key elements such as sour...The offshore Tanzanian Basin contains numerous igneous intrusions emplaced at various stratigraphic levels.Previous studies indicate these intrusions have impacted petroleum systems,affecting key elements such as source rocks,reservoirs,seals,migration pathways,and trapping mechanisms.However,due to the limited number of wells drilled in the region,there have been few studies reporting the associated thermal effects on source rock maturation and their role in hydrocarbon generation.To gain a comprehensive understanding of the intricate relationship between intrusions and the petroleum system,particularly source rock,an integrated geochemical and resistivity log analysis was carried out.The geochemical results show that the Cretaceous-Cenozoic sediments of the study area have low total organic carbon contents(TOC<1 wt%),kerogen yield(<1 Mg HC/g),and Hydrogen Index(<100 Mg HC/g),primarily composed of TypeⅢ(gas-prone)to TypeⅣ(inert)kerogens.These sediments have undergone varying levels of thermal maturity,ranging from post-mature(within Cretaceous),matured(in Paleocene)to immature(in Eocene)thermal states.The Cretaceous strata located proximal to the intrusions exhibit significant thermal alteration,resulting in a reduction of both organic matter(OM)content and source potential compared to the Eocene and Paleocene samples.This observation is consistent with the estimated paleotemperature(T)and resistivity log(ILD)along the depth profile,which have mapped local thermal alteration increasing from base Paleocene to Cretaceous.These findings have implications for source rock potential and thermal evolution history in the offshore Tanzanian Basin.This study highlights the necessity for thorough subsurface mapping in the area to identify both younger and older intrusive rocks.These intrusions pose a potential risk in petroleum exploration,especially when they intrude into matured source rock intervals.展开更多
The rapid proliferation of Internet of Things(IoT)devices has heightened security concerns,making intrusion detection a pivotal challenge in safeguarding these networks.Traditional centralized Intrusion Detection Syst...The rapid proliferation of Internet of Things(IoT)devices has heightened security concerns,making intrusion detection a pivotal challenge in safeguarding these networks.Traditional centralized Intrusion Detection Systems(IDS)often fail to meet the privacy requirements and scalability demands of large-scale IoT ecosystems.To address these challenges,we propose an innovative privacy-preserving approach leveraging Federated Learning(FL)for distributed intrusion detection.Our model eliminates the need for aggregating sensitive data on a central server by training locally on IoT devices and sharing only encrypted model updates,ensuring enhanced privacy and scalability without compromising detection accuracy.Key innovations of this research include the integration of advanced deep learning techniques for real-time threat detection with minimal latency and a novel model to fortify the system’s resilience against diverse cyber-attacks such as Distributed Denial of Service(DDoS)and malware injections.Our evaluation on three benchmark IoT datasets demonstrates significant improvements:achieving 92.78%accuracy on NSL-KDD,91.47%on BoT-IoT,and 92.05%on UNSW-NB15.The precision,recall,and F1-scores for all datasets consistently exceed 91%.Furthermore,the communication overhead was reduced to 85 MB for NSL-KDD,105 MB for BoT-IoT,and 95 MB for UNSW-NB15—substantially lower than traditional centralized IDS approaches.This study contributes to the domain by presenting a scalable,secure,and privacy-preserving solution tailored to the unique characteristics of IoT environments.The proposed framework is adaptable to dynamic and heterogeneous settings,with potential applications extending to other privacy-sensitive domains.Future work will focus on enhancing the system’s efficiency and addressing emerging challenges such as model poisoning attacks in federated environments.展开更多
Prolonged cyclic water intrusion has progressively developed joints in the hydro-fluctuation belt,elevating the instability risk of reservoir bank slopes.To investigate its impact on joint shear damage evolution,joint...Prolonged cyclic water intrusion has progressively developed joints in the hydro-fluctuation belt,elevating the instability risk of reservoir bank slopes.To investigate its impact on joint shear damage evolution,joint samples were prepared using three representative roughness curves and subjected to direct shear testing following cyclic water intrusion.A shear damage constitutive model considering the coupling effect of cyclic water intrusion and load was developed based on macroscopic phenomenological damage mechanics and micro-statistical theory.Results indicate:(1)All critical shear mechanical parameters(including peak shear strength,shear stiffness,basic friction angle,and joint compressive strength)exhibit progressive deterioration with increasing water intrusion cycles;(2)Model validation through experimental curve comparisons confirms its reliability.The model demonstrates that intensified water intrusion cycles reduce key mechanical indices,inducing a brittle-to-ductile transition in joint surface deformation—a behavior consistent with experimental observations;(3)Damage under cyclic water intrusion and load coupling follows an S-shaped trend,divided into stabilization(water-dominated stage),development(load-dominated stage),and completion stages.The research provides valuable insights for stability studies,such as similar model experiments for reservoir bank slopes and other water-related projects.展开更多
With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Marit...With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks(MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence(AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, which is often unsatisfactory for real-world operation with limited attack instances and constantly evolving characteristics. This paper proposes an Adaptive Personalized Federated learning(APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed(Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with an effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% compared to most baselines in the global scenarios.展开更多
The Internet of MedicalThings(IoMT)connects healthcare devices and sensors to the Internet,driving transformative advancements in healthcare delivery.However,expanding IoMT infrastructures face growing security threat...The Internet of MedicalThings(IoMT)connects healthcare devices and sensors to the Internet,driving transformative advancements in healthcare delivery.However,expanding IoMT infrastructures face growing security threats,necessitating robust IntrusionDetection Systems(IDS).Maintaining the confidentiality of patient data is critical in AI-driven healthcare systems,especially when securing interconnected medical devices.This paper introduces SNN-IoMT(Stacked Neural Network Ensemble for IoMT Security),an AI-driven IDS framework designed to secure dynamic IoMT environments.Leveraging a stacked deep learning architecture combining Multi-Layer Perceptron(MLP),Convolutional Neural Networks(CNN),and Long Short-Term Memory(LSTM),the model optimizes data management and integration while ensuring system scalability and interoperability.Trained on the WUSTL-EHMS-2020 and IoT-Healthcare-Security datasets,SNN-IoMT surpasses existing IDS frameworks in accuracy,precision,and detecting novel threats.By addressing the primary challenges in AI-driven healthcare systems,including privacy,reliability,and ethical data management,our approach exemplifies the importance of AI to enhance security and trust in IoMT-enabled healthcare.展开更多
The growing sophistication of cyberthreats,among others the Distributed Denial of Service attacks,has exposed limitations in traditional rule-based Security Information and Event Management systems.While machine learn...The growing sophistication of cyberthreats,among others the Distributed Denial of Service attacks,has exposed limitations in traditional rule-based Security Information and Event Management systems.While machine learning–based intrusion detection systems can capture complex network behaviours,their“black-box”nature often limits trust and actionable insight for security operators.This study introduces a novel approach that integrates Explainable Artificial Intelligence—xAI—with the Random Forest classifier to derive human-interpretable rules,thereby enhancing the detection of Distributed Denial of Service(DDoS)attacks.The proposed framework combines traditional static rule formulation with advanced xAI techniques—SHapley Additive exPlanations and Scoped Rules-to extract decision criteria from a fully trained model.The methodology was validated on two benchmark datasets,CICIDS2017 and WUSTL-IIOT-2021.Extracted rules were evaluated against conventional Security Information and Event Management Systems rules with metrics such as precision,recall,accuracy,balanced accuracy,and Matthews Correlation Coefficient.Experimental results demonstrate that xAI-derived rules consistently outperform traditional static rules.Notably,the most refined xAI-generated rule achieved near-perfect performance with significantly improved detection of DDoS traffic while maintaining high accuracy in classifying benign traffic across both datasets.展开更多
The rapid increase in the number of Internet of Things(IoT)devices,coupled with a rise in sophisticated cyberattacks,demands robust intrusion detection systems.This study presents a holistic,intelligent intrusion dete...The rapid increase in the number of Internet of Things(IoT)devices,coupled with a rise in sophisticated cyberattacks,demands robust intrusion detection systems.This study presents a holistic,intelligent intrusion detection system.It uses a combined method that integrates machine learning(ML)and deep learning(DL)techniques to improve the protection of contemporary information technology(IT)systems.Unlike traditional signature-based or singlemodel methods,this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification.This combination provides a more nuanced and adaptable defense.The research utilizes the NF-UQ-NIDS-v2 dataset,a recent,comprehensive benchmark for evaluating network intrusion detection systems(NIDS).Our methodological framework employs advanced artificial intelligence techniques.Specifically,we use ensemble learning algorithms(Random Forest,Gradient Boosting,AdaBoost,and XGBoost)for binary classification.Deep learning architectures are also employed to address the complexities of multi-class classification,allowing for fine-grained identification of intrusion types.To mitigate class imbalance,a common problem in multi-class intrusion detection that biases model performance,we use oversampling and data augmentation.These techniques ensure equitable class representation.The results demonstrate the efficacy of the proposed hybrid ML-DL system.It achieves significant improvements in intrusion detection accuracy and reliability.This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.展开更多
Intrusion detection systems play a vital role in cyberspace security.In this study,a network intrusion detection method based on the feature selection algorithm(FSA)and a deep learning model is developed using a fusio...Intrusion detection systems play a vital role in cyberspace security.In this study,a network intrusion detection method based on the feature selection algorithm(FSA)and a deep learning model is developed using a fusion of a recursive feature elimination(RFE)algorithm and a bidirectional gated recurrent unit(BGRU).Particularly,the RFE algorithm is employed to select features from high-dimensional data to reduce weak correlations between features and remove redundant features in the numerical feature space.Then,a neural network that combines the BGRU and multilayer perceptron(MLP)is adopted to extract deep intrusion behavior features.Finally,a support vector machine(SVM)classifier is used to classify intrusion behaviors.The proposed model is verified by experiments on the NSL-KDD dataset.The results indicate that the proposed model achieves a 90.25%accuracy and a 97.51%detection rate in binary classification and outperforms other machine learning and deep learning models in intrusion classification.The proposed method can provide new insight into network intrusion detection.展开更多
With the rapid development of the industrial Internet,the network security environment has become increasingly complex and variable.Intrusion detection,a core technology for ensuring the security of industrial control...With the rapid development of the industrial Internet,the network security environment has become increasingly complex and variable.Intrusion detection,a core technology for ensuring the security of industrial control systems,faces the challenge of unbalanced data samples,particularly the low detection rates for minority class attack samples.Therefore,this paper proposes a data enhancement method for intrusion detection in the industrial Internet based on a Self-Attention Wasserstein Generative Adversarial Network(SA-WGAN)to address the low detection rates of minority class attack samples in unbalanced intrusion detection scenarios.The proposed method integrates a selfattention mechanism with a Wasserstein Generative Adversarial Network(WGAN).The self-attention mechanism automatically learns important features from the input data and assigns different weights to emphasize the key features related to intrusion behaviors,providing strong guidance for subsequent data generation.The WGAN generates new data samples through adversarial training to expand the original dataset.In the SA-WGAN framework,the WGAN directs the data generation process based on the key features extracted by the self-attention mechanism,ensuring that the generated samples exhibit both diversity and similarity to real data.Experimental results demonstrate that the SA-WGAN-based data enhancement method significantly improves detection performance for attack samples from minority classes,addresses issues of insufficient data and category imbalance,and enhances the generalization ability and overall performance of the intrusion detection model.展开更多
In the complex environment of Wireless Sensor Networks(WSNs),various malicious attacks have emerged,among which internal attacks pose particularly severe security risks.These attacks seriously threaten network stabili...In the complex environment of Wireless Sensor Networks(WSNs),various malicious attacks have emerged,among which internal attacks pose particularly severe security risks.These attacks seriously threaten network stability,data transmission reliability,and overall performance.To effectively address this issue and significantly improve intrusion detection speed,accuracy,and resistance to malicious attacks,this research designs a Three-level Intrusion Detection Model based on Dynamic Trust Evaluation(TIDM-DTE).This study conducts a detailed analysis of how different attack types impact node trust and establishes node models for data trust,communication trust,and energy consumption trust by focusing on characteristics such as continuous packet loss and energy consumption changes.By dynamically predicting node trust values using the grey Markov model,the model accurately and sensitively reflects changes in node trust levels during attacks.Additionally,DBSCAN(Density-Based Spatial Clustering of Applications with Noise)data noise monitoring technology is employed to quickly identify attacked nodes,while a trust recovery mechanism restores the trust of temporarily faulty nodes to reduce False Alarm Rate.Simulation results demonstrate that TIDM-DTE achieves high detection rates,fast detection speed,and low False Alarm Rate when identifying various network attacks,including selective forwarding attacks,Sybil attacks,switch attacks,and black hole attacks.TIDM-DTE significantly enhances network security,ensures secure and reliable data transmission,moderately improves network energy efficiency,reduces unnecessary energy consumption,and provides strong support for the stable operation of WSNs.Meanwhile,the research findings offer new ideas and methods for WSN security protection,possessing important theoretical significance and practical application value.展开更多
文摘The rapid rise of cyberattacks and the gradual failure of traditional defense systems and approaches led to using artificial intelligence(AI)techniques(such as machine learning(ML)and deep learning(DL))to build more efficient and reliable intrusion detection systems(IDSs).However,the advent of larger IDS datasets has negatively impacted the performance and computational complexity of AI-based IDSs.Many researchers used data preprocessing techniques such as feature selection and normalization to overcome such issues.While most of these researchers reported the success of these preprocessing techniques on a shallow level,very few studies have been performed on their effects on a wider scale.Furthermore,the performance of an IDS model is subject to not only the utilized preprocessing techniques but also the dataset and the ML/DL algorithm used,which most of the existing studies give little emphasis on.Thus,this study provides an in-depth analysis of feature selection and normalization effects on IDS models built using three IDS datasets:NSL-KDD,UNSW-NB15,and CSE–CIC–IDS2018,and various AI algorithms.A wrapper-based approach,which tends to give superior performance,and min-max normalization methods were used for feature selection and normalization,respectively.Numerous IDS models were implemented using the full and feature-selected copies of the datasets with and without normalization.The models were evaluated using popular evaluation metrics in IDS modeling,intra-and inter-model comparisons were performed between models and with state-of-the-art works.Random forest(RF)models performed better on NSL-KDD and UNSW-NB15 datasets with accuracies of 99.86%and 96.01%,respectively,whereas artificial neural network(ANN)achieved the best accuracy of 95.43%on the CSE–CIC–IDS2018 dataset.The RF models also achieved an excellent performance compared to recent works.The results show that normalization and feature selection positively affect IDS modeling.Furthermore,while feature selection benefits simpler algorithms(such as RF),normalization is more useful for complex algorithms like ANNs and deep neural networks(DNNs),and algorithms such as Naive Bayes are unsuitable for IDS modeling.The study also found that the UNSW-NB15 and CSE–CIC–IDS2018 datasets are more complex and more suitable for building and evaluating modern-day IDS than the NSL-KDD dataset.Our findings suggest that prioritizing robust algorithms like RF,alongside complex models such as ANN and DNN,can significantly enhance IDS performance.These insights provide valuable guidance for managers to develop more effective security measures by focusing on high detection rates and low false alert rates.
文摘The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats.Intrusion detection systems are crucial to network security,playing a pivotal role in safeguarding networks from potential threats.However,in the context of an evolving landscape of sophisticated and elusive attacks,existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts.To address these issues,this paper proposes a real-time network intrusion detection method based on graph neural networks.The proposedmethod leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data.Additionally,a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model’s ability to capture the intricate relationships within the graph structure comprehensively.Furthermore,it uses an integrated graph neural network to address dynamic graphs’structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data.The edge classification problem is effectively transformed into node classification by employing a line graph data representation,which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations.The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets,UNSW-NB15 and NF-ToN-IoT-v2,and results are compared with previous studies in this field.The experimental results demonstrate that our proposed method achieves 99.3%and 99.96%accuracy on the two datasets,respectively,and outperforms the benchmark model in several evaluation metrics.
基金described in this paper has been developed with in the project PRESECREL(PID2021-124502OB-C43)。
文摘The Internet of Things(IoT)is integral to modern infrastructure,enabling connectivity among a wide range of devices from home automation to industrial control systems.With the exponential increase in data generated by these interconnected devices,robust anomaly detection mechanisms are essential.Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns.This paper presents a novel approach utilizing generative adversarial networks(GANs)for anomaly detection in IoT systems.However,optimizing GANs involves tuning hyper-parameters such as learning rate,batch size,and optimization algorithms,which can be challenging due to the non-convex nature of GAN loss functions.To address this,we propose a five-dimensional Gray wolf optimizer(5DGWO)to optimize GAN hyper-parameters.The 5DGWO introduces two new types of wolves:gamma(γ)for improved exploitation and convergence,and theta(θ)for enhanced exploration and escaping local minima.The proposed system framework comprises four key stages:1)preprocessing,2)generative model training,3)autoencoder(AE)training,and 4)predictive model training.The generative models are utilized to assist the AE training,and the final predictive models(including convolutional neural network(CNN),deep belief network(DBN),recurrent neural network(RNN),random forest(RF),and extreme gradient boosting(XGBoost))are trained using the generated data and AE-encoded features.We evaluated the system on three benchmark datasets:NSL-KDD,UNSW-NB15,and IoT-23.Experiments conducted on diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives.The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics,including accuracy,recall,precision,root mean square error(RMSE),and convergence trend.The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD,UNSW-NB15,and IoT-23 datasets,with values of 0.24,1.10,and 0.09,respectively.Additionally,it attained the highest accuracy,ranging from 94%to 100%.These results suggest a promising direction for future IoT security frameworks,offering a scalable and efficient solution to safeguard against evolving cyber threats.
文摘The increasing adoption of Industrial Internet of Things(IIoT)systems in smart manufacturing is leading to raise cyberattack numbers and pressing the requirement for intrusion detection systems(IDS)to be effective.However,existing datasets for IDS training often lack relevance to modern IIoT environments,limiting their applicability for research and development.To address the latter gap,this paper introduces the HiTar-2024 dataset specifically designed for IIoT systems.As a consequence,that can be used by an IDS to detect imminent threats.Likewise,HiTar-2024 was generated using the AREZZO simulator,which replicates realistic smart manufacturing scenarios.The generated dataset includes five distinct classes:Normal,Probing,Remote to Local(R2L),User to Root(U2R),and Denial of Service(DoS).Furthermore,comprehensive experiments with popular Machine Learning(ML)models using various classifiers,including BayesNet,Logistic,IBK,Multiclass,PART,and J48 demonstrate high accuracy,precision,recall,and F1-scores,exceeding 0.99 across all ML metrics.The latter result is reached thanks to the rigorous applied process to achieve this quite good result,including data pre-processing,features extraction,fixing the class imbalance problem,and using a test option for model robustness.This comprehensive approach emphasizes meticulous dataset construction through a complete dataset generation process,a careful labelling algorithm,and a sophisticated evaluation method,providing valuable insights to reinforce IIoT system security.Finally,the HiTar-2024 dataset is compared with other similar datasets in the literature,considering several factors such as data format,feature extraction tools,number of features,attack categories,number of instances,and ML metrics.
基金supported in part by Multimedia University under the Research Fellow Grant MMUI/250008in part by Telekom Research&Development Sdn Bhd underGrantRDTC/241149Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R140),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The Internet of Things(IoT)ecosystem faces growing security challenges because it is projected to have 76.88 billion devices by 2025 and $1.4 trillion market value by 2027,operating in distributed networks with resource limitations and diverse system architectures.The current conventional intrusion detection systems(IDS)face scalability problems and trust-related issues,but blockchain-based solutions face limitations because of their low transaction throughput(Bitcoin:7 TPS(Transactions Per Second),Ethereum:15-30 TPS)and high latency.The research introduces MBID(Multi-Tier Blockchain Intrusion Detection)as a groundbreaking Multi-Tier Blockchain Intrusion Detection System with AI-Enhanced Detection,which solves the problems in huge IoT networks.The MBID system uses a four-tier architecture that includes device,edge,fog,and cloud layers with blockchain implementations and Physics-Informed Neural Networks(PINNs)for edge-based anomaly detection and a dual consensus mechanism that uses Honesty-based Distributed Proof-of-Authority(HDPoA)and Delegated Proof of Stake(DPoS).The system achieves scalability and efficiency through the combination of dynamic sharding and Interplanetary File System(IPFS)integration.Experimental evaluations demonstrate exceptional performance,achieving a detection accuracy of 99.84%,an ultra-low false positive rate of 0.01% with a False Negative Rate of 0.15%,and a near-instantaneous edge detection latency of 0.40 ms.The system demonstrated an aggregate throughput of 214.57 TPS in a 3-shard configuration,providing a clear,evidence-based path for horizontally scaling to support overmillions of devices with exceeding throughput.The proposed architecture represents a significant advancement in blockchain-based security for IoT networks,effectively balancing the trade-offs between scalability,security,and decentralization.
基金This research is conducted as part of the project titled“Digital Twin-based Intrusion Detection System Using Federated Learning for IoMT”(2024-2027),supported by C3iHub,IIT Kanpur,India,under Sanction Order No.:IHUB-NTIHAC/2024/01/3.
文摘The Internet of Medical Things(IoMT)is transforming healthcare by enabling real-time data collection,analysis,and personalized treatment through interconnected devices such as sensors and wearables.The integration of Digital Twins(DTs),the virtual replicas of physical components and processes,has also been found to be a game changer for the ever-evolving IoMT.However,these advancements in the healthcare domain come with significant cybersecurity challenges,exposing it to malicious attacks and several security threats.Intrusion Detection Systems(IDSs)serve as a critical defense mechanism,yet traditional IDS approaches often struggle with the complexity and scale of IoMT networks.With this context,this paper follows a systematic approach to analyze the existing literature and highlight the current trends and challenges related to IDS in the IoMT domain.We leveraged techniques like bibliographic and keyword analysis to collect 832 research works published from 2007 to 2025,aligned with the theme“Digital Twins and IDS in IoMT.”It was found that by simulating device behaviours and network interactions in IoMT,DTs not only provide a proactive platform for early threat detection,but also offer a scalable and adaptive approach to mitigating evolving security threats in IoMT.Overall,this review provides a closer look into the role of IDS and DT in securing IoMT systems and sheds light on the possible research directions for developers and the research community.
基金Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project(PNURSP2025R319)Riyadh,Saudi Arabia and Prince Sultan University for covering the article processing charges(APC)associated with this publication.Special acknowledgement to Automated Systems&Soft Computing Lab(ASSCL),Prince Sultan University,Riyadh,Saudi Arabia.
文摘The growing incidence of cyberattacks necessitates a robust and effective Intrusion Detection Systems(IDS)for enhanced network security.While conventional IDSs can be unsuitable for detecting different and emerging attacks,there is a demand for better techniques to improve detection reliability.This study introduces a new method,the Deep Adaptive Multi-Layer Attention Network(DAMLAN),to boost the result of intrusion detection on network data.Due to its multi-scale attention mechanisms and graph features,DAMLAN aims to address both known and unknown intrusions.The real-world NSL-KDD dataset,a popular choice among IDS researchers,is used to assess the proposed model.There are 67,343 normal samples and 58,630 intrusion attacks in the training set,12,833 normal samples,and 9711 intrusion attacks in the test set.Thus,the proposed DAMLAN method is more effective than the standard models due to the consideration of patterns by the attention layers.The experimental performance of the proposed model demonstrates that it achieves 99.26%training accuracy and 90.68%testing accuracy,with precision reaching 98.54%on the training set and 96.64%on the testing set.The recall and F1 scores again support the model with training set values of 99.90%and 99.21%and testing set values of 86.65%and 91.37%.These results provide a strong basis for the claims made regarding the model’s potential to identify intrusion attacks and affirm its relatively strong overall performance,irrespective of type.Future work would employ more attempts to extend the scalability and applicability of DAMLAN for real-time use in intrusion detection systems.
基金supported in part by the National Nature Science Foundation of China under Project 62166047in part by the Yunnan International Joint Laboratory of Natural Rubber Intelligent Monitor and Digital Applications under Grant 202403AP140001in part by the Xingdian Talent Support Program under Grant YNWR-QNBJ-2019-270.
文摘The era of big data brings new challenges for information network systems(INS),simultaneously offering unprecedented opportunities for advancing intelligent intrusion detection systems.In this work,we propose a data-driven intrusion detection system for Distributed Denial of Service(DDoS)attack detection.The system focuses on intrusion detection from a big data perceptive.As intelligent information processing methods,big data and artificial intelligence have been widely used in information systems.The INS system is an important information system in cyberspace.In advanced INS systems,the network architectures have become more complex.And the smart devices in INS systems collect a large scale of network data.How to improve the performance of a complex intrusion detection system with big data and artificial intelligence is a big challenge.To address the problem,we design a novel intrusion detection system(IDS)from a big data perspective.The IDS system uses tensors to represent large-scale and complex multi-source network data in a unified tensor.Then,a novel tensor decomposition(TD)method is developed to complete big data mining.The TD method seamlessly collaborates with the XGBoost(eXtreme Gradient Boosting)method to complete the intrusion detection.To verify the proposed IDS system,a series of experiments is conducted on two real network datasets.The results revealed that the proposed IDS system attained an impressive accuracy rate over 98%.Additionally,by altering the scale of the datasets,the proposed IDS system still maintains excellent detection performance,which demonstrates the proposed IDS system’s robustness.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R319),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia and Prince Sultan University for covering the article processing charges(APC)associated with this publicationResearchers Supporting Project Number(RSPD2025R1107),King Saud University,Riyadh,Saudi Arabia.
文摘Healthcare networks prove to be an urgent issue in terms of intrusion detection due to the critical consequences of cyber threats and the extreme sensitivity of medical information.The proposed Auto-Stack ID in the study is a stacked ensemble of encoder-enhanced auctions that can be used to improve intrusion detection in healthcare networks.TheWUSTL-EHMS 2020 dataset trains and evaluates themodel,constituting an imbalanced class distribution(87.46% normal traffic and 12.53% intrusion attacks).To address this imbalance,the study balances the effect of training Bias through Stratified K-fold cross-validation(K=5),so that each class is represented similarly on training and validation splits.Second,the Auto-Stack ID method combines many base classifiers such as TabNet,LightGBM,Gaussian Naive Bayes,Histogram-Based Gradient Boosting(HGB),and Logistic Regression.We apply a two-stage training process based on the first stage,where we have base classifiers that predict out-of-fold(OOF)predictions,which we use as inputs for the second-stage meta-learner XGBoost.The meta-learner learns to refine predictions to capture complicated interactions between base models,thus improving detection accuracy without introducing bias,overfitting,or requiring domain knowledge of the meta-data.In addition,the auto-stack ID model got 98.41% accuracy and 93.45%F1 score,better than individual classifiers.It can identify intrusions due to its 90.55% recall and 96.53% precision with minimal false positives.These findings identify its suitability in ensuring healthcare networks’security through ensemble learning.Ongoing efforts will be deployed in real time to improve response to evolving threats.
基金the Deanship of Research and Graduate Studies at King Khalid University for funding this work through Large Group Project under grant number(RGP.2/245/46)funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R760)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The research team thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama’a program,with the project code NU/GP/SERC/13/352-1。
文摘The emergence of Generative Adversarial Network(GAN)techniques has garnered significant attention from the research community for the development of Intrusion Detection Systems(IDS).However,conventional GAN-based IDS models face several challenges,including training instability,high computational costs,and system failures.To address these limitations,we propose a Hybrid Wasserstein GAN and Autoencoder Model(WGAN-AE)for intrusion detection.The proposed framework leverages the stability of WGAN and the feature extraction capabilities of the Autoencoder Model.The model was trained and evaluated using two recent benchmark datasets,5GNIDD and IDSIoT2024.When trained on the 5GNIDD dataset,the model achieved an average area under the precisionrecall curve is 99.8%using five-fold cross-validation and demonstrated a high detection accuracy of 97.35%when tested on independent test data.Additionally,the model is well-suited for deployment on resource-limited Internetof-Things(IoT)devices due to its ability to detect attacks within microseconds and its small memory footprint of 60.24 kB.Similarly,when trained on the IDSIoT2024 dataset,the model achieved an average PR-AUC of 94.09%and an attack detection accuracy of 97.35%on independent test data,with a memory requirement of 61.84 kB.Extensive simulation results demonstrate that the proposed hybrid model effectively addresses the shortcomings of traditional GAN-based IDS approaches in terms of detection accuracy,computational efficiency,and applicability to real-world IoT environments.
文摘The offshore Tanzanian Basin contains numerous igneous intrusions emplaced at various stratigraphic levels.Previous studies indicate these intrusions have impacted petroleum systems,affecting key elements such as source rocks,reservoirs,seals,migration pathways,and trapping mechanisms.However,due to the limited number of wells drilled in the region,there have been few studies reporting the associated thermal effects on source rock maturation and their role in hydrocarbon generation.To gain a comprehensive understanding of the intricate relationship between intrusions and the petroleum system,particularly source rock,an integrated geochemical and resistivity log analysis was carried out.The geochemical results show that the Cretaceous-Cenozoic sediments of the study area have low total organic carbon contents(TOC<1 wt%),kerogen yield(<1 Mg HC/g),and Hydrogen Index(<100 Mg HC/g),primarily composed of TypeⅢ(gas-prone)to TypeⅣ(inert)kerogens.These sediments have undergone varying levels of thermal maturity,ranging from post-mature(within Cretaceous),matured(in Paleocene)to immature(in Eocene)thermal states.The Cretaceous strata located proximal to the intrusions exhibit significant thermal alteration,resulting in a reduction of both organic matter(OM)content and source potential compared to the Eocene and Paleocene samples.This observation is consistent with the estimated paleotemperature(T)and resistivity log(ILD)along the depth profile,which have mapped local thermal alteration increasing from base Paleocene to Cretaceous.These findings have implications for source rock potential and thermal evolution history in the offshore Tanzanian Basin.This study highlights the necessity for thorough subsurface mapping in the area to identify both younger and older intrusive rocks.These intrusions pose a potential risk in petroleum exploration,especially when they intrude into matured source rock intervals.
基金supported and funded by the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2025).
文摘The rapid proliferation of Internet of Things(IoT)devices has heightened security concerns,making intrusion detection a pivotal challenge in safeguarding these networks.Traditional centralized Intrusion Detection Systems(IDS)often fail to meet the privacy requirements and scalability demands of large-scale IoT ecosystems.To address these challenges,we propose an innovative privacy-preserving approach leveraging Federated Learning(FL)for distributed intrusion detection.Our model eliminates the need for aggregating sensitive data on a central server by training locally on IoT devices and sharing only encrypted model updates,ensuring enhanced privacy and scalability without compromising detection accuracy.Key innovations of this research include the integration of advanced deep learning techniques for real-time threat detection with minimal latency and a novel model to fortify the system’s resilience against diverse cyber-attacks such as Distributed Denial of Service(DDoS)and malware injections.Our evaluation on three benchmark IoT datasets demonstrates significant improvements:achieving 92.78%accuracy on NSL-KDD,91.47%on BoT-IoT,and 92.05%on UNSW-NB15.The precision,recall,and F1-scores for all datasets consistently exceed 91%.Furthermore,the communication overhead was reduced to 85 MB for NSL-KDD,105 MB for BoT-IoT,and 95 MB for UNSW-NB15—substantially lower than traditional centralized IDS approaches.This study contributes to the domain by presenting a scalable,secure,and privacy-preserving solution tailored to the unique characteristics of IoT environments.The proposed framework is adaptable to dynamic and heterogeneous settings,with potential applications extending to other privacy-sensitive domains.Future work will focus on enhancing the system’s efficiency and addressing emerging challenges such as model poisoning attacks in federated environments.
基金supported by Shandong Provincial Colleges and Universities Youth Innovation Technol ogy Support Program(No.2023KJ092)Natural Science Foundation of Shandong Province(No.ZR2024ME060)Key Laboratory of Geological Safety of Coastal Urban Underground Space,Ministry of Natural Resources(No.BHKF2024Z06)。
文摘Prolonged cyclic water intrusion has progressively developed joints in the hydro-fluctuation belt,elevating the instability risk of reservoir bank slopes.To investigate its impact on joint shear damage evolution,joint samples were prepared using three representative roughness curves and subjected to direct shear testing following cyclic water intrusion.A shear damage constitutive model considering the coupling effect of cyclic water intrusion and load was developed based on macroscopic phenomenological damage mechanics and micro-statistical theory.Results indicate:(1)All critical shear mechanical parameters(including peak shear strength,shear stiffness,basic friction angle,and joint compressive strength)exhibit progressive deterioration with increasing water intrusion cycles;(2)Model validation through experimental curve comparisons confirms its reliability.The model demonstrates that intensified water intrusion cycles reduce key mechanical indices,inducing a brittle-to-ductile transition in joint surface deformation—a behavior consistent with experimental observations;(3)Damage under cyclic water intrusion and load coupling follows an S-shaped trend,divided into stabilization(water-dominated stage),development(load-dominated stage),and completion stages.The research provides valuable insights for stability studies,such as similar model experiments for reservoir bank slopes and other water-related projects.
基金supported by the National Natural Science Foundation of China under Grant 62371181the Project on Excellent Postgraduate Dissertation of Hohai University (422003482)the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029。
文摘With the rapid development of advanced networking and computing technologies such as the Internet of Things, network function virtualization, and 5G infrastructure, new development opportunities are emerging for Maritime Meteorological Sensor Networks(MMSNs). However, the increasing number of intelligent devices joining the MMSN poses a growing threat to network security. Current Artificial Intelligence(AI) intrusion detection techniques turn intrusion detection into a classification problem, where AI excels. These techniques assume sufficient high-quality instances for model construction, which is often unsatisfactory for real-world operation with limited attack instances and constantly evolving characteristics. This paper proposes an Adaptive Personalized Federated learning(APFed) framework that allows multiple MMSN owners to engage in collaborative training. By employing an adaptive personalized update and a shared global classifier, the adverse effects of imbalanced, Non-Independent and Identically Distributed(Non-IID) data are mitigated, enabling the intrusion detection model to possess personalized capabilities and good global generalization. In addition, a lightweight intrusion detection model is proposed to detect various attacks with an effective adaptation to the MMSN environment. Finally, extensive experiments on a classical network dataset show that the attack classification accuracy is improved by about 5% compared to most baselines in the global scenarios.
文摘The Internet of MedicalThings(IoMT)connects healthcare devices and sensors to the Internet,driving transformative advancements in healthcare delivery.However,expanding IoMT infrastructures face growing security threats,necessitating robust IntrusionDetection Systems(IDS).Maintaining the confidentiality of patient data is critical in AI-driven healthcare systems,especially when securing interconnected medical devices.This paper introduces SNN-IoMT(Stacked Neural Network Ensemble for IoMT Security),an AI-driven IDS framework designed to secure dynamic IoMT environments.Leveraging a stacked deep learning architecture combining Multi-Layer Perceptron(MLP),Convolutional Neural Networks(CNN),and Long Short-Term Memory(LSTM),the model optimizes data management and integration while ensuring system scalability and interoperability.Trained on the WUSTL-EHMS-2020 and IoT-Healthcare-Security datasets,SNN-IoMT surpasses existing IDS frameworks in accuracy,precision,and detecting novel threats.By addressing the primary challenges in AI-driven healthcare systems,including privacy,reliability,and ethical data management,our approach exemplifies the importance of AI to enhance security and trust in IoMT-enabled healthcare.
基金funded under the Horizon Europe AI4CYBER Projectwhich has received funding from the European Union’s Horizon Europe Research and Innovation Programme under grant agreement No.101070450.
文摘The growing sophistication of cyberthreats,among others the Distributed Denial of Service attacks,has exposed limitations in traditional rule-based Security Information and Event Management systems.While machine learning–based intrusion detection systems can capture complex network behaviours,their“black-box”nature often limits trust and actionable insight for security operators.This study introduces a novel approach that integrates Explainable Artificial Intelligence—xAI—with the Random Forest classifier to derive human-interpretable rules,thereby enhancing the detection of Distributed Denial of Service(DDoS)attacks.The proposed framework combines traditional static rule formulation with advanced xAI techniques—SHapley Additive exPlanations and Scoped Rules-to extract decision criteria from a fully trained model.The methodology was validated on two benchmark datasets,CICIDS2017 and WUSTL-IIOT-2021.Extracted rules were evaluated against conventional Security Information and Event Management Systems rules with metrics such as precision,recall,accuracy,balanced accuracy,and Matthews Correlation Coefficient.Experimental results demonstrate that xAI-derived rules consistently outperform traditional static rules.Notably,the most refined xAI-generated rule achieved near-perfect performance with significantly improved detection of DDoS traffic while maintaining high accuracy in classifying benign traffic across both datasets.
文摘The rapid increase in the number of Internet of Things(IoT)devices,coupled with a rise in sophisticated cyberattacks,demands robust intrusion detection systems.This study presents a holistic,intelligent intrusion detection system.It uses a combined method that integrates machine learning(ML)and deep learning(DL)techniques to improve the protection of contemporary information technology(IT)systems.Unlike traditional signature-based or singlemodel methods,this system integrates the strengths of ensemble learning for binary classification and deep learning for multi-class classification.This combination provides a more nuanced and adaptable defense.The research utilizes the NF-UQ-NIDS-v2 dataset,a recent,comprehensive benchmark for evaluating network intrusion detection systems(NIDS).Our methodological framework employs advanced artificial intelligence techniques.Specifically,we use ensemble learning algorithms(Random Forest,Gradient Boosting,AdaBoost,and XGBoost)for binary classification.Deep learning architectures are also employed to address the complexities of multi-class classification,allowing for fine-grained identification of intrusion types.To mitigate class imbalance,a common problem in multi-class intrusion detection that biases model performance,we use oversampling and data augmentation.These techniques ensure equitable class representation.The results demonstrate the efficacy of the proposed hybrid ML-DL system.It achieves significant improvements in intrusion detection accuracy and reliability.This research contributes substantively to cybersecurity by providing a more robust and adaptable intrusion detection solution.
基金supported in part by the National Natural Science Foundation of China(No.62001333)the Scientific Research Project of Education Department of Hubei Province(No.D20221702).
文摘Intrusion detection systems play a vital role in cyberspace security.In this study,a network intrusion detection method based on the feature selection algorithm(FSA)and a deep learning model is developed using a fusion of a recursive feature elimination(RFE)algorithm and a bidirectional gated recurrent unit(BGRU).Particularly,the RFE algorithm is employed to select features from high-dimensional data to reduce weak correlations between features and remove redundant features in the numerical feature space.Then,a neural network that combines the BGRU and multilayer perceptron(MLP)is adopted to extract deep intrusion behavior features.Finally,a support vector machine(SVM)classifier is used to classify intrusion behaviors.The proposed model is verified by experiments on the NSL-KDD dataset.The results indicate that the proposed model achieves a 90.25%accuracy and a 97.51%detection rate in binary classification and outperforms other machine learning and deep learning models in intrusion classification.The proposed method can provide new insight into network intrusion detection.
基金supported by the National Natural Science Foundation of China(62473341)Key Technologies R&D Program of Henan Province(242102211071,252102211086,252102210166).
文摘With the rapid development of the industrial Internet,the network security environment has become increasingly complex and variable.Intrusion detection,a core technology for ensuring the security of industrial control systems,faces the challenge of unbalanced data samples,particularly the low detection rates for minority class attack samples.Therefore,this paper proposes a data enhancement method for intrusion detection in the industrial Internet based on a Self-Attention Wasserstein Generative Adversarial Network(SA-WGAN)to address the low detection rates of minority class attack samples in unbalanced intrusion detection scenarios.The proposed method integrates a selfattention mechanism with a Wasserstein Generative Adversarial Network(WGAN).The self-attention mechanism automatically learns important features from the input data and assigns different weights to emphasize the key features related to intrusion behaviors,providing strong guidance for subsequent data generation.The WGAN generates new data samples through adversarial training to expand the original dataset.In the SA-WGAN framework,the WGAN directs the data generation process based on the key features extracted by the self-attention mechanism,ensuring that the generated samples exhibit both diversity and similarity to real data.Experimental results demonstrate that the SA-WGAN-based data enhancement method significantly improves detection performance for attack samples from minority classes,addresses issues of insufficient data and category imbalance,and enhances the generalization ability and overall performance of the intrusion detection model.
基金supported by Gansu Provincial Higher Education Teachers’Innovation Fund under Grant 2025A-124Key Research Project of Gansu University of Political Science and Law under Grant No.GZF2022XZD08Soft Science Special Project of Gansu Basic Research Plan under Grant No.22JR11RA106.
文摘In the complex environment of Wireless Sensor Networks(WSNs),various malicious attacks have emerged,among which internal attacks pose particularly severe security risks.These attacks seriously threaten network stability,data transmission reliability,and overall performance.To effectively address this issue and significantly improve intrusion detection speed,accuracy,and resistance to malicious attacks,this research designs a Three-level Intrusion Detection Model based on Dynamic Trust Evaluation(TIDM-DTE).This study conducts a detailed analysis of how different attack types impact node trust and establishes node models for data trust,communication trust,and energy consumption trust by focusing on characteristics such as continuous packet loss and energy consumption changes.By dynamically predicting node trust values using the grey Markov model,the model accurately and sensitively reflects changes in node trust levels during attacks.Additionally,DBSCAN(Density-Based Spatial Clustering of Applications with Noise)data noise monitoring technology is employed to quickly identify attacked nodes,while a trust recovery mechanism restores the trust of temporarily faulty nodes to reduce False Alarm Rate.Simulation results demonstrate that TIDM-DTE achieves high detection rates,fast detection speed,and low False Alarm Rate when identifying various network attacks,including selective forwarding attacks,Sybil attacks,switch attacks,and black hole attacks.TIDM-DTE significantly enhances network security,ensures secure and reliable data transmission,moderately improves network energy efficiency,reduces unnecessary energy consumption,and provides strong support for the stable operation of WSNs.Meanwhile,the research findings offer new ideas and methods for WSN security protection,possessing important theoretical significance and practical application value.