The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous c...The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks.Finding an optimal computational resource for task offloading and then executing efficiently is a critical issue to achieve a trade-off between energy consumption and transmission delay.In this network,the task processed at fog nodes reduces transmission delay.Still,it increases energy consumption,while routing tasks to the cloud server saves energy at the cost of higher communication delay.Moreover,the order in which offloaded tasks are executed affects the system’s efficiency.For instance,executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system.Therefore,an efficient strategy of optimal computation offloading and task scheduling is required for operational efficacy.In this paper,we introduced a multi-objective and enhanced version of Cheeta Optimizer(CO),namely(MoECO),to jointly optimize the computation offloading and task scheduling in cloud-fog networks to minimize two competing objectives,i.e.,energy consumption and communication delay.MoECO first assigns tasks to the optimal computational nodes and then the allocated tasks are scheduled for processing based on the task priority.The mathematical modelling of CO needs improvement in computation time and convergence speed.Therefore,MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader’s location.The adaptive step length operator is adjusted to diversify the solution and thus improves the exploration phase,i.e.,global search strategy.Consequently,this prevents the algorithm from getting trapped in the local optimal solution.Moreover,the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent Cheetah.This increases the exploitation capability of agents,i.e.,local search capability.Furthermore,MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize designated objectives.Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.展开更多
Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating ...Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating Software Defined Networks(SDN)for enhancing resource orchestration,task scheduling,and traffic management remains a relatively underexplored area with significant innovation potential.This paper provides a comprehensive review of existing mechanisms,categorizing resource provisioning approaches into static,dynamic,and user-centric models,while examining applications across domains such as IoT,healthcare,and autonomous systems.The survey highlights challenges such as scalability,interoperability,and security in managing dynamic and heterogeneous infrastructures.This exclusive research evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes.Furthermore,proposes future directions,including AI-driven optimization techniques and hybrid orchestrationmodels.By addressing these emerging opportunities,thiswork serves as a foundational reference for advancing resource management strategies in next-generation cloud,fog,and edge computing ecosystems.This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.展开更多
The emergence of different computing methods such as cloud-,fog-,and edge-based Internet of Things(IoT)systems has provided the opportunity to develop intelligent systems for disease detection.Compared to other machin...The emergence of different computing methods such as cloud-,fog-,and edge-based Internet of Things(IoT)systems has provided the opportunity to develop intelligent systems for disease detection.Compared to other machine learning models,deep learning models have gained more attention from the research community,as they have shown better results with a large volume of data compared to shallow learning.However,no comprehensive survey has been conducted on integrated IoT-and computing-based systems that deploy deep learning for disease detection.This study evaluated different machine learning and deep learning algorithms and their hybrid and optimized algorithms for IoT-based disease detection,using the most recent papers on IoT-based disease detection systems that include computing approaches,such as cloud,edge,and fog.Their analysis focused on an IoT deep learning architecture suitable for disease detection.It also recognizes the different factors that require the attention of researchers to develop better IoT disease detection systems.This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.展开更多
Artificial intelligence has experienced a significant boom with the emergence of agentic AI,where autonomous agents are increasingly replacing human intervention,enabling systems to perceive,reason,and act independent...Artificial intelligence has experienced a significant boom with the emergence of agentic AI,where autonomous agents are increasingly replacing human intervention,enabling systems to perceive,reason,and act independently to achieve specific goals.Despite its transformative potential,comprehensive information on agentic AI remains scarce in the literature.This paper provides the first comprehensive review of agentic AI,focusing on its evolution and three core aspects:patterns,types,and environments.The evolution of agentic AI is traced through five phases to the current era of multi-modal and collaborative agents,driven by advancements in reinforcement learning,neural networks,and large language models(LLMs).Five key patterns:tool use,reflection,ReAct,planning,and multi-agent collaboration(MAC)define how agentic AI systems interact and process tasks.These systems are categorized into seven categories,each tailored for specific operational styles and autonomy in decision making.The environments in which these agents operate are classified as static,dynamic,fully observable,partially observable,deterministic,stochastic,single-agent,and multiagent,emphasizing the impact of environmental complexity on agent behavior.Agentic AI has revolutionized systems through autonomous decision making and resource optimization,yet challenges persist in aligning AI with human values,ensuring adaptability,and addressing ethical constraints.Future research focuses on multidomain agents,human–AI collaboration,and self-improving systems.This work provides researchers,practitioners,and policymakers with a structured approach to understanding and advancing the rapidly evolving landscape of agentic AI systems.展开更多
Adversarial Reinforcement Learning(ARL)models for intelligent devices and Network Intrusion Detection Systems(NIDS)improve systemresilience against sophisticated cyber-attacks.As a core component of ARL,Adversarial Tr...Adversarial Reinforcement Learning(ARL)models for intelligent devices and Network Intrusion Detection Systems(NIDS)improve systemresilience against sophisticated cyber-attacks.As a core component of ARL,Adversarial Training(AT)enables NIDS agents to discover and prevent newattack paths by exposing them to competing examples,thereby increasing detection accuracy,reducing False Positives(FPs),and enhancing network security.To develop robust decision-making capabilities for real-world network disruptions and hostile activity,NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity.The accuracy and timeliness of the IDS were crucial to the network’s availability and reliability at this time.This paper analyzes ARL applications in NIDS,revealing State-of-The-Art(SoTA)methodology,issues,and future research prospects.This includes Reinforcement Machine Learning(RML)-based NIDS,which enables an agent to interact with the environment to achieve a goal,andDeep Reinforcement Learning(DRL)-based NIDS,which can solve complex decision-making problems.Additionally,this survey study addresses cybersecurity adversarial circumstances and their importance for ARL and NIDS.Architectural design,RL algorithms,feature representation,and training methodologies are examined in the ARL-NIDS study.This comprehensive study evaluates ARL for intelligent NIDS research,benefiting cybersecurity researchers,practitioners,and policymakers.The report promotes cybersecurity defense research and innovation.展开更多
The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)an...The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)and Deep Learning(DL)techniques have demonstrated promising early detection capabilities.However,their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints,high computational costs,and the costly time-intensive process of data labeling.To address these challenges,this study proposes a Federated Learning(FL)framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks.By employing Deep Neural Networks(DNNs)and decentralized model training,the approach reduces computational complexity while improving detection accuracy.The proposed model demonstrates robust performance,achieving accuracies of 94.34%,99.95%,and 87.94%on the publicly available kitsune,Bot-IoT,and UNSW-NB15 datasets,respectively.Furthermore,its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets,TON-IoT and IoT-23,using a Deep Federated Learning(DFL)framework,underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments.Experimental results demonstrate superior performance over existing methods,establishing the proposed framework as an efficient and scalable solution for IoT security.展开更多
Artificial Intelligence(AI)is changing healthcare by helping with diagnosis.However,for doctors to trust AI tools,they need to be both accurate and easy to understand.In this study,we created a new machine learning sy...Artificial Intelligence(AI)is changing healthcare by helping with diagnosis.However,for doctors to trust AI tools,they need to be both accurate and easy to understand.In this study,we created a new machine learning system for the early detection of Autism Spectrum Disorder(ASD)in children.Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning.For this,we combined several different models,including Random Forest,XGBoost,and Neural Networks,into a single,more powerful framework.We used two different types of datasets:(i)a standard behavioral dataset and(ii)a more complex multimodal dataset with images,audio,and physiological information.The datasets were carefully preprocessed for missing values,redundant features,and dataset imbalance to ensure fair learning.The results outperformed the state-of-the-art with a Regularized Neural Network,achieving 97.6%accuracy on behavioral data.Whereas,on the multimodal data,the accuracy is 98.2%.Other models also did well with accuracies consistently above 96%.We also used SHAP and LIME on a behavioral dataset for models’explainability.展开更多
Colorectal cancer is the third most diagnosed cancer worldwide,and immune checkpoint inhibitors have shown promising therapeutic outcomes in selected patient groups.This study performed a comprehensive analysis of mul...Colorectal cancer is the third most diagnosed cancer worldwide,and immune checkpoint inhibitors have shown promising therapeutic outcomes in selected patient groups.This study performed a comprehensive analysis of multi-omics data from The Cancer Genome Atlas colorectal adenocarcinoma cohort(TCGA-COADREAD),accessed through cBioPortal,to develop machine learning models for predicting progression-free survival(PFS)following immunotherapy.The dataset included clinical variables,genomic alterations in Kirsten Rat Sarcoma Viral Oncogene Homolog(KRAS),B-Raf Proto-Oncogene(BRAF),and Neuroblastoma RAS Viral Oncogene Homolog(NRAS),microsatellite instability(MSI)status,tumor mutation burden(TMB),and expression of immune checkpoint genes.Kaplan–Meier analysis showed that KRAS mutations were significantly associated with reduced PFS,while BRAF and NRAS mutations had no significant impact.MSI-high tumors exhibited elevated TMB and increased immune checkpoint expression,reflecting their immunologically active phenotype.We developed both survival and classification models,with the Extra Trees classifier achieving the best performance(accuracy=0.86,precision=0.67,recall=0.70,F1-score=0.68,AUC=0.84).These findings highlight the potential of combining genomic and immune biomarkers with machine learning to improve patient stratification and guide personalized immunotherapy decisions.An interactive web application was also developed to enable clinicians to input patient-specific molecular and clinical data and visualize individualized PFS predictions,supporting timely,data-driven treatment planning.展开更多
Recent studies indicate that millions of individuals suffer from renal diseases,with renal carcinoma,a type of kidney cancer,emerging as both a chronic illness and a significant cause of mortality.Magnetic Resonance I...Recent studies indicate that millions of individuals suffer from renal diseases,with renal carcinoma,a type of kidney cancer,emerging as both a chronic illness and a significant cause of mortality.Magnetic Resonance Imaging(MRI)and Computed Tomography(CT)have become essential tools for diagnosing and assessing kidney disorders.However,accurate analysis of thesemedical images is critical for detecting and evaluating tumor severity.This study introduces an integrated hybrid framework that combines three complementary deep learning models for kidney tumor segmentation from MRI images.The proposed framework fuses a customized U-Net and Mask R-CNN using a weighted scheme to achieve semantic and instance-level segmentation.The fused outputs are further refined through edge detection using Stochastic FeatureMapping Neural Networks(SFMNN),while volumetric consistency is ensured through Improved Mini-Batch K-Means(IMBKM)clustering integrated with an Encoder-Decoder Convolutional Neural Network(EDCNN).The outputs of these three stages are combined through a weighted fusion mechanism,with optimal weights determined empirically.Experiments on MRI scans from the TCGA-KIRC dataset demonstrate that the proposed hybrid framework significantly outperforms standalone models,achieving a Dice Score of 92.5%,an IoU of 87.8%,a Precision of 93.1%,a Recall of 90.8%,and a Hausdorff Distance of 2.8 mm.These findings validate that the weighted integration of complementary architectures effectively overcomes key limitations in kidney tumor segmentation,leading to improved diagnostic accuracy and robustness in medical image analysis.展开更多
The convergence of Software Defined Networking(SDN)in Internet of Vehicles(IoV)enables a flexible,programmable,and globally visible network control architecture across Road Side Units(RSUs),cloud servers,and automobil...The convergence of Software Defined Networking(SDN)in Internet of Vehicles(IoV)enables a flexible,programmable,and globally visible network control architecture across Road Side Units(RSUs),cloud servers,and automobiles.While this integration enhances scalability and safety,it also raises sophisticated cyberthreats,particularly Distributed Denial of Service(DDoS)attacks.Traditional rule-based anomaly detection methods often struggle to detectmodern low-and-slowDDoS patterns,thereby leading to higher false positives.To this end,this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV(SDN-IoV).The hybrid framework utilizes a Residual Network(ResNet)to capture spatial correlations and a Bi-Long Short-Term Memory(BiLSTM)to capture both forward and backward temporal dependencies in high-dimensional input patterns.To ensure transparency and trustworthiness,themodel integrates the Explainable AI(XAI)technique,i.e.,SHapley Additive exPlanations(SHAP).SHAP highlights the contribution of each feature during the decision-making process,facilitating security analysts to understand the rationale behind the attack classification decision.The SDN-IoV environment is created in Mininet-WiFi and SUMO,and the hybrid model is trained on the CICDDoS2019 security dataset.The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared to similar baseline methods.展开更多
Objectives:Decisions regarding CT after nCCRT for locally advanced rectal cancer(LARC)are challenging due to limited evidence guiding treatment.This study aimed to(i)evaluate the predictive performance of machine lear...Objectives:Decisions regarding CT after nCCRT for locally advanced rectal cancer(LARC)are challenging due to limited evidence guiding treatment.This study aimed to(i)evaluate the predictive performance of machine learning(ML)models in patients treated with neoadjuvant concurrent chemoradiotherapy(nCCRT)alone vs.those receiving nCCRT plus chemotherapy(CT),(ii)identify features associated with treatment improvement,and(iii)derive ML-based thresholds for treatment response.Methods:This retrospective study included 409 patients with LARC treated at three affiliated hospitals of Taipei Medical University.Patients were categorised into two groups:nCCRT alone followed by surgery(n=182)and nCCRT plus additional CT(n=227).Thirty-four baseline demographic,tumor,and laboratory variables were analysed.Four ML algorithms(K-Star,Random Forest,Multilayer Perceptron,and Random Committee)were evaluated,while five feature-ranking algorithms identified influential attributes among improved patients across both treatments.Decision Stump and AdaBoostM1 were applied to derive threshold-based patterns.Results:K-Star achieved the highest accuracy for nCCRT alone(80.8%;AUC=0.89),while Random Committee performed best for nCCRT plus CT(77.3%;AUC=0.84).Clinical N stage(cN)ranked highest,followed by Sodium(Na),Glutamic pyruvic transaminase,estimated glomerular filtration rate,body weight,red blood cell count,mean corpuscular hemoglobin concentration,and blood urea nitrogen.Threshold patterns suggested that CT-related improvement aligned with higher lymphocyte percentage and lower platelet distribution width,whereas nCCRT-only improvement aligned with elevated eGFR,GPT,and cN=2.Conclusions:ML-based analysis identified key predictors and demonstrated good model performance,supporting individualised post-nCCRT chemotherapy decisions.展开更多
In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises e...In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises essential components such as base stations,edge servers,and numerous IIoT devices characterized by limited energy and computing capacities.The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption.The system operates in discrete time slots and employs a quasi-static approach,with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context.This study makes valuable contributions to the field by enhancing the understanding of resourceefficient management and task allocation,particularly relevant in real-time industrial applications.Experimental results indicate that our proposed algorithmsignificantly outperforms existing approaches,reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Qn O.Moreover,the algorithmeffectively balances complexity and network performance,as demonstratedwhen reducing the number of devices in each group(Ng)from 200 to 50,resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption.This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.展开更多
Tomato leaf diseases significantly reduce crop yield;therefore,early and accurate disease detection is required.Traditional detection methods are laborious and error-prone,particularly in large-scale farms,whereas exi...Tomato leaf diseases significantly reduce crop yield;therefore,early and accurate disease detection is required.Traditional detection methods are laborious and error-prone,particularly in large-scale farms,whereas existing hybrid deep learning models often face computational inefficiencies and poor generalization over diverse environmental and disease conditions.This study presents a unified U-Net-Vision Mamba Model with Hierarchical Bottleneck AttentionMechanism(U-net-Vim-HBAM),which integrates U-Net’s high-resolution segmentation,Vision Mamba’s efficient contextual processing,and a Hierarchical Bottleneck Attention Mechanism to address the challenges of disease detection accuracy,computational complexity,and efficiency in existing models.The model was trained on the Tomato Leaves and PlantVillage combined datasets from Kaggle and achieved 98.63% accuracy,98.24% precision,96.41% recall,and 97.31%F1 score,outperforming baselinemodels.Simulation tests demonstrated the model’s compatibility across devices with computational efficacy,ensuring its potential for integration into real-time mobile agricultural applications.The model’s adaptability to diverse datasets and conditions suggests that it is a versatile and high-precision instrument for disease management in agriculture,supporting sustainable agricultural practices.This offers a promising solution for crop health management and contributes to food security.展开更多
Automatic Dependent Surveillance-Broadcast(ADS-B)technology,with its open signal sharing,faces substantial security risks from false signals and spoofing attacks when broadcasting Unmanned Aerial Vehicle(UAV)informati...Automatic Dependent Surveillance-Broadcast(ADS-B)technology,with its open signal sharing,faces substantial security risks from false signals and spoofing attacks when broadcasting Unmanned Aerial Vehicle(UAV)information.This paper proposes a security position verification technique based on Multilateration(MLAT)to detect false signals,ensuring UAV safety and reliable airspace operations.First,the proposed method estimates the current position of the UAV by calculating the Time Difference of Arrival(TDOA),Time Sum of Arrival(TSOA),and Angle of Arrival(AOA)information.Then,this estimated position is compared with the ADS-B message to eliminate false UAV signals.Furthermore,a localization model based on TDOA/TSOA/AOA is established by utilizing reliable reference sources for base station time synchronization.Additionally,an improved Chan-Taylor algorithm is developed,incorporating the Constrained Weighted Least Squares(CWLS)method to initialize UAV position calculations.Finally,a false signal detection method is proposed to distinguish between true and false positioning targets.Numerical simulation results indicate that,at a positioning error threshold of 150 m,the improved Chan-Taylor algorithm based on TDOA/TSOA/AOA achieves 100%accuracy coverage,significantly enhancing localization precision.And the proposed false signal detection method achieves a detection accuracy rate of at least 90%within a 50-meter error range.展开更多
The successful penetration of government,corporate,and organizational IT systems by state and non-state actors deploying APT vectors continues at an alarming pace.Advanced Persistent Threat(APT)attacks continue to pos...The successful penetration of government,corporate,and organizational IT systems by state and non-state actors deploying APT vectors continues at an alarming pace.Advanced Persistent Threat(APT)attacks continue to pose significant challenges for organizations despite technological advancements in artificial intelligence(AI)-based defense mechanisms.While AI has enhanced organizational capabilities for deterrence,detection,and mitigation of APTs,the global escalation in reported incidents,particularly those successfully penetrating critical government infrastructure has heightened concerns among information technology(IT)security administrators and decision-makers.Literature review has identified the stealthy lateral movement(LM)of malware within the initially infected local area network(LAN)as a significant concern.However,current literature has yet to propose a viable approach for resource-efficient,real-time detection of APT malware lateral movement within the initially compromised LAN following perimeter breach.Researchers have suggested the nature of the dataset,optimal feature selection,and the choice of machine learning(ML)techniques as critical factors for detection.Hence,the objective of the research described here was to successfully demonstrate a simplified lightweight ML method for detecting the LM of APT vectors.While the nearest detection rate achieved in the LM domain within LAN was 99.89%,as reported in relevant studies,our approach surpassed it,with a detection rate of 99.95%for the modified random forest(RF)classifier for dataset 1.Additionally,our approach achieved a perfect 100%detection rate for the decision tree(DT)and RF classifiers with dataset 2,a milestone not previously reached in studies within this domain involving two distinct datasets.Using the ML life cycle methodology,we deployed K-nearest neighbor(KNN),support vector machine(SVM),DT,and RF on three relevant datasets to detect the LM of APTs at the affected LAN prior to data exfiltration/destruction.Feature engineering presented four critical APT LM intrusion detection(ID)indicators(features)across the three datasets,namely,the source port number,the destination port number,the packets,and the bytes.This study demonstrates the effectiveness of lightweight ML classifiers in detecting APT lateral movement after network perimeter breach.It contributes to the field by proposing a non-intrusive network detection method capable of identifying APT malware before data exfiltration,thus providing an additional layer of organizational defense.展开更多
The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and have been occupying an increasingly important role in real life.The development of the Internet of Things(I...The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and have been occupying an increasingly important role in real life.The development of the Internet of Things(IoT)relies on the support of base stations,which provide a solid foundation for achieving a more intelligent way of living.In a specific area,achieving higher signal coverage with fewer base stations has become an urgent problem.Therefore,this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization(EPSO)algorithm based on collective prediction,referred to herein as ECPPSO.Introducing a new strategy called neighbor-based evolution prediction(NEP)addresses the issue of premature convergence often encountered by PSO.ECPPSO also employs a strengthening evolution(SE)strategy to enhance the algorithm’s global search capability and efficiency,ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems.To better adapt to the actual communication needs of base stations,this article conducts simulation experiments by changing the number of base stations.The experimental results demonstrate thatunder the conditionof 50 ormore base stations,ECPPSOconsistently achieves the best coverage rate exceeding 95%,peaking at 99.4400%when the number of base stations reaches 80.These results validate the optimization capability of the ECPPSO algorithm,proving its feasibility and effectiveness.Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.展开更多
Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify sp...Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.展开更多
Kubernetes has become the dominant container orchestration platform,withwidespread adoption across industries.However,its default pod-to-pod communicationmechanism introduces security vulnerabilities,particularly IP s...Kubernetes has become the dominant container orchestration platform,withwidespread adoption across industries.However,its default pod-to-pod communicationmechanism introduces security vulnerabilities,particularly IP spoofing attacks.Attackers can exploit this weakness to impersonate legitimate pods,enabling unauthorized access,lateral movement,and large-scale Distributed Denial of Service(DDoS)attacks.Existing security mechanisms such as network policies and intrusion detection systems introduce latency and performance overhead,making them less effective in dynamic Kubernetes environments.This research presents PodCA,an eBPF-based security framework designed to detect and prevent IP spoofing in real time while minimizing performance impact.PodCA integrates with Kubernetes’Container Network Interface(CNI)and uses eBPF to monitor and validate packet metadata at the kernel level.It maintains a container network mapping table that tracks pod IP assignments,validates packet legitimacy before forwarding,and ensures network integrity.If an attack is detected,PodCA automatically blocks spoofed packets and,in cases of repeated attempts,terminates compromised pods to prevent further exploitation.Experimental evaluation on an AWS Kubernetes cluster demonstrates that PodCA detects and prevents spoofed packets with 100%accuracy.Additionally,resource consumption analysis reveals minimal overhead,with a CPU increase of only 2–3%per node and memory usage rising by 40–60 MB.These results highlight the effectiveness of eBPF in securing Kubernetes environments with low overhead,making it a scalable and efficient security solution for containerized applications.展开更多
To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The...To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals.The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm,which facilitates saving the execution time of the publishing process.The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm,which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data.Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time,the availability of published data,and the execution efficiency of data publishing methods.展开更多
The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author.After further discussions and a thorough review of the research contributions,it was agreed that his significan...The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author.After further discussions and a thorough review of the research contributions,it was agreed that his significant contributions to the foundational aspects of the research warranted recognition,and he has now been added as a co-author.展开更多
基金appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2025R384)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks.Finding an optimal computational resource for task offloading and then executing efficiently is a critical issue to achieve a trade-off between energy consumption and transmission delay.In this network,the task processed at fog nodes reduces transmission delay.Still,it increases energy consumption,while routing tasks to the cloud server saves energy at the cost of higher communication delay.Moreover,the order in which offloaded tasks are executed affects the system’s efficiency.For instance,executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system.Therefore,an efficient strategy of optimal computation offloading and task scheduling is required for operational efficacy.In this paper,we introduced a multi-objective and enhanced version of Cheeta Optimizer(CO),namely(MoECO),to jointly optimize the computation offloading and task scheduling in cloud-fog networks to minimize two competing objectives,i.e.,energy consumption and communication delay.MoECO first assigns tasks to the optimal computational nodes and then the allocated tasks are scheduled for processing based on the task priority.The mathematical modelling of CO needs improvement in computation time and convergence speed.Therefore,MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader’s location.The adaptive step length operator is adjusted to diversify the solution and thus improves the exploration phase,i.e.,global search strategy.Consequently,this prevents the algorithm from getting trapped in the local optimal solution.Moreover,the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent Cheetah.This increases the exploitation capability of agents,i.e.,local search capability.Furthermore,MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize designated objectives.Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.
文摘Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating Software Defined Networks(SDN)for enhancing resource orchestration,task scheduling,and traffic management remains a relatively underexplored area with significant innovation potential.This paper provides a comprehensive review of existing mechanisms,categorizing resource provisioning approaches into static,dynamic,and user-centric models,while examining applications across domains such as IoT,healthcare,and autonomous systems.The survey highlights challenges such as scalability,interoperability,and security in managing dynamic and heterogeneous infrastructures.This exclusive research evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes.Furthermore,proposes future directions,including AI-driven optimization techniques and hybrid orchestrationmodels.By addressing these emerging opportunities,thiswork serves as a foundational reference for advancing resource management strategies in next-generation cloud,fog,and edge computing ecosystems.This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.
文摘The emergence of different computing methods such as cloud-,fog-,and edge-based Internet of Things(IoT)systems has provided the opportunity to develop intelligent systems for disease detection.Compared to other machine learning models,deep learning models have gained more attention from the research community,as they have shown better results with a large volume of data compared to shallow learning.However,no comprehensive survey has been conducted on integrated IoT-and computing-based systems that deploy deep learning for disease detection.This study evaluated different machine learning and deep learning algorithms and their hybrid and optimized algorithms for IoT-based disease detection,using the most recent papers on IoT-based disease detection systems that include computing approaches,such as cloud,edge,and fog.Their analysis focused on an IoT deep learning architecture suitable for disease detection.It also recognizes the different factors that require the attention of researchers to develop better IoT disease detection systems.This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
文摘Artificial intelligence has experienced a significant boom with the emergence of agentic AI,where autonomous agents are increasingly replacing human intervention,enabling systems to perceive,reason,and act independently to achieve specific goals.Despite its transformative potential,comprehensive information on agentic AI remains scarce in the literature.This paper provides the first comprehensive review of agentic AI,focusing on its evolution and three core aspects:patterns,types,and environments.The evolution of agentic AI is traced through five phases to the current era of multi-modal and collaborative agents,driven by advancements in reinforcement learning,neural networks,and large language models(LLMs).Five key patterns:tool use,reflection,ReAct,planning,and multi-agent collaboration(MAC)define how agentic AI systems interact and process tasks.These systems are categorized into seven categories,each tailored for specific operational styles and autonomy in decision making.The environments in which these agents operate are classified as static,dynamic,fully observable,partially observable,deterministic,stochastic,single-agent,and multiagent,emphasizing the impact of environmental complexity on agent behavior.Agentic AI has revolutionized systems through autonomous decision making and resource optimization,yet challenges persist in aligning AI with human values,ensuring adaptability,and addressing ethical constraints.Future research focuses on multidomain agents,human–AI collaboration,and self-improving systems.This work provides researchers,practitioners,and policymakers with a structured approach to understanding and advancing the rapidly evolving landscape of agentic AI systems.
文摘Adversarial Reinforcement Learning(ARL)models for intelligent devices and Network Intrusion Detection Systems(NIDS)improve systemresilience against sophisticated cyber-attacks.As a core component of ARL,Adversarial Training(AT)enables NIDS agents to discover and prevent newattack paths by exposing them to competing examples,thereby increasing detection accuracy,reducing False Positives(FPs),and enhancing network security.To develop robust decision-making capabilities for real-world network disruptions and hostile activity,NIDS agents are trained in adversarial scenarios to monitor the current state and notify management of any abnormal or malicious activity.The accuracy and timeliness of the IDS were crucial to the network’s availability and reliability at this time.This paper analyzes ARL applications in NIDS,revealing State-of-The-Art(SoTA)methodology,issues,and future research prospects.This includes Reinforcement Machine Learning(RML)-based NIDS,which enables an agent to interact with the environment to achieve a goal,andDeep Reinforcement Learning(DRL)-based NIDS,which can solve complex decision-making problems.Additionally,this survey study addresses cybersecurity adversarial circumstances and their importance for ARL and NIDS.Architectural design,RL algorithms,feature representation,and training methodologies are examined in the ARL-NIDS study.This comprehensive study evaluates ARL for intelligent NIDS research,benefiting cybersecurity researchers,practitioners,and policymakers.The report promotes cybersecurity defense research and innovation.
基金supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2025R97)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The exponential growth of the Internet of Things(IoT)has introduced significant security challenges,with zero-day attacks emerging as one of the most critical and challenging threats.Traditional Machine Learning(ML)and Deep Learning(DL)techniques have demonstrated promising early detection capabilities.However,their effectiveness is limited when handling the vast volumes of IoT-generated data due to scalability constraints,high computational costs,and the costly time-intensive process of data labeling.To address these challenges,this study proposes a Federated Learning(FL)framework that leverages collaborative and hybrid supervised learning to enhance cyber threat detection in IoT networks.By employing Deep Neural Networks(DNNs)and decentralized model training,the approach reduces computational complexity while improving detection accuracy.The proposed model demonstrates robust performance,achieving accuracies of 94.34%,99.95%,and 87.94%on the publicly available kitsune,Bot-IoT,and UNSW-NB15 datasets,respectively.Furthermore,its ability to detect zero-day attacks is validated through evaluations on two additional benchmark datasets,TON-IoT and IoT-23,using a Deep Federated Learning(DFL)framework,underscoring the generalization and effectiveness of the model in heterogeneous and decentralized IoT environments.Experimental results demonstrate superior performance over existing methods,establishing the proposed framework as an efficient and scalable solution for IoT security.
基金the King Salman center for Disability Research for funding this work through Research Group No.KSRG-2024-050.
文摘Artificial Intelligence(AI)is changing healthcare by helping with diagnosis.However,for doctors to trust AI tools,they need to be both accurate and easy to understand.In this study,we created a new machine learning system for the early detection of Autism Spectrum Disorder(ASD)in children.Our main goal was to build a model that is not only good at predicting ASD but also clear in its reasoning.For this,we combined several different models,including Random Forest,XGBoost,and Neural Networks,into a single,more powerful framework.We used two different types of datasets:(i)a standard behavioral dataset and(ii)a more complex multimodal dataset with images,audio,and physiological information.The datasets were carefully preprocessed for missing values,redundant features,and dataset imbalance to ensure fair learning.The results outperformed the state-of-the-art with a Regularized Neural Network,achieving 97.6%accuracy on behavioral data.Whereas,on the multimodal data,the accuracy is 98.2%.Other models also did well with accuracies consistently above 96%.We also used SHAP and LIME on a behavioral dataset for models’explainability.
基金funded by the Research,Development,and Innovation Authority(RDIA)—Kingdom of Saudi Arabia(Grant No.13292-psu-2023-PSNU-R-3-1-EF-).
文摘Colorectal cancer is the third most diagnosed cancer worldwide,and immune checkpoint inhibitors have shown promising therapeutic outcomes in selected patient groups.This study performed a comprehensive analysis of multi-omics data from The Cancer Genome Atlas colorectal adenocarcinoma cohort(TCGA-COADREAD),accessed through cBioPortal,to develop machine learning models for predicting progression-free survival(PFS)following immunotherapy.The dataset included clinical variables,genomic alterations in Kirsten Rat Sarcoma Viral Oncogene Homolog(KRAS),B-Raf Proto-Oncogene(BRAF),and Neuroblastoma RAS Viral Oncogene Homolog(NRAS),microsatellite instability(MSI)status,tumor mutation burden(TMB),and expression of immune checkpoint genes.Kaplan–Meier analysis showed that KRAS mutations were significantly associated with reduced PFS,while BRAF and NRAS mutations had no significant impact.MSI-high tumors exhibited elevated TMB and increased immune checkpoint expression,reflecting their immunologically active phenotype.We developed both survival and classification models,with the Extra Trees classifier achieving the best performance(accuracy=0.86,precision=0.67,recall=0.70,F1-score=0.68,AUC=0.84).These findings highlight the potential of combining genomic and immune biomarkers with machine learning to improve patient stratification and guide personalized immunotherapy decisions.An interactive web application was also developed to enable clinicians to input patient-specific molecular and clinical data and visualize individualized PFS predictions,supporting timely,data-driven treatment planning.
基金funded by the Ongoing Research Funding Program-Research Chairs(ORF-RC-2025-2400),King Saud University,Riyadh,Saudi Arabia。
文摘Recent studies indicate that millions of individuals suffer from renal diseases,with renal carcinoma,a type of kidney cancer,emerging as both a chronic illness and a significant cause of mortality.Magnetic Resonance Imaging(MRI)and Computed Tomography(CT)have become essential tools for diagnosing and assessing kidney disorders.However,accurate analysis of thesemedical images is critical for detecting and evaluating tumor severity.This study introduces an integrated hybrid framework that combines three complementary deep learning models for kidney tumor segmentation from MRI images.The proposed framework fuses a customized U-Net and Mask R-CNN using a weighted scheme to achieve semantic and instance-level segmentation.The fused outputs are further refined through edge detection using Stochastic FeatureMapping Neural Networks(SFMNN),while volumetric consistency is ensured through Improved Mini-Batch K-Means(IMBKM)clustering integrated with an Encoder-Decoder Convolutional Neural Network(EDCNN).The outputs of these three stages are combined through a weighted fusion mechanism,with optimal weights determined empirically.Experiments on MRI scans from the TCGA-KIRC dataset demonstrate that the proposed hybrid framework significantly outperforms standalone models,achieving a Dice Score of 92.5%,an IoU of 87.8%,a Precision of 93.1%,a Recall of 90.8%,and a Hausdorff Distance of 2.8 mm.These findings validate that the weighted integration of complementary architectures effectively overcomes key limitations in kidney tumor segmentation,leading to improved diagnostic accuracy and robustness in medical image analysis.
基金extend their appreciation to the Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2026R760)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors also extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through small group research under grant number RGP2/714/46.
文摘The convergence of Software Defined Networking(SDN)in Internet of Vehicles(IoV)enables a flexible,programmable,and globally visible network control architecture across Road Side Units(RSUs),cloud servers,and automobiles.While this integration enhances scalability and safety,it also raises sophisticated cyberthreats,particularly Distributed Denial of Service(DDoS)attacks.Traditional rule-based anomaly detection methods often struggle to detectmodern low-and-slowDDoS patterns,thereby leading to higher false positives.To this end,this study proposes an explainable hybrid framework to detect DDoS attacks in SDN-enabled IoV(SDN-IoV).The hybrid framework utilizes a Residual Network(ResNet)to capture spatial correlations and a Bi-Long Short-Term Memory(BiLSTM)to capture both forward and backward temporal dependencies in high-dimensional input patterns.To ensure transparency and trustworthiness,themodel integrates the Explainable AI(XAI)technique,i.e.,SHapley Additive exPlanations(SHAP).SHAP highlights the contribution of each feature during the decision-making process,facilitating security analysts to understand the rationale behind the attack classification decision.The SDN-IoV environment is created in Mininet-WiFi and SUMO,and the hybrid model is trained on the CICDDoS2019 security dataset.The simulation results reveal the efficacy of the proposed model in terms of standard performance metrics compared to similar baseline methods.
基金funded by the Australian National Health and Medical Research Council(Grant No.GNT1192469)supported by the Research Technology Services at the University of New South Wales Sydney,Google Cloud Research(Award No.GCP19980904)。
文摘Objectives:Decisions regarding CT after nCCRT for locally advanced rectal cancer(LARC)are challenging due to limited evidence guiding treatment.This study aimed to(i)evaluate the predictive performance of machine learning(ML)models in patients treated with neoadjuvant concurrent chemoradiotherapy(nCCRT)alone vs.those receiving nCCRT plus chemotherapy(CT),(ii)identify features associated with treatment improvement,and(iii)derive ML-based thresholds for treatment response.Methods:This retrospective study included 409 patients with LARC treated at three affiliated hospitals of Taipei Medical University.Patients were categorised into two groups:nCCRT alone followed by surgery(n=182)and nCCRT plus additional CT(n=227).Thirty-four baseline demographic,tumor,and laboratory variables were analysed.Four ML algorithms(K-Star,Random Forest,Multilayer Perceptron,and Random Committee)were evaluated,while five feature-ranking algorithms identified influential attributes among improved patients across both treatments.Decision Stump and AdaBoostM1 were applied to derive threshold-based patterns.Results:K-Star achieved the highest accuracy for nCCRT alone(80.8%;AUC=0.89),while Random Committee performed best for nCCRT plus CT(77.3%;AUC=0.84).Clinical N stage(cN)ranked highest,followed by Sodium(Na),Glutamic pyruvic transaminase,estimated glomerular filtration rate,body weight,red blood cell count,mean corpuscular hemoglobin concentration,and blood urea nitrogen.Threshold patterns suggested that CT-related improvement aligned with higher lymphocyte percentage and lower platelet distribution width,whereas nCCRT-only improvement aligned with elevated eGFR,GPT,and cN=2.Conclusions:ML-based analysis identified key predictors and demonstrated good model performance,supporting individualised post-nCCRT chemotherapy decisions.
基金the Deanship of Scientific Research at King Khalid University for funding this work through large group research project under Grant Number RGP2/474/44.
文摘In this paper,we present a comprehensive system model for Industrial Internet of Things(IIoT)networks empowered by Non-Orthogonal Multiple Access(NOMA)and Mobile Edge Computing(MEC)technologies.The network comprises essential components such as base stations,edge servers,and numerous IIoT devices characterized by limited energy and computing capacities.The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption.The system operates in discrete time slots and employs a quasi-static approach,with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context.This study makes valuable contributions to the field by enhancing the understanding of resourceefficient management and task allocation,particularly relevant in real-time industrial applications.Experimental results indicate that our proposed algorithmsignificantly outperforms existing approaches,reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA while achieving a 27.31% and 74.12% improvement in Qn O.Moreover,the algorithmeffectively balances complexity and network performance,as demonstratedwhen reducing the number of devices in each group(Ng)from 200 to 50,resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption.This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
文摘Tomato leaf diseases significantly reduce crop yield;therefore,early and accurate disease detection is required.Traditional detection methods are laborious and error-prone,particularly in large-scale farms,whereas existing hybrid deep learning models often face computational inefficiencies and poor generalization over diverse environmental and disease conditions.This study presents a unified U-Net-Vision Mamba Model with Hierarchical Bottleneck AttentionMechanism(U-net-Vim-HBAM),which integrates U-Net’s high-resolution segmentation,Vision Mamba’s efficient contextual processing,and a Hierarchical Bottleneck Attention Mechanism to address the challenges of disease detection accuracy,computational complexity,and efficiency in existing models.The model was trained on the Tomato Leaves and PlantVillage combined datasets from Kaggle and achieved 98.63% accuracy,98.24% precision,96.41% recall,and 97.31%F1 score,outperforming baselinemodels.Simulation tests demonstrated the model’s compatibility across devices with computational efficacy,ensuring its potential for integration into real-time mobile agricultural applications.The model’s adaptability to diverse datasets and conditions suggests that it is a versatile and high-precision instrument for disease management in agriculture,supporting sustainable agricultural practices.This offers a promising solution for crop health management and contributes to food security.
基金supported by the National Natural Science Foundation of China(Nos.U2441250,62301380,and 62231027)Natural Science Basic Research Program of Shaanxi,China(2024JC-JCQN-63)+3 种基金the Key Research and Development Program of Shaanxi,China(No.2023-YBGY-249)the Guangxi Key Research and Development Program,China(No.2022AB46002)the China Postdoctoral Science Foundation(No.2022M722504 and 2024T170696)the Innovation Capability Support Program of Shaanxi,China(No.2024RS-CXTD-01).
文摘Automatic Dependent Surveillance-Broadcast(ADS-B)technology,with its open signal sharing,faces substantial security risks from false signals and spoofing attacks when broadcasting Unmanned Aerial Vehicle(UAV)information.This paper proposes a security position verification technique based on Multilateration(MLAT)to detect false signals,ensuring UAV safety and reliable airspace operations.First,the proposed method estimates the current position of the UAV by calculating the Time Difference of Arrival(TDOA),Time Sum of Arrival(TSOA),and Angle of Arrival(AOA)information.Then,this estimated position is compared with the ADS-B message to eliminate false UAV signals.Furthermore,a localization model based on TDOA/TSOA/AOA is established by utilizing reliable reference sources for base station time synchronization.Additionally,an improved Chan-Taylor algorithm is developed,incorporating the Constrained Weighted Least Squares(CWLS)method to initialize UAV position calculations.Finally,a false signal detection method is proposed to distinguish between true and false positioning targets.Numerical simulation results indicate that,at a positioning error threshold of 150 m,the improved Chan-Taylor algorithm based on TDOA/TSOA/AOA achieves 100%accuracy coverage,significantly enhancing localization precision.And the proposed false signal detection method achieves a detection accuracy rate of at least 90%within a 50-meter error range.
基金Rabdan Academy for funding the research presented in the paper.
文摘The successful penetration of government,corporate,and organizational IT systems by state and non-state actors deploying APT vectors continues at an alarming pace.Advanced Persistent Threat(APT)attacks continue to pose significant challenges for organizations despite technological advancements in artificial intelligence(AI)-based defense mechanisms.While AI has enhanced organizational capabilities for deterrence,detection,and mitigation of APTs,the global escalation in reported incidents,particularly those successfully penetrating critical government infrastructure has heightened concerns among information technology(IT)security administrators and decision-makers.Literature review has identified the stealthy lateral movement(LM)of malware within the initially infected local area network(LAN)as a significant concern.However,current literature has yet to propose a viable approach for resource-efficient,real-time detection of APT malware lateral movement within the initially compromised LAN following perimeter breach.Researchers have suggested the nature of the dataset,optimal feature selection,and the choice of machine learning(ML)techniques as critical factors for detection.Hence,the objective of the research described here was to successfully demonstrate a simplified lightweight ML method for detecting the LM of APT vectors.While the nearest detection rate achieved in the LM domain within LAN was 99.89%,as reported in relevant studies,our approach surpassed it,with a detection rate of 99.95%for the modified random forest(RF)classifier for dataset 1.Additionally,our approach achieved a perfect 100%detection rate for the decision tree(DT)and RF classifiers with dataset 2,a milestone not previously reached in studies within this domain involving two distinct datasets.Using the ML life cycle methodology,we deployed K-nearest neighbor(KNN),support vector machine(SVM),DT,and RF on three relevant datasets to detect the LM of APTs at the affected LAN prior to data exfiltration/destruction.Feature engineering presented four critical APT LM intrusion detection(ID)indicators(features)across the three datasets,namely,the source port number,the destination port number,the packets,and the bytes.This study demonstrates the effectiveness of lightweight ML classifiers in detecting APT lateral movement after network perimeter breach.It contributes to the field by proposing a non-intrusive network detection method capable of identifying APT malware before data exfiltration,thus providing an additional layer of organizational defense.
基金supported by the National Natural Science Foundation of China(Nos.62272418,62102058)Basic Public Welfare Research Program of Zhejiang Province(No.LGG18E050011)the Major Open Project of Key Laboratory for Advanced Design and Intelligent Computing of the Ministry of Education under Grant ADIC2023ZD001,National Undergraduate Training Program on Innovation and Entrepreneurship(No.202410345054).
文摘The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and have been occupying an increasingly important role in real life.The development of the Internet of Things(IoT)relies on the support of base stations,which provide a solid foundation for achieving a more intelligent way of living.In a specific area,achieving higher signal coverage with fewer base stations has become an urgent problem.Therefore,this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization(EPSO)algorithm based on collective prediction,referred to herein as ECPPSO.Introducing a new strategy called neighbor-based evolution prediction(NEP)addresses the issue of premature convergence often encountered by PSO.ECPPSO also employs a strengthening evolution(SE)strategy to enhance the algorithm’s global search capability and efficiency,ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems.To better adapt to the actual communication needs of base stations,this article conducts simulation experiments by changing the number of base stations.The experimental results demonstrate thatunder the conditionof 50 ormore base stations,ECPPSOconsistently achieves the best coverage rate exceeding 95%,peaking at 99.4400%when the number of base stations reaches 80.These results validate the optimization capability of the ECPPSO algorithm,proving its feasibility and effectiveness.Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
基金the Deanship of Scientifc Research at King Khalid University for funding this work through large group Research Project under grant number RGP2/421/45supported via funding from Prince Sattam bin Abdulaziz University project number(PSAU/2024/R/1446)+1 种基金supported by theResearchers Supporting Project Number(UM-DSR-IG-2023-07)Almaarefa University,Riyadh,Saudi Arabia.supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408).
文摘Machine learning(ML)is increasingly applied for medical image processing with appropriate learning paradigms.These applications include analyzing images of various organs,such as the brain,lung,eye,etc.,to identify specific flaws/diseases for diagnosis.The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification.Most of the extracted image features are irrelevant and lead to an increase in computation time.Therefore,this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features.This process trains the learning paradigm using similarity and correlation-based features over different textural intensities and pixel distributions.The similarity between the pixels over the various distribution patterns with high indexes is recommended for disease diagnosis.Later,the correlation based on intensity and distribution is analyzed to improve the feature selection congruency.Therefore,the more congruent pixels are sorted in the descending order of the selection,which identifies better regions than the distribution.Now,the learning paradigm is trained using intensity and region-based similarity to maximize the chances of selection.Therefore,the probability of feature selection,regardless of the textures and medical image patterns,is improved.This process enhances the performance of ML applications for different medical image processing.The proposed method improves the accuracy,precision,and training rate by 13.19%,10.69%,and 11.06%,respectively,compared to other models for the selected dataset.The mean error and selection time is also reduced by 12.56%and 13.56%,respectively,compared to the same models and dataset.
基金partially supported by Asia Pacific University of Technology&Innovation(APU)Bukit Jalil,Kuala Lumpur,MalaysiaThe funding body had no role in the study design,data collection,analysis,interpretation,or writing of the manuscript.
文摘Kubernetes has become the dominant container orchestration platform,withwidespread adoption across industries.However,its default pod-to-pod communicationmechanism introduces security vulnerabilities,particularly IP spoofing attacks.Attackers can exploit this weakness to impersonate legitimate pods,enabling unauthorized access,lateral movement,and large-scale Distributed Denial of Service(DDoS)attacks.Existing security mechanisms such as network policies and intrusion detection systems introduce latency and performance overhead,making them less effective in dynamic Kubernetes environments.This research presents PodCA,an eBPF-based security framework designed to detect and prevent IP spoofing in real time while minimizing performance impact.PodCA integrates with Kubernetes’Container Network Interface(CNI)and uses eBPF to monitor and validate packet metadata at the kernel level.It maintains a container network mapping table that tracks pod IP assignments,validates packet legitimacy before forwarding,and ensures network integrity.If an attack is detected,PodCA automatically blocks spoofed packets and,in cases of repeated attempts,terminates compromised pods to prevent further exploitation.Experimental evaluation on an AWS Kubernetes cluster demonstrates that PodCA detects and prevents spoofed packets with 100%accuracy.Additionally,resource consumption analysis reveals minimal overhead,with a CPU increase of only 2–3%per node and memory usage rising by 40–60 MB.These results highlight the effectiveness of eBPF in securing Kubernetes environments with low overhead,making it a scalable and efficient security solution for containerized applications.
基金supported by National Nature Science Foundation of China(No.62361036)Nature Science Foundation of Gansu Province(No.22JR5RA279).
文摘To realize dynamic statistical publishing and protection of location-based data privacy,this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment.The PID control strategy is combined with the difference in data variation to realize the dynamic adjustment of the data publishing intervals.The spatial-temporal correlations of the adjacent snapshots are utilized to design the grid clustering and adjustment algorithm,which facilitates saving the execution time of the publishing process.The budget distribution and budget absorption strategies are improved to form the sliding window-based differential privacy statistical publishing algorithm,which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data.Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time,the availability of published data,and the execution efficiency of data publishing methods.
文摘The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author.After further discussions and a thorough review of the research contributions,it was agreed that his significant contributions to the foundational aspects of the research warranted recognition,and he has now been added as a co-author.