Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and offers essential guidance for addressing upcoming management challenges in SDN-enabled computing environments.
The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the different factors that require the attention of researchers to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
In this paper, we present a comprehensive system model for Industrial Internet of Things (IIoT) networks empowered by Non-Orthogonal Multiple Access (NOMA) and Mobile Edge Computing (MEC) technologies. The network comprises essential components such as base stations, edge servers, and numerous IIoT devices characterized by limited energy and computing capacities. The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption. The system operates in discrete time slots and employs a quasi-static approach, with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context. This study contributes to the field by enhancing the understanding of resource-efficient management and task allocation, which is particularly relevant in real-time industrial applications. Experimental results indicate that our proposed algorithm significantly outperforms existing approaches, reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA, respectively, while achieving a 27.31% and 74.12% improvement in QnO. Moreover, the algorithm effectively balances complexity and network performance, as demonstrated when reducing the number of devices in each group (Ng) from 200 to 50, resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption. This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
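To make the local-versus-offload energy trade-off concrete, the following is a minimal sketch of a task-partitioning decision. All numbers (cycle counts, transmit power, data rate, energy per cycle) are illustrative assumptions, not values or the algorithm from the paper:

```python
# Toy sketch of the local-vs-offload energy trade-off behind task partitioning.
# All parameters here are invented for illustration.

def local_energy(cycles, energy_per_cycle=1e-9):
    """Energy (J) to execute `cycles` CPU cycles on the device itself."""
    return cycles * energy_per_cycle

def offload_energy(bits, tx_power=0.1, rate=1e6):
    """Energy (J) to transmit `bits` to the edge server at `rate` bit/s."""
    return tx_power * bits / rate

def partition_energy(task_bits, task_cycles, frac):
    """Total energy when a fraction `frac` of the task runs locally."""
    return (local_energy(task_cycles * frac)
            + offload_energy(task_bits * (1 - frac)))

# Sweep the partition fraction in steps of 0.1 and keep the cheapest split.
best = min((partition_energy(8e6, 2e9, f / 10), f / 10) for f in range(11))
print(best)
```

For these particular numbers, transmission is far cheaper than local computation, so the sweep selects full offloading; a real scheme would also enforce the queueing delay constraint at each candidate split.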
Automatic Dependent Surveillance-Broadcast (ADS-B) technology, with its open signal sharing, faces substantial security risks from false signals and spoofing attacks when broadcasting Unmanned Aerial Vehicle (UAV) information. This paper proposes a security position verification technique based on Multilateration (MLAT) to detect false signals, ensuring UAV safety and reliable airspace operations. First, the proposed method estimates the current position of the UAV by calculating the Time Difference of Arrival (TDOA), Time Sum of Arrival (TSOA), and Angle of Arrival (AOA) information. This estimated position is then compared with the ADS-B message to eliminate false UAV signals. Furthermore, a localization model based on TDOA/TSOA/AOA is established by utilizing reliable reference sources for base station time synchronization. Additionally, an improved Chan-Taylor algorithm is developed, incorporating the Constrained Weighted Least Squares (CWLS) method to initialize UAV position calculations. Finally, a false signal detection method is proposed to distinguish between true and false positioning targets. Numerical simulation results indicate that, at a positioning error threshold of 150 m, the improved Chan-Taylor algorithm based on TDOA/TSOA/AOA achieves 100% accuracy coverage, significantly enhancing localization precision. The proposed false signal detection method achieves a detection accuracy rate of at least 90% within a 50-meter error range.
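The final verification step reduces to a distance test between the MLAT-estimated position and the position claimed in the ADS-B message. A minimal sketch of that check (coordinates are invented; the 150 m threshold echoes the abstract, but the actual detection logic in the paper is more involved):

```python
import math

# Sketch of the false-signal check: flag an ADS-B message as false when the
# multilaterated position and the reported position disagree by more than a
# threshold. All coordinates below are made-up example values in meters.

def is_false_signal(estimated, reported, threshold_m=150.0):
    dx = estimated[0] - reported[0]
    dy = estimated[1] - reported[1]
    dz = estimated[2] - reported[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) > threshold_m

genuine = is_false_signal((100.0, 200.0, 50.0), (120.0, 180.0, 55.0))
spoofed = is_false_signal((100.0, 200.0, 50.0), (900.0, 800.0, 50.0))
print(genuine, spoofed)  # False True
```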
The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and occupy an increasingly important role in daily life. The development of the Internet of Things (IoT) relies on the support of base stations, which provide a solid foundation for achieving a more intelligent way of living. Achieving higher signal coverage of a given area with fewer base stations has therefore become an urgent problem. This article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization (EPSO) algorithm based on collective prediction, referred to herein as ECPPSO. A new strategy called neighbor-based evolution prediction (NEP) addresses the premature convergence often encountered by PSO. ECPPSO also employs a strengthening evolution (SE) strategy to enhance the algorithm’s global search capability and efficiency, ensuring greater robustness and faster convergence when solving complex optimization problems. To better adapt to the actual communication needs of base stations, this article conducts simulation experiments with varying numbers of base stations. The experimental results demonstrate that, with 50 or more base stations, ECPPSO consistently achieves the best coverage rate, exceeding 95% and peaking at 99.44% when the number of base stations reaches 80. These results validate the optimization capability of the ECPPSO algorithm, proving its feasibility and effectiveness. Further ablation experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
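For readers unfamiliar with the baseline that ECPPSO extends, here is a bare-bones PSO loop on a one-dimensional toy objective. This is plain textbook PSO with common default coefficients; the paper's NEP and SE strategies are not reproduced:

```python
import random

# Minimal particle swarm optimization on a toy objective with its peak at
# x = 3, standing in for a coverage-rate function. Coefficients (inertia 0.7,
# cognitive/social weights 1.5) are common textbook defaults.

random.seed(0)

def objective(x):
    return -(x - 3.0) ** 2  # maximized at x = 3

n, iters = 20, 100
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                    # each particle's personal best position
gbest = max(pos, key=objective)   # swarm-wide best position

for _ in range(iters):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (0.7 * vel[i]
                  + 1.5 * r1 * (pbest[i] - pos[i])
                  + 1.5 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) > objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = max(pbest, key=objective)

print(round(gbest, 2))
```

On this smooth unimodal function the swarm converges quickly; the premature-convergence problem that NEP targets shows up on multimodal landscapes, where plain PSO can stall at a local optimum.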
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels across the various distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency. The most congruent pixels are sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. This improves the probability of feature selection regardless of the textures and medical image patterns, enhancing the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
Kubernetes has become the dominant container orchestration platform, with widespread adoption across industries. However, its default pod-to-pod communication mechanism introduces security vulnerabilities, particularly IP spoofing attacks. Attackers can exploit this weakness to impersonate legitimate pods, enabling unauthorized access, lateral movement, and large-scale Distributed Denial of Service (DDoS) attacks. Existing security mechanisms such as network policies and intrusion detection systems introduce latency and performance overhead, making them less effective in dynamic Kubernetes environments. This research presents PodCA, an eBPF-based security framework designed to detect and prevent IP spoofing in real time while minimizing performance impact. PodCA integrates with Kubernetes’ Container Network Interface (CNI) and uses eBPF to monitor and validate packet metadata at the kernel level. It maintains a container network mapping table that tracks pod IP assignments, validates packet legitimacy before forwarding, and ensures network integrity. If an attack is detected, PodCA automatically blocks spoofed packets and, in cases of repeated attempts, terminates compromised pods to prevent further exploitation. Experimental evaluation on an AWS Kubernetes cluster demonstrates that PodCA detects and prevents spoofed packets with 100% accuracy. Additionally, resource consumption analysis reveals minimal overhead, with a CPU increase of only 2–3% per node and memory usage rising by 40–60 MB. These results highlight the effectiveness of eBPF in securing Kubernetes environments with low overhead, making it a scalable and efficient security solution for containerized applications.
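The core of the mapping-table check can be sketched in plain Python (the real system implements this in eBPF at the kernel level): a table records which host-side interface each pod IP was assigned on, and a packet whose source IP does not match its ingress interface's record is treated as spoofed. The IPs and interface names below are invented:

```python
# User-space sketch of the container-network mapping check; PodCA itself does
# this in eBPF. The pod IPs and veth interface names are made-up examples.

pod_table = {
    "10.244.1.5": "veth-a",  # pod A's assigned IP and host-side interface
    "10.244.1.6": "veth-b",  # pod B
}

def verdict(src_ip, ingress_iface):
    """Return 'pass' for legitimate packets and 'drop' for spoofed ones."""
    if pod_table.get(src_ip) == ingress_iface:
        return "pass"
    return "drop"

print(verdict("10.244.1.5", "veth-a"))  # legitimate traffic from pod A
print(verdict("10.244.1.5", "veth-b"))  # pod B spoofing pod A's IP
```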
To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. A PID control strategy is combined with the difference in data variation to dynamically adjust the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which reduces the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing and privacy protection and improves the accuracy of published data. Experiments and analysis on large datasets of actual locations show that the privacy protection algorithm proposed in this paper is superior to other existing algorithms in terms of the accuracy of adaptive sampling time, the availability of published data, and the execution efficiency of data publishing methods.
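The building block beneath any such scheme is the Laplace mechanism: a released count is the true count plus Laplace noise scaled by sensitivity/ε. A minimal sketch (the paper's adaptive sampling, grid clustering, and sliding-window budget strategies are not reproduced; ε and the counts are illustrative):

```python
import math
import random

# Basic Laplace mechanism for epsilon-DP release of a location count.
# Epsilon = 1.0 and the count of 120 are arbitrary illustrative choices.

random.seed(42)

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def publish(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to sensitivity/epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon)

released = [publish(120, epsilon=1.0) for _ in range(10000)]
avg_error = sum(r - 120 for r in released) / len(released)
print(round(avg_error, 3))  # zero-mean noise: the average error is small
```

The sliding-window refinements in the paper decide *when* to spend this per-release budget, so that slowly changing statistics consume less of it.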
The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author. After further discussions and a thorough review of the research contributions, it was agreed that his significant contributions to the foundational aspects of the research warranted recognition, and he has now been added as a co-author.
This paper investigates the uplink spectral efficiency of distributed cell-free (CF) massive multiple-input multiple-output (mMIMO) networks with correlated Rayleigh fading channels under three different channel estimation schemes. Specifically, each access point (AP) first uses embedded pilots to estimate the channels of all users based on minimum mean-squared error (MMSE) estimation. Given the high computational cost of MMSE estimation, the low-complexity element-wise MMSE (EW-MMSE) channel estimator and the least-squares (LS) channel estimator, which requires no prior statistical information, are also analyzed. To reduce non-coherent and coherent interference during uplink payload data transmission, simple centralized decoding (SCD) and large-scale fading decoding (LSFD) are examined. Closed-form expressions for the uplink spectral efficiency (SE) using the MMSE, EW-MMSE, and LS estimators are then derived for maximum ratio (MR) combining under LSFD, where each AP may have any number of antennas. The sum-SE maximization problem with uplink power control is formulated. Since the maximization problem is non-convex and challenging, a block coordinate descent approach based on the weighted MMSE method is used to obtain a locally optimal solution. Numerical studies demonstrate that LSFD and efficient uplink power control can considerably increase the SE of distributed CF mMIMO networks.
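The LS-versus-MMSE trade-off can be seen in a scalar toy version of pilot-based estimation: observe y = √p·h + n and estimate h with and without prior statistics. The variances and pilot power below are arbitrary, and one real Gaussian dimension stands in for the complex channel:

```python
import random

# Scalar Monte Carlo comparison of LS and MMSE channel estimation from one
# noisy pilot observation y = sqrt(p)*h + n. Parameter values are illustrative.

random.seed(1)
p, beta, sigma2 = 1.0, 1.0, 0.5   # pilot power, channel variance, noise variance
trials = 20000
se_ls = se_mmse = 0.0

for _ in range(trials):
    h = random.gauss(0.0, beta ** 0.5)                    # true channel
    y = p ** 0.5 * h + random.gauss(0.0, sigma2 ** 0.5)   # pilot observation
    h_ls = y / p ** 0.5                                   # LS: no prior stats
    h_mmse = (p ** 0.5 * beta / (p * beta + sigma2)) * y  # MMSE shrinkage
    se_ls += (h_ls - h) ** 2
    se_mmse += (h_mmse - h) ** 2

mse_ls, mse_mmse = se_ls / trials, se_mmse / trials
print(round(mse_ls, 3), round(mse_mmse, 3))
```

The empirical MSEs land near the theoretical values σ²/p for LS and βσ²/(pβ + σ²) for MMSE, showing the gain from exploiting channel statistics; EW-MMSE sits between the two in both complexity and accuracy.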
Determining the optimal ceramic content of ceramics-in-polymer composite electrolytes and the appropriate stack pressure can effectively improve the interfacial contact of solid-state batteries (SSBs). Based on a contact mechanics model constructed with the conjugate gradient method, continuous convolution, and fast Fourier transform, this paper analyzes and compares the interfacial contact responses of polymers commonly used in SSBs, which provides the original training data for machine learning. A support vector regression model is established to predict the relationship between ceramic content and interfacial resistance. Bayesian optimization and K-fold cross-validation are introduced to find the optimal combination of hyperparameters, which accelerates the training process and improves the model’s accuracy. We establish the relationship between ceramic content, stack pressure, and interfacial resistance. The results can serve as a reference for the design of low-resistance composite electrolytes for solid-state batteries.
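The K-fold step generalizes beyond this paper: hold out each fold in turn, score every hyperparameter candidate on the held-out data, and pick the candidate with the lowest total validation error. A stdlib sketch on a trivial 1-D fit (the data, the ridge-style candidates, and the fitting routine are synthetic placeholders, not the paper's SVR/Bayesian-optimization setup):

```python
# K-fold cross-validation used to pick a regularization value for a trivial
# one-dimensional fit. Data and candidate values are invented for illustration.

def kfold_indices(n, k):
    """Yield (train, validation) index lists for k contiguous folds."""
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in val]
        yield train, val

xs = [float(i) for i in range(20)]
ys = [2.0 * x + 1.0 for x in xs]   # noiseless line y = 2x + 1

def cv_error(train, val, ridge):
    """Fit a ridge-damped slope on `train`, return squared error on `val`."""
    mx = sum(xs[i] for i in train) / len(train)
    my = sum(ys[i] for i in train) / len(train)
    num = sum((xs[i] - mx) * (ys[i] - my) for i in train)
    den = sum((xs[i] - mx) ** 2 for i in train) + ridge
    slope = num / den
    return sum((my + slope * (xs[i] - mx) - ys[i]) ** 2 for i in val)

scores = {r: sum(cv_error(tr, va, r) for tr, va in kfold_indices(20, 5))
          for r in (0.0, 1.0, 10.0)}
best_ridge = min(scores, key=scores.get)
print(best_ridge)
```

On noiseless data the unregularized fit wins, as expected; with noisy data the cross-validated scores would favor a nonzero ridge value instead.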
The dynamic and interconnected nature of construction projects requires a comprehensive understanding of complexity during pre-construction. Traditional tools such as Gantt charts, CPM, and PERT often overlook uncertainties. This study identifies 20 complexity factors through expert interviews and literature, categorising them into six groups. The Analytical Hierarchy Process evaluated the significance of the different factors, establishing their corresponding weights to enhance adaptive project scheduling. A system dynamics (SD) model is developed and tested to evaluate the dynamic behaviour of the identified complexity factors. The model simulates the impact of complexity on total project duration (TPD), revealing significant deviations from initial deterministic estimates. Data collection and analysis included reliability tests, such as normality checks and Cronbach’s alpha, to validate the model’s components and expert feedback. Sensitivity analysis confirmed a positive relationship between complexity and project duration, with higher complexity levels resulting in increased TPD. This relationship highlights the inadequacy of static planning approaches and underscores the importance of addressing complexity dynamically. The study provides a framework for enhancing planning systems through system dynamics and recommends expanding the model to ensure broader applicability in diverse construction projects.
The proliferation of smart communities in Foshan has led to increasingly diverse and prevalent cybersecurity risks for residents. This trend has rendered traditional cybersecurity education models inadequate in addressing the challenges of the digital era. Guided by the theory of collaborative governance and the framework of digital transformation, this paper examines the multi-stakeholder collaborative mechanism involving the government, businesses, community organizations, universities, and residents. It subsequently proposes a series of implementation strategies, such as digitizing educational content, intellectualizing platforms, contextualizing delivery methods, and refining management precision. The study demonstrates that this model enables effective resource integration, improves educational precision, and boosts resident engagement. It represents a fundamental shift from unilateral dissemination to multi-party interaction and from decentralized management to collaborative synergy, offering a replicable “Foshan Model” for digital governance at the community level.
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition that causes multiple challenges in behavioral and communication activities. In this research, security measures for ASD-related medical data are integrated responsibly and effectively to develop the Mobile Neuron Attention Stage-by-Stage Network (MNASNet) model, which combines a Mobile Network (MobileNet) with Neuron Attention Stage-by-Stage processing. The steps followed to detect ASD with privacy-preserved data are data normalization, data augmentation, and K-Anonymization. The clinical data of individuals are first preprocessed using Z-score normalization. Then, data augmentation is performed using an oversampling technique. Subsequently, K-Anonymization is carried out using the Black-winged Kite Algorithm to ensure the privacy of the medical data, where the best fitness solution is based on data utility and privacy. Finally, after improving data privacy, the developed MNASNet approach is applied for ASD detection, achieving highly accurate results compared to traditional methods for detecting autism behavior. The final results illustrate that the proposed MNASNet achieves an accuracy of 92.9%, a TPR of 95.9%, and a TNR of 90.9% at k = 8.
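The property that the K-Anonymization step enforces is easy to state and check: every combination of quasi-identifier values must be shared by at least k records. A minimal verification sketch (the records and quasi-identifiers below are invented examples; the paper's optimizer searches for a generalization that satisfies this while preserving utility):

```python
from collections import Counter

# Check k-anonymity: every quasi-identifier combination must cover >= k rows.
# The generalized records below are fabricated for illustration.

def is_k_anonymous(records, quasi_ids, k):
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

records = [
    {"age": "30-39", "zip": "641**", "diagnosis": "ASD"},
    {"age": "30-39", "zip": "641**", "diagnosis": "none"},
    {"age": "40-49", "zip": "641**", "diagnosis": "ASD"},
    {"age": "40-49", "zip": "641**", "diagnosis": "ASD"},
]

print(is_k_anonymous(records, ("age", "zip"), 2))  # True
print(is_k_anonymous(records, ("age", "zip"), 3))  # False
```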
Leaf disease identification is one of the most promising applications of convolutional neural networks (CNNs). This method represents a significant step towards revolutionizing agriculture by enabling quick and accurate assessment of plant health. In this study, a CNN model was specifically designed and tested to detect and categorize diseases on fig tree leaves. The researchers utilized a dataset of 3422 images, divided into four classes: healthy, fig rust, fig mosaic, and anthracnose. These diseases can significantly reduce the yield and quality of fig tree fruit. The objective of this research is to develop a CNN that can identify and categorize diseases in fig tree leaves. The data for this study was collected from gardens in the Amandi and Mamash Khail Bannu districts of the Khyber Pakhtunkhwa region in Pakistan. To minimize the risk of overfitting and enhance the model’s performance, early stopping and data augmentation were employed. As a result, the model achieved a respectable training accuracy of 91.53% and a validation accuracy of 90.12%. This model assists farmers in the early identification and categorization of fig tree leaf diseases. CNNs could thus serve as valuable tools for accurate disease classification and detection in precision agriculture. We recommend further research exploring additional data sources and more advanced neural networks to improve the model’s accuracy and applicability. Future research will focus on expanding the dataset by including new diseases and testing the model in real-world scenarios to support sustainable farming practices.
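The early-stopping rule used to curb overfitting amounts to a patience counter over validation scores: halt once the best validation accuracy has not improved for a set number of epochs. A framework-independent sketch (the accuracy sequence and patience value are fabricated, not the study's training log):

```python
# Early stopping as a patience counter: stop once validation accuracy has not
# improved for `patience` consecutive epochs. The history below is invented.

def early_stop_epoch(val_accuracies, patience=3):
    """Return the index of the epoch at which training would stop."""
    best, since_best = float("-inf"), 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best:
            best, since_best = acc, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return len(val_accuracies) - 1   # never triggered: ran to completion

history = [0.62, 0.71, 0.78, 0.80, 0.79, 0.80, 0.79, 0.78]
print(early_stop_epoch(history))
```

Here the best score (0.80 at epoch 3) is never strictly beaten afterwards, so training stops three epochs later; deep learning frameworks typically also restore the weights saved at the best epoch.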
Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in data transfer efficiency and response time, so many service providers have begun to deploy edge servers that cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To address this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, which we refer to as DSDM. Specifically, we employ the Deep Double Dueling Q-Network (D3QN) framework to integrate Long Short-Term Memory (LSTM) predictions of mobile device locations into the service caching replacement algorithm, and adopt a distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant reductions in communication energy consumption compared to traditional methods across various scenarios.
This study presents a comprehensive and secure architectural framework for the Internet of Medical Things (IoMT), integrating the foundational principles of the Confidentiality, Integrity, and Availability (CIA) triad along with authentication mechanisms. Leveraging advanced Machine Learning (ML) and Deep Learning (DL) techniques, the proposed system is designed to safeguard Patient-Generated Health Data (PGHD) across interconnected medical devices. Given the increasing complexity and scale of cyber threats in IoMT environments, the integration of Intrusion Detection and Prevention Systems (IDPS) with intelligent analytics is critical. Our methodology employs both standalone and hybrid ML and DL models to automate threat detection and enable real-time analysis, while ensuring rapid and accurate responses to a diverse array of attacks. Emphasis is placed on systematic model evaluation using detection metrics such as accuracy, False Alarm Rate (FAR), and False Discovery Rate (FDR), with performance validated through cross-validation and statistical significance testing. Experimental results based on the Edge-IIoTset dataset demonstrate the superior performance of ensemble-based ML models such as Extreme Gradient Boosting (XGB) and hybrid DL models such as Convolutional Neural Networks with Autoencoders (CNN+AE), which achieved detection accuracies of 96% and 98%, respectively, with notably low FARs. These findings underscore the effectiveness of combining traditional security principles with advanced AI-driven methodologies to ensure secure, resilient, and trustworthy healthcare systems within the IoMT ecosystem.
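The evaluation metrics named above follow directly from confusion-matrix counts, with "positive" meaning an attack alert. A small sketch using invented counts (not the study's results):

```python
# Detection metrics from raw confusion counts. The counts are fabricated for
# illustration; "positive" denotes an attack alert.

def detection_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    far = fp / (fp + tn)   # False Alarm Rate: share of benign traffic flagged
    fdr = fp / (fp + tp)   # False Discovery Rate: share of alerts that are wrong
    return accuracy, far, fdr

acc, far, fdr = detection_metrics(tp=950, fp=20, tn=980, fn=50)
print(round(acc, 3), round(far, 3), round(fdr, 3))
```

Reporting FAR and FDR alongside accuracy matters in intrusion detection because class imbalance lets a model reach high accuracy while still drowning operators in false alarms.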
Designing fast and accurate neural networks is becoming essential in various vision tasks. Recently, the use of attention mechanisms has increased, aimed at enhancing vision task performance by selectively focusing on relevant parts of the input. In this paper, we concentrate on squeeze-and-excitation (SE)-based channel attention, considering the trade-off between latency and accuracy. We propose a variation of the SE module, called squeeze-and-excitation with layer normalization (SELN), in which layer normalization (LN) replaces the sigmoid activation function. This approach reduces the vanishing gradient problem while enhancing the feature diversity and discriminability of channel attention. In addition, we propose a latency-efficient model named SELNeXt, where the LN typically used in the ConvNeXt block is replaced by SELN to minimize additional latency-impacting operations. Through classification experiments on ImageNet-1k, we show that the top-1 accuracy of the proposed SELNeXt outperforms other ConvNeXt-based models in terms of latency efficiency. SELNeXt also achieves better object detection and instance segmentation performance on COCO than Swin Transformer and ConvNeXt for small-sized models. Our results indicate that LN could be a strong candidate for replacing the activation function in attention mechanisms. In addition, SELNeXt achieves a better accuracy-latency trade-off, making it favorable for real-time applications and edge computing. The code is available at https://github.com/oto-q/SELNeXt (accessed on 06 December 2024).
The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risks, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computation (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and the comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
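A building block common to many SMPC protocols is additive secret sharing: each party splits its private value into random shares that sum to the value modulo a prime, so an aggregate (here, a sum) can be computed without any party revealing its input. A toy round with invented values (real protocols add authenticated shares, malicious-security checks, and network transport):

```python
import random

# Toy additive secret-sharing sum over three parties. The secrets are
# made-up example inputs; P is an arbitrary large prime modulus.

random.seed(7)
P = 2_147_483_647  # Mersenne prime used as the working modulus

def share(value, n_parties):
    """Split `value` into n additive shares that sum to `value` mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

secrets = [12, 45, 33]                       # each party's private input
all_shares = [share(s, 3) for s in secrets]
# Each party sums the one share it received from every input...
partials = [sum(col) % P for col in zip(*all_shares)]
# ...and publishing the partials reveals only the total, not the inputs.
total = sum(partials) % P
print(total)  # 90
```

This illustrates why SMPC carries the communication costs the survey discusses: every input must be shared with every party before any computation starts.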
To enable proper diagnosis of a patient, medical images must be free of noise and artifacts. The major hurdle lies in acquiring these images so that extraneous variables, which cause distortions in the form of noise and artifacts, are kept to a bare minimum. Unexpected changes during the acquisition process degrade the image’s quality and, indirectly, the effectiveness of the diagnostic process. It is thus crucial that this is addressed with maximum efficiency by appropriate expertise. Addressing these challenges requires image processing techniques to be adopted at the acquisition stage. This mandatory pre-processing step underpins the implementation of traditional state-of-the-art methods to create functional and robust denoising or recovery tools. This article provides an extensive systematic review of these techniques, presenting a systematic evaluation of their effect on medical images under three different noise distributions: Gaussian, Poisson, and Rician. A thorough analysis of these methods is conducted using eight evaluation parameters to highlight the unique features of each method. The covered denoising methods are essential in actual clinical scenarios where the preservation of anatomical details is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
Abstract: The emergence of different computing methods such as cloud-, fog-, and edge-based Internet of Things (IoT) systems has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they show better results with large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the factors that require researchers' attention to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Funding: This work was funded by the Deanship of Scientific Research at King Khalid University through the large group research project under Grant Number RGP2/474/44.
Abstract: In this paper, we present a comprehensive system model for Industrial Internet of Things (IIoT) networks empowered by Non-Orthogonal Multiple Access (NOMA) and Mobile Edge Computing (MEC) technologies. The network comprises essential components such as base stations, edge servers, and numerous IIoT devices characterized by limited energy and computing capacities. The central challenge addressed is the optimization of resource allocation and task distribution while adhering to stringent queueing delay constraints and minimizing overall energy consumption. The system operates in discrete time slots and employs a quasi-static approach, with a specific focus on the complexities of task partitioning and the management of constrained resources within the IIoT context. This study contributes to the field by enhancing the understanding of resource-efficient management and task allocation, which is particularly relevant in real-time industrial applications. Experimental results indicate that our proposed algorithm significantly outperforms existing approaches, reducing queue backlog by 45.32% and 17.25% compared to SMRA and ACRA, respectively, while achieving a 27.31% and 74.12% improvement in QnO. Moreover, the algorithm effectively balances complexity and network performance, as demonstrated when reducing the number of devices in each group (Ng) from 200 to 50, resulting in a 97.21% reduction in complexity with only a 7.35% increase in energy consumption. This research offers a practical solution for optimizing IIoT networks in real-time industrial settings.
Funding: Supported by the National Natural Science Foundation of China (Nos. U2441250, 62301380, and 62231027), the Natural Science Basic Research Program of Shaanxi, China (2024JC-JCQN-63), the Key Research and Development Program of Shaanxi, China (No. 2023-YBGY-249), the Guangxi Key Research and Development Program, China (No. 2022AB46002), the China Postdoctoral Science Foundation (Nos. 2022M722504 and 2024T170696), and the Innovation Capability Support Program of Shaanxi, China (No. 2024RS-CXTD-01).
Abstract: Automatic Dependent Surveillance-Broadcast (ADS-B) technology, with its open signal sharing, faces substantial security risks from false signals and spoofing attacks when broadcasting Unmanned Aerial Vehicle (UAV) information. This paper proposes a security position verification technique based on Multilateration (MLAT) to detect false signals, ensuring UAV safety and reliable airspace operations. First, the proposed method estimates the current position of the UAV by calculating the Time Difference of Arrival (TDOA), Time Sum of Arrival (TSOA), and Angle of Arrival (AOA) information. Then, this estimated position is compared with the ADS-B message to eliminate false UAV signals. Furthermore, a localization model based on TDOA/TSOA/AOA is established by utilizing reliable reference sources for base station time synchronization. Additionally, an improved Chan-Taylor algorithm is developed, incorporating the Constrained Weighted Least Squares (CWLS) method to initialize UAV position calculations. Finally, a false signal detection method is proposed to distinguish between true and false positioning targets. Numerical simulation results indicate that, at a positioning error threshold of 150 m, the improved Chan-Taylor algorithm based on TDOA/TSOA/AOA achieves 100% accuracy coverage, significantly enhancing localization precision. The proposed false signal detection method achieves a detection accuracy of at least 90% within a 50-meter error range.
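The paper's verification idea, comparing an MLAT-estimated position against the ADS-B-reported one, can be sketched as a simple TDOA consistency check. This is an illustrative simplification (2-D geometry, TDOA only, hypothetical station layout and threshold), not the paper's Chan-Taylor/CWLS implementation:

```python
import math

def tdoa(pos, stations, c=3e8):
    """Time differences of arrival relative to the first station."""
    t = [math.dist(pos, s) / c for s in stations]
    return [ti - t[0] for ti in t[1:]]

def is_spoofed(claimed_pos, measured_tdoa, stations, thresh=150.0, c=3e8):
    """Flag an ADS-B report as false when the claimed position's predicted
    TDOAs deviate from the measured ones by more than `thresh` metres of
    equivalent range difference."""
    predicted = tdoa(claimed_pos, stations)
    return any(abs(p - m) * c > thresh for p, m in zip(predicted, measured_tdoa))
```

A genuine report reproduces the measured range differences within the threshold, while a spoofed position does not.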
Funding: Supported by the National Natural Science Foundation of China (Nos. 62272418 and 62102058), the Basic Public Welfare Research Program of Zhejiang Province (No. LGG18E050011), the Major Open Project of the Key Laboratory for Advanced Design and Intelligent Computing of the Ministry of Education under Grant ADIC2023ZD001, and the National Undergraduate Training Program on Innovation and Entrepreneurship (No. 202410345054).
Abstract: The wireless signals emitted by base stations serve as a vital link connecting people in today's society and occupy an increasingly important role in daily life. The development of the Internet of Things (IoT) relies on the support of base stations, which provide a solid foundation for achieving a more intelligent way of living. Achieving higher signal coverage of a given area with fewer base stations has become an urgent problem. Therefore, this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization (EPSO) algorithm based on collective prediction, referred to herein as ECPPSO. A new strategy called neighbor-based evolution prediction (NEP) addresses the premature convergence often encountered by PSO. ECPPSO also employs a strengthening evolution (SE) strategy to enhance the algorithm's global search capability and efficiency, ensuring greater robustness and faster convergence when solving complex optimization problems. To better adapt to the actual communication needs of base stations, this article conducts simulation experiments with varying numbers of base stations. The experimental results demonstrate that, with 50 or more base stations, ECPPSO consistently achieves the best coverage rate, exceeding 95% and peaking at 99.44% when the number of base stations reaches 80. These results validate the optimization capability of the ECPPSO algorithm, proving its feasibility and effectiveness. Further ablation experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
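ECPPSO builds on standard particle swarm optimization; the baseline PSO loop it extends can be sketched as follows. The NEP and SE strategies are omitted, the sphere function stands in for the coverage objective, and all parameter values are illustrative:

```python
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with a plain global-best PSO."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]                  # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])    # global best index
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i][:], v
                if v < gval:
                    gbest, gval = xs[i][:], v
    return gbest, gval
```

Premature convergence in this loop, when the swarm collapses onto an early global best, is precisely what NEP-style prediction strategies are designed to counter.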
Funding: Supported by the Deanship of Scientific Research at King Khalid University through the large group research project under grant number RGP2/421/45; via funding from Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1446; by the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Many extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. Pixel similarities across the various distribution patterns with high indexes are recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency. The most congruent pixels are sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. This improves the probability of feature selection regardless of the textures and medical image patterns, enhancing the performance of ML applications for different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models on the selected dataset. Mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
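As a rough illustration of correlation-driven feature selection of the kind described, the sketch below ranks feature columns by absolute Pearson correlation with the label and keeps the top k. The paper's congruency analysis over textures and pixel distributions is far richer than this toy version:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def select_features(X, y, k):
    """Indices of the k feature columns most |correlated| with the label."""
    scores = [(abs(pearson([row[j] for row in X], y)), j)
              for j in range(len(X[0]))]
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

Constant (zero-variance) columns score 0.0 and are dropped first, mirroring the abstract's point that irrelevant features only add computation time.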
Funding: Partially supported by Asia Pacific University of Technology & Innovation (APU), Bukit Jalil, Kuala Lumpur, Malaysia. The funding body had no role in the study design, data collection, analysis, interpretation, or writing of the manuscript.
Abstract: Kubernetes has become the dominant container orchestration platform, with widespread adoption across industries. However, its default pod-to-pod communication mechanism introduces security vulnerabilities, particularly IP spoofing attacks. Attackers can exploit this weakness to impersonate legitimate pods, enabling unauthorized access, lateral movement, and large-scale Distributed Denial of Service (DDoS) attacks. Existing security mechanisms such as network policies and intrusion detection systems introduce latency and performance overhead, making them less effective in dynamic Kubernetes environments. This research presents PodCA, an eBPF-based security framework designed to detect and prevent IP spoofing in real time while minimizing performance impact. PodCA integrates with Kubernetes' Container Network Interface (CNI) and uses eBPF to monitor and validate packet metadata at the kernel level. It maintains a container network mapping table that tracks pod IP assignments, validates packet legitimacy before forwarding, and ensures network integrity. If an attack is detected, PodCA automatically blocks spoofed packets and, in cases of repeated attempts, terminates compromised pods to prevent further exploitation. Experimental evaluation on an AWS Kubernetes cluster demonstrates that PodCA detects and prevents spoofed packets with 100% accuracy. Resource consumption analysis reveals minimal overhead, with a CPU increase of only 2–3% per node and memory usage rising by 40–60 MB. These results highlight the effectiveness of eBPF in securing Kubernetes environments with low overhead, making PodCA a scalable and efficient security solution for containerized applications.
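PodCA's mapping-table check runs in eBPF at the kernel level; a userspace Python model of the same decision logic (assumed pod-ID-to-IP table, illustrative violation threshold) might look like this. It is a conceptual sketch of the forward/drop/terminate policy, not the framework's actual eBPF program:

```python
class SpoofGuard:
    """Model of a pod-IP mapping table with escalating responses."""

    def __init__(self, mapping, max_violations=3):
        self.mapping = dict(mapping)      # pod_id -> cluster-assigned IP
        self.violations = {}              # pod_id -> spoof attempt count
        self.max_violations = max_violations

    def check(self, pod_id, src_ip):
        """Return 'forward', 'drop', or 'terminate' for one packet."""
        if self.mapping.get(pod_id) == src_ip:
            return "forward"              # source IP matches the assignment
        n = self.violations.get(pod_id, 0) + 1
        self.violations[pod_id] = n
        # repeated spoofing escalates from dropping packets to killing the pod
        return "terminate" if n >= self.max_violations else "drop"
```

The escalation mirrors the abstract: spoofed packets are blocked immediately, and a pod that keeps spoofing is terminated.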
Funding: Supported by the National Natural Science Foundation of China (No. 62361036) and the Natural Science Foundation of Gansu Province (No. 22JR5RA279).
Abstract: To realize dynamic statistical publishing and protection of location-based data privacy, this paper proposes a differential privacy publishing algorithm based on adaptive sampling and grid clustering and adjustment. A PID control strategy is combined with the difference in data variation to dynamically adjust the data publishing intervals. The spatial-temporal correlations of adjacent snapshots are utilized to design the grid clustering and adjustment algorithm, which reduces the execution time of the publishing process. The budget distribution and budget absorption strategies are improved to form a sliding window-based differential privacy statistical publishing algorithm, which realizes continuous statistical publishing with privacy protection and improves the accuracy of the published data. Experiments and analysis on large datasets of actual locations show that the proposed privacy protection algorithm is superior to existing algorithms in the accuracy of adaptive sampling times, the availability of published data, and the execution efficiency of the data publishing method.
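The core differential privacy primitive behind such publishing schemes is the Laplace mechanism; a minimal sketch for a single count query (sensitivity 1, inverse-CDF noise sampling) is shown below. The paper's adaptive sampling, grid clustering, and budget-window strategies are not modeled here:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF transform of a uniform draw."""
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def publish_count(true_count, epsilon, sensitivity=1.0, seed=0):
    """Epsilon-differentially-private release of a count statistic."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon       # larger epsilon -> less noise
    return true_count + laplace_noise(scale, rng)
```

The privacy budget epsilon directly controls the noise scale, which is why budget distribution and absorption across a sliding window matter for the accuracy of continuous releases.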
Abstract: The authors regret that the original publication of this paper did not include Jawad Fayaz as a co-author. After further discussions and a thorough review of the research contributions, it was agreed that his significant contributions to the foundational aspects of the research warranted recognition, and he has now been added as a co-author.
Funding: Supported by the National Natural Science Foundation of China (NSFC No. 62020106001).
Abstract: This paper investigates the uplink spectral efficiency (SE) of distributed cell-free (CF) massive multiple-input multiple-output (mMIMO) networks with correlated Rayleigh fading channels, based on three different channel estimation schemes. Specifically, each access point (AP) first uses embedded pilots to estimate the channels of all users via minimum mean-squared error (MMSE) estimation. Given the high computational cost of MMSE estimation, the low-complexity element-wise MMSE (EW-MMSE) channel estimator and the least-squares (LS) channel estimator, which requires no prior statistical information, are also analyzed. To reduce non-coherent and coherent interference during uplink payload data transmission, simple centralized decoding (SCD) and large-scale fading decoding (LSFD) are examined. Closed-form expressions for uplink SE using the MMSE, EW-MMSE, and LS estimators are then derived for maximum ratio (MR) combining under LSFD, where each AP may have any number of antennas. The sum SE maximization problem with uplink power control is formulated. Since the maximization problem is non-convex and challenging, a block coordinate descent approach based on the weighted MMSE method is used to obtain a locally optimal solution. Numerical studies demonstrate that LSFD and efficient uplink power control can considerably increase SE in distributed CF mMIMO networks.
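The gap between the LS estimator (no prior statistics) and the MMSE estimator (which exploits channel and noise statistics) can be illustrated for a single scalar pilot observation y = pilot · h + noise. This toy sketch is not the paper's correlated-fading matrix formulation:

```python
def ls_estimate(y, pilot):
    """Least-squares channel estimate: invert the pilot, no prior needed."""
    return y / pilot

def mmse_estimate(y, pilot, ch_var, noise_var):
    """Scalar MMSE estimate assuming a zero-mean Gaussian channel prior
    with variance ch_var and additive noise with variance noise_var."""
    p2 = pilot * pilot
    # Linear MMSE weight shrinks the observation toward the prior mean (0)
    return (ch_var * pilot) / (p2 * ch_var + noise_var) * y
```

With zero noise the two estimators coincide; as the noise variance grows, MMSE shrinks its estimate toward the prior mean, trading bias for lower mean-squared error, while LS does not.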
Funding: Supported by the National Natural Science Foundation of China (12102085), the Postdoctoral Science Foundation of China (2023M730504), and the Sichuan Province Regional Innovation and Cooperation Project (2024YFHZ0210).
Abstract: Determining the optimal ceramic content of ceramics-in-polymer composite electrolytes and the appropriate stack pressure can effectively improve the interfacial contact of solid-state batteries (SSBs). Based on a contact mechanics model constructed with the conjugate gradient method, continuous convolution, and the fast Fourier transform, this paper analyzes and compares the interfacial contact responses of polymers commonly used in SSBs, providing the original training data for machine learning. A support vector regression model is established to predict the relationship between ceramic content and interfacial resistance. Bayesian optimization and K-fold cross-validation are introduced to find the optimal combination of hyperparameters, which accelerates the training process and improves the model's accuracy. We found the relationship between ceramic content, stack pressure, and interfacial resistance. The results can serve as a reference for the design of low-resistance composite electrolytes for solid-state batteries.
Abstract: The dynamic and interconnected nature of construction projects requires a comprehensive understanding of complexity during pre-construction. Traditional tools such as Gantt charts, CPM, and PERT often overlook uncertainties. This study identifies 20 complexity factors through expert interviews and literature, categorising them into six groups. The Analytical Hierarchy Process evaluated the significance of the factors, establishing their corresponding weights to enhance adaptive project scheduling. A system dynamics (SD) model is developed and tested to evaluate the dynamic behaviour of the identified complexity factors. The model simulates the impact of complexity on total project duration (TPD), revealing significant deviations from initial deterministic estimates. Data collection and analysis included reliability tests, such as normality checks and Cronbach's alpha, to validate the model's components and expert feedback. Sensitivity analysis confirmed a positive relationship between complexity and project duration, with higher complexity levels resulting in increased TPD. This relationship highlights the inadequacy of static planning approaches and underscores the importance of addressing complexity dynamically. The study provides a framework for enhancing planning systems through system dynamics and recommends expanding the model to ensure broader applicability in diverse construction projects.
Funding: 2025 Foshan Social Science Planning Project, "Research on Pathways for Enhancing Cybersecurity Awareness Among Foshan Community Residents Empowered by Digital and Intelligent Technologies" (Project No.: 2025-GJ091).
Abstract: The proliferation of smart communities in Foshan has led to increasingly diverse and prevalent cybersecurity risks for residents. This trend has rendered traditional cybersecurity education models inadequate for the challenges of the digital era. Guided by the theory of collaborative governance and the framework of digital transformation, this paper examines the multi-stakeholder collaborative mechanism involving the government, businesses, community organizations, universities, and residents. It then proposes a series of implementation strategies, such as digitizing educational content, making platforms intelligent, contextualizing delivery methods, and refining management precision. Studies demonstrate that this model enables effective resource integration, improves educational precision, and boosts resident engagement. It represents a fundamental shift from unilateral dissemination to multi-party interaction and from decentralized management to collaborative synergy, offering a replicable "Foshan Model" for digital governance at the community level.
Abstract: Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition that causes multiple challenges in behavioral and communication activities. For ASD-related medical data, this research responsibly and effectively integrates security measures to develop the Mobile Neuron Attention Stage-by-Stage Network (MNASNet) model, which combines a Mobile Network (MobileNet) with neuron attention applied stage by stage. The steps followed to detect ASD with privacy-preserved data are data normalization, data augmentation, and K-anonymization. The clinical data of individuals are first preprocessed using Z-score normalization. Then, data augmentation is performed using an oversampling technique. Subsequently, K-anonymization is carried out using the Black-winged Kite Algorithm to ensure the privacy of the medical data, where the best fitness solution is based on data utility and privacy. Finally, after improving data privacy, the developed MNASNet approach is applied for ASD detection, achieving highly accurate results compared to traditional methods for detecting autistic behavior. The final results illustrate that the proposed MNASNet achieves an accuracy of 92.9%, a TPR of 95.9%, and a TNR of 90.9% at k = 8.
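The K-anonymization step aims to make every quasi-identifier combination indistinguishable within groups of at least k records. A minimal sketch of the property check and one generalization step (age banding, with hypothetical field names) follows; the paper's Black-winged Kite optimization of the utility-privacy trade-off is not modeled:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every combination of quasi-identifier values is shared
    by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

def generalize_age(records, width=10):
    """Generalize exact ages into bands of `width` years, e.g. 23 -> '20-29'."""
    out = []
    for r in records:
        g = dict(r)
        lo = (r["age"] // width) * width
        g["age"] = f"{lo}-{lo + width - 1}"
        out.append(g)
    return out
```

Exact ages usually make each record unique; after banding, records fall into shared groups and the k-anonymity check passes.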
Funding: The authors thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).
Abstract: Leaf disease identification is one of the most promising applications of convolutional neural networks (CNNs). This method represents a significant step towards revolutionizing agriculture by enabling the quick and accurate assessment of plant health. In this study, a CNN model was specifically designed and tested to detect and categorize diseases on fig tree leaves. The researchers utilized a dataset of 3422 images, divided into four classes: healthy, fig rust, fig mosaic, and anthracnose. These diseases can significantly reduce the yield and quality of fig tree fruit. The objective of this research is to develop a CNN that can identify and categorize diseases in fig tree leaves. The data for this study was collected from gardens in the Amandi and Mamash Khail Bannu districts of the Khyber Pakhtunkhwa region in Pakistan. To minimize the risk of overfitting and enhance the model's performance, early stopping and data augmentation were employed. As a result, the model achieved a respectable training accuracy of 91.53% and a validation accuracy of 90.12%. This model assists farmers in the early identification and categorization of fig tree leaf diseases. We believe that CNNs could serve as valuable tools for accurate disease classification and detection in precision agriculture. We recommend further research exploring additional data sources and more advanced neural networks to improve the model's accuracy and applicability. Future research will focus on expanding the dataset by including new diseases and testing the model in real-world scenarios to enhance sustainable farming practices.
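The early-stopping technique mentioned above is typically implemented by tracking the best validation loss and halting when it fails to improve for a set number of epochs; a minimal framework-independent sketch (the patience value is illustrative) is:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the 0-based epoch at which training stops: the first epoch
    where the best validation loss has not improved for `patience`
    consecutive epochs. Return None if stopping never triggers."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0      # improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return None
```

Halting at the stop epoch and restoring the weights from the best epoch is the standard way this guards against the overfitting the study describes.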
Funding: Supported by the National Natural Science Foundation of China under grants No. 92267104 and 62372242.
Abstract: Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in data transfer efficiency and response time, so many service providers have begun to deploy edge servers that cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To address this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, which we refer to as DSDM. Specifically, we employ the D3QN (Deep Double Dueling Q-Network) framework, integrate Long Short-Term Memory (LSTM)-predicted mobile device locations into the service caching replacement algorithm, and adopt a distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant performance improvements in reducing communication energy consumption compared to traditional methods across various scenarios.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant Number DGSSR-2023-02-02516.
Abstract: This study presents a comprehensive and secure architectural framework for the Internet of Medical Things (IoMT), integrating the foundational principles of the Confidentiality, Integrity, and Availability (CIA) triad along with authentication mechanisms. Leveraging advanced Machine Learning (ML) and Deep Learning (DL) techniques, the proposed system is designed to safeguard Patient-Generated Health Data (PGHD) across interconnected medical devices. Given the increasing complexity and scale of cyber threats in IoMT environments, the integration of Intrusion Detection and Prevention Systems (IDPS) with intelligent analytics is critical. Our methodology employs both standalone and hybrid ML and DL models to automate threat detection and enable real-time analysis, while ensuring rapid and accurate responses to a diverse array of attacks. Emphasis is placed on systematic model evaluation using detection metrics such as accuracy, False Alarm Rate (FAR), and False Discovery Rate (FDR), with performance validated through cross-validation and statistical significance testing. Experimental results based on the Edge-IIoTset dataset demonstrate the superior performance of ensemble-based ML models such as Extreme Gradient Boosting (XGB) and hybrid DL models such as Convolutional Neural Networks with Autoencoders (CNN+AE), which achieved detection accuracies of 96% and 98%, respectively, with notably low FARs. These findings underscore the effectiveness of combining traditional security principles with advanced AI-driven methodologies to ensure secure, resilient, and trustworthy healthcare systems within the IoMT ecosystem.
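The detection metrics named above follow directly from confusion-matrix counts; a small sketch using the usual definitions (FAR as false positives among actual negatives, FDR as false positives among positive predictions, which the abstract itself does not spell out) is:

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy, False Alarm Rate, and False Discovery Rate from
    confusion-matrix counts of an intrusion detector."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    far = fp / (fp + tn) if fp + tn else 0.0   # alarms raised on benign traffic
    fdr = fp / (fp + tp) if fp + tp else 0.0   # alarms that were wrong
    return accuracy, far, fdr
```

Reporting FAR and FDR alongside accuracy matters in IDPS evaluation because a detector that floods analysts with false alarms is unusable even at high accuracy.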
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education under Grant NRF-2021R1A6A1A03039493.
Abstract: Designing fast and accurate neural networks is becoming essential in various vision tasks. Recently, the use of attention mechanisms has increased, aimed at enhancing vision task performance by selectively focusing on relevant parts of the input. In this paper, we concentrate on squeeze-and-excitation (SE)-based channel attention, considering the trade-off between latency and accuracy. We propose a variation of the SE module, called squeeze-and-excitation with layer normalization (SELN), in which layer normalization (LN) replaces the sigmoid activation function. This approach reduces the vanishing gradient problem while enhancing the feature diversity and discriminability of channel attention. In addition, we propose a latency-efficient model named SELNeXt, where the LN typically used in the ConvNeXt block is replaced by SELN to minimize additional latency-impacting operations. Through classification simulations on ImageNet-1k, we show that the top-1 accuracy of the proposed SELNeXt outperforms other ConvNeXt-based models in terms of latency efficiency. SELNeXt also achieves better object detection and instance segmentation performance on COCO than Swin Transformer and ConvNeXt for small-sized models. Our results indicate that LN could be a considerable candidate for replacing the activation function in attention mechanisms. In addition, SELNeXt achieves a better accuracy-latency trade-off, making it favorable for real-time applications and edge computing. The code is available at https://github.com/oto-q/SELNeXt (accessed on 06 December 2024).
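The SELN idea of replacing the sigmoid gate with layer normalization over channels can be sketched on a vector of squeezed (globally pooled) per-channel activations. This toy version omits the SE module's two fully connected layers and uses scalar gamma/beta in place of learned per-channel LN parameters; see the linked repository for the actual implementation:

```python
import math

def seln_gate(channel_means, gamma=1.0, beta=0.0, eps=1e-5):
    """Layer-normalize squeezed per-channel activations, standing in for
    the sigmoid gate of a standard SE block."""
    n = len(channel_means)
    mu = sum(channel_means) / n
    var = sum((x - mu) ** 2 for x in channel_means) / n
    return [gamma * (x - mu) / math.sqrt(var + eps) + beta
            for x in channel_means]
```

Unlike a sigmoid, whose gradient saturates near 0 and 1, the normalized gate keeps channel scores spread around zero mean, which is the intuition behind the claimed reduction in vanishing gradients and the improved feature diversity.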
Abstract: The rapid adoption of machine learning in sensitive domains, such as healthcare, finance, and government services, has heightened the need for robust, privacy-preserving techniques. Traditional machine learning approaches lack built-in privacy mechanisms, exposing sensitive data to risk, which motivates the development of Privacy-Preserving Machine Learning (PPML) methods. Despite significant advances in PPML, a comprehensive and focused exploration of Secure Multi-Party Computing (SMPC) within this context remains underdeveloped. This review aims to bridge this knowledge gap by systematically analyzing the role of SMPC in PPML, offering a structured overview of current techniques, challenges, and future directions. Using a semi-systematic mapping study methodology, this paper surveys recent literature spanning SMPC protocols, PPML frameworks, implementation approaches, threat models, and performance metrics. Emphasis is placed on identifying trends, technical limitations, and the comparative strengths of leading SMPC-based methods. Our findings reveal that while SMPC offers strong cryptographic guarantees for privacy, challenges such as computational overhead, communication costs, and scalability persist. The paper also discusses critical vulnerabilities, practical deployment issues, and variations in protocol efficiency across use cases.
Abstract: To enable proper diagnosis of a patient, medical images must be free of noise and artifacts. The major hurdle lies in acquiring these images in such a manner that extraneous variables, which cause distortions in the form of noise and artifacts, are kept to a bare minimum. Unexpected changes during the acquisition process directly degrade image quality and indirectly undermine the effectiveness of the diagnostic process. It is thus crucial that this is addressed with maximum efficiency and pertinent expertise. These challenges present a complex dilemma at the acquisition stage, where image processing techniques must be adopted. The necessity of this mandatory pre-processing step underpins the implementation of traditional state-of-the-art methods to create functional and robust denoising or recovery tools. This article provides an extensive systematic review of these techniques, presenting a systematic evaluation of their effect on medical images under three different noise distributions: Gaussian, Poisson, and Rician. A thorough analysis of these methods is conducted using eight evaluation parameters to highlight the unique features of each method. The covered denoising methods are essential in actual clinical scenarios where the preservation of anatomical details is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
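Denoising evaluations of this kind commonly include peak signal-to-noise ratio (PSNR); as one example of the sort of metric such a review relies on (the article's eight specific parameters are not listed here), a minimal implementation over flat pixel lists is:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    denoised/noisy one, both given as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")     # identical images: unbounded PSNR
    return 10.0 * math.log10(peak * peak / mse)
```

Higher PSNR means the denoised image is closer to the reference; a good method raises PSNR over the noisy input while preserving the anatomical detail the abstract emphasizes.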