This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO)v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected for three major rice diseases frequently found in actual rice cultivation fields: Rice Blast, Bacterial Blight, and Brown Spot. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB but showed lower performance, with an mAP@0.5 of 70.3%, mAP@0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Due to the growth of smart cities, many real-time systems have been developed to support smart cities using the Internet of Things (IoT) and emerging technologies. They are designed to collect data for environment monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise the network's security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a Multi-Agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it improves the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. In simulations, the proposed framework outperforms state-of-the-art approaches, improving energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and denial-of-service attack mitigation by 42%.
With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use a Domain Generation Algorithm (DGA) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGAs, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and integrates the ATT for key feature weighting, forming multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to a simplified DistilBERT student model. Experimental results show the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while compressing its parameters to approximately 38.4% of the teacher model's scale, greatly reducing computational overhead for IoT deployment.
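As a hedged illustration of the knowledge distillation step described above, the following sketch computes the standard temperature-scaled distillation loss: a weighted sum of hard-label cross-entropy and a soft-target KL term. The paper's exact loss formulation is not given in the abstract, so the function names, temperature, and alpha weighting here are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher T yields a softer distribution.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=2.0, alpha=0.5):
    """Weighted sum of a hard-label cross-entropy term and a soft-target
    KL term computed at temperature T (scaled by T^2, as is conventional)."""
    p_student = softmax(student_logits)
    hard = -math.log(p_student[true_label])
    q_teacher = softmax(teacher_logits, temperature)
    q_student = softmax(student_logits, temperature)
    soft = sum(qt * math.log(qt / qs) for qt, qs in zip(q_teacher, q_student))
    return alpha * hard + (1 - alpha) * (temperature ** 2) * soft
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the hard-label term remains, which is one quick sanity check on an implementation like this.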
Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great amount of computation-intensive workloads and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation offloading strategy to satisfy the various quality of service (QoS) requirements of edge devices. To dynamically allocate limited resources, the problem is formulated as multi-agent distributed deep reinforcement learning (DRL), which yields a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is added to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations, so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
The rapid advance of Connected-Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption. We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address this problem. A novel dynamic clustering algorithm is designed based on vehicle mobility and task offloading efficiency requirements, where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
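The notion of a Pareto-optimal solution set over delay and energy, which the MWOA above searches for, can be illustrated with a minimal non-dominated filter. This is not the paper's algorithm, only a sketch of the Pareto-dominance criterion it optimizes toward; the tuple format (delay, energy) is an assumption.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (delay, energy) pairs.
    Solution b dominates solution a if b is no worse in both objectives
    and strictly better in at least one (both objectives minimized)."""
    front = []
    for a in solutions:
        dominated = any(
            b[0] <= a[0] and b[1] <= a[1] and (b[0] < a[0] or b[1] < a[1])
            for b in solutions
        )
        if not dominated:
            front.append(a)
    return front
```

For example, among (1, 5), (2, 2), (5, 1), (3, 3), (4, 4), the last two are dominated by (2, 2), leaving the first three as the trade-off front.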
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of these systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In this study, we propose an Impact-Aware Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also taxonomy-based supervised classification that groups 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracy of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to build transparency and trust and to identify the important features driving classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. This framework is highly accurate, interpretable, and applicable in real time, making it a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is used to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm achieves significant improvements over baseline algorithms in terms of system delay.
An aileron is a crucial control surface for rolling. Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft's stability, maneuverability, safety, and lifespan. This paper presents a robust solution in the form of fast flutter suppression digital control logic for edge computing aileron mechatronics (ECAM). We effectively eliminate passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger. Our findings demonstrate that self-tuning nonlinear parameters can optimize stability, robustness, and accuracy, while the antiphase hysteresis Schmitt trigger effectively rejects flutter without the need for collaborative navigation and guidance. Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring the expected stability and maneuverability. In conclusion, this nonlinear aileron mechatronics with a Schmitt positive-feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by limited resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance with models such as Random Forest and Logistic Regression while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. The study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations, offering a security solution for edge computing environments that is both efficient and scalable.
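A minimal sketch of what a DNA encoding mechanism might look like, assuming a simple two-bits-per-base mapping (00→A, 01→C, 10→G, 11→T, most-significant pair first). GENOME's actual standard and compressed encodings are not specified in the abstract, so this scheme is purely illustrative.

```python
BASES = "ACGT"  # each base carries two bits of the original byte

def dna_encode(data: bytes) -> str:
    """Map every byte to four bases, two bits per base."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_decode(seq: str) -> bytes:
    """Invert dna_encode: read the sequence back four bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

Under this mapping the byte 0b00011011 encodes to "ACGT", and a round trip through encode and decode reproduces the original bytes. Note that this naive scheme does not by itself compress anything; the size reduction GENOME reports would come from its compressed variant and downstream representation.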
Non-panoramic virtual reality (VR) provides users with immersive, strongly interactive experiences, and is thus attracting growing research and development attention. However, the demand for high bandwidth and low latency in VR services presents greater challenges to existing networks. Inspired by mobile edge computing (MEC), VR users can offload rendering tasks to other devices. The main challenge of task offloading is to minimize latency and energy consumption. Yet in non-panoramic VR scenarios, it is essential to consider the Quality of Perceptual Experience (QOPE) for users, while also taking into account the diverse requirements of users in real-world scenarios. Therefore, this paper proposes a QOPE model to measure the visual quality experienced by non-panoramic VR users and formulates the MEC-based non-panoramic VR task offloading problem as a constrained multi-objective optimization problem (CMOP) that minimizes latency and energy consumption while providing satisfactory QOPE. We then propose an evolutionary algorithm (EA), GNSGA-II, to solve the CMOP. Simulation results show that the algorithm can effectively find various trade-off solutions among the objectives, satisfying the requirements of different users.
Blockchain-based audiovisual transmission systems were built to create a distributed and flexible smart transport system (STS) that lets customers, video creators, and service providers connect with each other directly. Blockchain-based STS devices need substantial computing power to transcode video feeds of varying quality and format into the different versions and structures that meet the needs of different users. However, existing blockchains cannot support live streaming because their processing latency is too high and their computational capacity is insufficient, and transmitting and analyzing large amounts of video data places excessive stress on vehicular networks. This paper proposes a video surveillance method to improve the performance of the blockchain system's data handling and lower latency across the multi-access edge computing (MEC) system. A framework integrating MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS) is proposed. The joint optimization problem is formulated as a Markov Decision Process and solved with an asynchronous advantage actor-critic (ACAA) method using deep reinforcement training. Simulation results show that the proposed method converges quickly and improves the performance of MEC and blockchain when used together for video surveillance in self-driving cars compared to other methods.
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework that combines Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability, capturing the strategic interplay between attackers and defenders under uncertainty. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
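The double deep Q-learning component can be illustrated with the standard Double DQN update target, in which the online network selects the greedy next action and the target network evaluates it, reducing the overestimation bias of vanilla Q-learning. The paper's DDQL formulation may differ; the function signature below is an assumption.

```python
def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Compute the Double DQN bootstrap target for one transition.
    next_q_online / next_q_target are the per-action Q-value lists of the
    online and target networks at the next state."""
    if done:
        # Terminal transitions have no bootstrapped future value.
        return reward
    # Online network picks the greedy action; target network evaluates it.
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_action]
```

With online values [0.2, 0.9] and target values [0.5, 0.3], the online network picks action 1 but the target's estimate 0.3 is used, rather than the target's own maximum 0.5, which is exactly the decoupling that distinguishes double Q-learning.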
In task offloading, the movement of vehicles causes switching of the connected Roadside Units (RSUs) and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movement on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. A Bi-LSTM-based model is then proposed to predict vehicle trajectories. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model belong to the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the predicted trajectories. Considering the inevitable prediction error, we take some edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidate servers, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into reduced task offloading delay. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
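The grid-based correctness criterion described above can be sketched directly: a prediction counts as correct when the actual and predicted positions fall into the same equal-sized cell. The cell size and coordinate convention are assumed parameters, not values from the paper.

```python
def grid_cell(x, y, cell_size):
    """Map a planar position to its (row, col) grid index."""
    return (int(y // cell_size), int(x // cell_size))

def prediction_correct(actual, predicted, cell_size=100.0):
    """A trajectory prediction is counted as correct when the actual and
    predicted positions land in the same grid cell, which is a coarser,
    easier target than exact coordinate prediction."""
    return grid_cell(*actual, cell_size) == grid_cell(*predicted, cell_size)
```

For instance, with 100 m cells, a prediction of (180, 70) for an actual position of (120, 30) is counted as correct because both lie in the same cell, even though the point error is tens of meters.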
As mobile edge computing continues to develop, the demand for resource-intensive applications is steadily increasing, placing a significant strain on edge nodes. These nodes are typically subject to various constraints, such as limited processing capability, constrained energy supply, and erratic availability. These problems call for an effective task allocation algorithm that optimizes resources while sustaining high system performance and dependability in dynamic environments. This paper proposes an improved Particle Swarm Optimization technique, known as IPSO, for multi-objective optimization in edge computing. The IPSO algorithm trades off two important objectives: minimizing energy consumption and reducing task execution time. By mutating the global optimal position and dynamically adjusting the inertia weight, the proposed optimization algorithm can effectively distribute tasks among edge nodes, reducing both task execution time and energy consumption. In comparative assessments against benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization (EADPSO) and ICBA, IPSO provides better results than these algorithms: for the maximum task size, IPSO reduces execution time by 17.1% and energy consumption by 31.58%. These results support the conclusion that IPSO is an efficient and scalable technique for task allocation in edge environments, providing peak efficiency while handling scarce resources and variable workloads.
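As a baseline illustration, the sketch below implements canonical particle swarm optimization with a linearly decreasing inertia weight, one of the two mechanisms the abstract attributes to IPSO. The global-best mutation operator and all hyperparameter values are not specified in the abstract, so this is an assumed minimal variant, not the paper's IPSO.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
                 bounds=(-10.0, 10.0), seed=0):
    """Minimize f over a box using PSO with linearly decaying inertia."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        # Inertia decays from w_max to w_min: explore early, exploit late.
        w = w_max - (w_max - w_min) * t / iters
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In an edge task allocation setting, f would score an assignment of tasks to nodes by a weighted combination of execution time and energy; here a simple sphere function suffices to show convergence.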
The rapid expansion of the Internet of Things (IoT) has introduced significant security challenges due to the scale, complexity, and heterogeneity of interconnected devices. Traditional centralized security models are ill-suited to these threats, especially in decentralized applications where IoT devices may operate on minimal resources. Emerging technologies, including Artificial Intelligence (AI), blockchain, edge computing, and Zero-Trust Architecture (ZTA), offer potential solutions through real-time threat detection, data integrity, and system resilience. AI offers sophisticated anomaly detection and predictive analytics, and blockchain delivers decentralized, tamper-proof assurance of device communication and information exchange. Edge computing enables low-latency processing by distributing the computational workload and moving it near the devices. ZTA enhances security by continuously verifying each device and user on the network, adhering to the "never trust, always verify" principle. This paper reviews these technologies, examining how they are used to secure IoT ecosystems, the challenges of such integration, and the potential for developing a multi-layered, adaptive security architecture. Major concerns, such as scalability, resource limitations, and interoperability, are identified, and future directions for optimizing the application of AI, blockchain, and edge computing in zero-trust IoT systems are discussed.
This paper presents an algorithm named the dependency-aware offloading framework (DeAOff), which is designed to optimize the deployment of Gen-AI decoder models in mobile edge computing (MEC) environments. These models pose significant challenges due to their inter-layer dependencies and high computational demands, especially under edge resource constraints. To address these challenges, we propose a two-phase optimization algorithm that first handles dependency-aware task allocation and subsequently optimizes energy consumption. By modeling the inference process as directed acyclic graphs (DAGs) and applying constraint relaxation techniques, our approach effectively reduces execution latency and energy usage. Experimental results demonstrate that our method achieves a reduction of up to 20% in task completion time and approximately 30% savings in energy consumption compared to traditional methods. These outcomes underscore the solution's robustness in managing complex sequential dependencies and dynamic MEC conditions, enhancing quality of service. Our work thus presents a practical and efficient resource optimization strategy for deploying such models in resource-constrained MEC scenarios.
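Dependency-aware allocation over a DAG presupposes an execution order that respects every inter-layer edge. A minimal sketch of that prerequisite using Kahn's algorithm follows; the paper's allocation logic itself is not reproduced, and the integer task indices and edge format are assumptions.

```python
from collections import deque

def topological_order(num_tasks, edges):
    """Kahn's algorithm: return an order of tasks/layers that respects
    every dependency edge (u, v), meaning u must run before v."""
    indeg = [0] * num_tasks
    adj = [[] for _ in range(num_tasks)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(num_tasks) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != num_tasks:
        raise ValueError("dependency graph contains a cycle")
    return order
```

An offloader could then walk this order and assign each task to an edge server only after its predecessors have been placed, which is what makes the allocation "dependency-aware".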
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance the user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks according to their priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thereby improving user experience and system efficiency.
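The prioritized experience replay mechanism mentioned above can be sketched as a proportional sampler: transitions are drawn with probability proportional to priority raised to an exponent alpha. This is the standard scheme, not necessarily the paper's improved variant, and a production buffer would use a sum-tree rather than this linear scan.

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized experience replay buffer.
    Transition i is sampled with probability p_i^alpha / sum_j p_j^alpha."""

    def __init__(self, capacity, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
        self.rng = random.Random(seed)

    def add(self, transition, priority):
        # Evict the oldest entry once capacity is reached (FIFO).
        if len(self.data) == self.capacity:
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(priority)

    def sample(self, batch_size):
        weights = [p ** self.alpha for p in self.prios]
        picks = self.rng.choices(range(len(self.data)), weights=weights, k=batch_size)
        return [self.data[i] for i in picks]
```

In DDQN training, the priority is typically the magnitude of a transition's temporal-difference error, so surprising transitions are replayed more often than routine ones.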
Mobile Edge Computing (MEC) is an emerging technology in the 5G era which enables the provision of cloud and IT services within close proximity of mobile subscribers, allowing cloud servers to be deployed inside or adjacent to the base station. The end-to-end latency perceived by the mobile user is therefore reduced with the MEC platform. Application developers can deliver context-aware services by leveraging real-time radio access network information from MEC. MEC additionally enables the execution of compute-intensive applications on resource-constrained devices through collaborative computing involving the cloud servers. This paper presents an architectural description of the MEC platform and the key functionalities enabling the above features. The relevant state-of-the-art research efforts are then surveyed, and the paper finally discusses and identifies the open research challenges of MEC.
Funding: supported by the Deanship of Graduate Studies and Scientific Research at Jouf University.
Funding: Supported by the following projects: the National Natural Science Foundation of China (62461041) and the Natural Science Foundation of Jiangxi Province, China (20242BAB25068).
Abstract: With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use Domain Generation Algorithms (DGA) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGA, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and integrates the ATT for key-feature weighting, forming multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to a simplified DistilBERT student model. Experimental results show the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while compressing parameters to approximately 38.4% of the teacher model's scale, greatly reducing computational overhead for IoT deployment.
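The knowledge distillation step described above can be sketched with the standard temperature-scaled distillation loss (Hinton-style). The paper's improved variant is not specified in the abstract, so this is a generic illustration; the logits and temperature are assumed values.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between the softened teacher and student
    distributions, scaled by T^2 as in classic knowledge distillation."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -T * T * sum(pt * math.log(ps) for pt, ps in zip(p_teacher, p_student))

# a student matching the teacher's logits incurs the minimum loss
print(distillation_loss([2.0, 0.5], [2.0, 0.5]))
print(distillation_loss([0.5, 2.0], [2.0, 0.5]))  # mismatched: larger loss
```

In practice this soft-target term is combined with the ordinary hard-label cross-entropy; the mixing weight is a tunable hyperparameter.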
Abstract: Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great amount of computation-intensive business and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, alleviating the pressure on devices. The ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation offloading strategy to satisfy the various quality of service (QoS) requirements of edge devices. To dynamically allocate limited resources, the above problem is formulated as multi-agent distributed deep reinforcement learning (DRL), which achieves a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is added to train DRL agents in a distributed manner, where each agent is dedicated to exploring actions composed of offloading decisions and resource allocations, jointly optimizing system delay and energy consumption. Simulation results show that the proposed learning algorithm performs better than other strategies in the literature.
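Federated training of DRL agents, as mentioned above, typically aggregates locally learned parameters with FedAvg-style weighted averaging. The abstract does not give the aggregation rule, so this sketch assumes the common convention of weighting each agent by its local sample count.

```python
def fed_avg(agent_weights, sample_counts):
    """Weighted average of per-agent parameter vectors (FedAvg):
    each agent contributes proportionally to its local sample count."""
    total = sum(sample_counts)
    dim = len(agent_weights[0])
    return [
        sum(w[i] * n for w, n in zip(agent_weights, sample_counts)) / total
        for i in range(dim)
    ]

# two agents with equal data: the global model is the plain mean
print(fed_avg([[1.0, 2.0], [3.0, 4.0]], [10, 10]))  # [2.0, 3.0]
```

Each agent would then resume local exploration from the aggregated parameters, keeping raw experience data on-device.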
Funding: Funded by the Shandong University of Technology Doctoral Program in Science and Technology, grant number 4041422007.
Abstract: The rapid advance of Connected and Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption. We propose a three-stage hybrid offloading scheme called the Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address this problem. A novel dynamic clustering algorithm is designed based on vehicle mobility and task offloading efficiency requirements, where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
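The Pareto-optimal solution set for delay and energy that MWOA converges to is defined by the usual dominance relation for minimization problems. A minimal sketch of that filter follows; the sample objective values are illustrative, not results from the paper.

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (delay, energy) and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# (delay, energy) pairs: (3,1) and (1,3) trade off, (2,2) also survives,
# while (3,3) is dominated by (2,2) and is removed
print(pareto_front([(3, 1), (1, 3), (2, 2), (3, 3)]))
```

The whale-optimization search proposed in the paper repeatedly applies this kind of filter to its candidate population to approximate the true front.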
Funding: Funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP23489127).
Abstract: The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses serious cybersecurity threats because of the complexity and connectivity of the systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In the proposed study, we suggest an impact-aware, taxonomy-driven machine learning framework with edge deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also taxonomy-based supervised classification that groups 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-in-the-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to improve transparency and trust and to identify the important features driving classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge computing implementation strategy is proposed, whereby lightweight computation is performed on edge devices and heavy, computation-intensive analytics is performed in the cloud. The framework is highly accurate, interpretable, and applicable in real time, hence a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyberattacks.
Funding: Supported by the Future Network Scientific Research Fund Project (FNSRFP-2021-ZD-4), the National Natural Science Foundation of China (Nos. 61991404 and 61902182), the National Key Research and Development Program of China under Grant 2020YFB1600104, and the Key Research and Development Plan of Jiangsu Province under Grant BE2020084-2.
Abstract: With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), one of the deep reinforcement learning algorithms, with centralized training and distributed execution, is proposed to optimize real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm achieves significant improvements in system delay over baseline algorithms.
Funding: Supported in part by the Aeronautical Science Foundation of China under Grant 2022Z005057001 and the Joint Research Fund of the Shanghai Commercial Aircraft System Engineering Science and Technology Innovation Center under CASEF-2023-M19.
Abstract: An aileron is a crucial control surface for roll. Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft's stability, maneuverability, safety, and lifespan. This paper presents a robust solution in the form of fast flutter-suppression digital control logic for edge computing aileron mechatronics (ECAM). We have effectively eliminated passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger. Our findings demonstrate that self-tuning nonlinear parameters can optimize stability, robustness, and accuracy, while the antiphase hysteresis Schmitt trigger effectively rejects flutter without the need for collaborative navigation and guidance. Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring the expected stability and maneuverability. In conclusion, this nonlinear aileron mechatronics with a Schmitt positive-feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
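The flutter-rejecting behavior of a hysteresis Schmitt trigger can be illustrated with a simple software comparator: small oscillations inside the hysteresis band cannot flip the output, so chatter is suppressed. The thresholds below are illustrative assumptions, not values from the paper.

```python
def schmitt_trigger(samples, low=-0.5, high=0.5, state=False):
    """Hysteresis comparator: the output flips high only above `high`
    and low only below `low`, so noise inside the band cannot chatter."""
    out = []
    for x in samples:
        if x > high:
            state = True
        elif x < low:
            state = False
        # values between low and high keep the previous state (hysteresis)
        out.append(state)
    return out

# small oscillations around zero are rejected; only large swings flip state
print(schmitt_trigger([0.1, -0.2, 0.3, 0.9, 0.2, -0.1, -0.8]))
```

The same idea, applied in antiphase to the oscillating response, is what lets the control logic reject flutter without extra sensing.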
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) (Project Nos. RS-2024-00438551, 30%; 2022-11220701, 30%; 2021-0-01816, 30%) and the National Research Foundation of Korea (NRF) grant funded by the Korean Government (Project No. RS2023-00208460, 10%).
Abstract: The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance using models such as Random Forest and Logistic Regression, reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. This study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations, offering a security solution for edge environments that is both efficient and scalable.
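A straightforward DNA encoding maps each byte to four bases at two bits per base. This sketch illustrates the general idea only; GENOME's standard and compressed algorithms are not specified in the abstract, so the mapping here is an assumption.

```python
BASES = "ACGT"  # 2 bits per base

def dna_encode(data: bytes) -> str:
    """Map each byte to four DNA bases, two bits per base
    (an illustrative scheme; the paper's exact encoding may differ)."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def dna_decode(seq: str) -> bytes:
    """Invert dna_encode: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

sample = b"\x00\xff"
encoded = dna_encode(sample)
print(encoded)  # "AAAATTTT": 0x00 -> AAAA, 0xff -> TTTT
assert dna_decode(encoded) == sample
```

Note that this plain 2-bit mapping is size-neutral in information terms; GENOME's reported 42% size reduction presumably comes from its compressed variant and downstream representation, which the abstract does not detail.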
Funding: Supported by the National Natural Science Foundation of China (No. 62101499) and the National Key Research and Development Program of China (2019YFB1803200).
Abstract: Non-panoramic virtual reality (VR) provides users with immersive experiences involving strong interactivity, thus attracting growing research and development attention. However, the demand for high bandwidth and low latency in VR services presents greater challenges to existing networks. Inspired by mobile edge computing (MEC), VR users can offload rendering tasks to other devices. The main challenge of task offloading is to minimize latency and energy consumption. Yet, in non-panoramic VR scenarios, it is essential to consider users' Quality of Perceptual Experience (QOPE), as well as the diverse requirements of users in real-world scenarios. Therefore, this paper proposes a QOPE model to measure the visual quality perceived by non-panoramic VR users and models the MEC-based non-panoramic VR task offloading problem as a constrained multi-objective optimization problem (CMOP) that minimizes latency and energy consumption while providing satisfactory QOPE. We then propose an evolutionary algorithm (EA), GNSGA-II, to solve the CMOP. Simulation results show that the algorithm can effectively find various trade-off solutions among the objectives, satisfying the requirements of different users.
Abstract: Blockchain-based audiovisual transmission systems were built to create a distributed and flexible smart transport system (STS). Such a system lets customers, video creators, and service providers connect with each other directly. Blockchain-based STS devices need substantial computing power to transcode video feeds into the different quality levels, versions, and formats that meet the needs of different users. However, existing blockchains cannot support live streaming because they take too long to process transactions and lack sufficient computing power, and the transmission and analysis of large amounts of video data place excessive stress on vehicular networks. This paper proposes a video surveillance method to improve the data performance of the blockchain system and lower latency across the multi-access edge computing (MEC) system. A framework integrating MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS) is proposed. The joint optimization problem is modeled as a Markov Decision Process (MDP) and solved with an asynchronous advantage actor-critic (ACAA) method based on deep reinforcement learning. Simulation results show that the suggested method converges quickly and improves the performance of combining MEC and blockchain for video surveillance in self-driving cars compared to other methods.
Funding: The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number RGP2/337/46. The research team thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama'a program, with the project code NU/GP/SERC/13/352-4.
Abstract: Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges, and EC has become a transformative paradigm for addressing them, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combining Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability, capturing the strategic interplay between attackers and defenders under uncertainty. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge, ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
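The double deep Q-learning component mentioned above decouples action selection from action evaluation to reduce overestimation bias: the online network picks the next action and the target network evaluates it. A minimal sketch of the target computation follows; the Q-values, reward, and discount factor are illustrative, not from the paper.

```python
def double_dqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target: the online network selects the
    best next action; the target network supplies its value estimate."""
    if done:
        return reward  # terminal transition: no bootstrapping
    best_action = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[best_action]

# online net prefers action 1, so the target net's value for action 1 is used
print(double_dqn_target(1.0, 0.9, [0.2, 0.8], [0.5, 0.4], done=False))
```

In the full framework each edge device would compute such targets locally during FL rounds, never exporting its raw sensor data.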
Funding: Supported in part by the National Science Foundation of China (Grant No. 62172450), the Key R&D Plan of Hunan Province (Grant No. 2022GK2008), and the Natural Science Foundation of Hunan Province (Grant No. 2020JJ4756).
Abstract: In task offloading, the movement of vehicles causes switching of the connected RSUs and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movement on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. A Bi-LSTM-based model is then proposed to predict vehicle trajectories. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model belong to the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the trajectory prediction. Considering the inevitable prediction error, we select edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidates, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into a reduction in offloading delay. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
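The grid-based correctness criterion for trajectory prediction described above is easy to state in code: a prediction counts as correct when the actual and predicted positions map to the same grid cell. The cell size here is an assumed parameter, not one given in the abstract.

```python
def grid_cell(x, y, cell_size=100.0):
    """Map a 2-D position to its grid cell; the service area is divided
    into equal-sized square cells (the cell size is an assumption)."""
    return (int(x // cell_size), int(y // cell_size))

def prediction_correct(actual, predicted, cell_size=100.0):
    """A trajectory prediction is correct when both positions
    fall in the same grid cell."""
    return grid_cell(*actual, cell_size) == grid_cell(*predicted, cell_size)

print(prediction_correct((130.0, 45.0), (170.0, 80.0)))  # same cell (1, 0)
print(prediction_correct((130.0, 45.0), (210.0, 80.0)))  # different cells
```

Coarsening the prediction target from exact coordinates to cells is exactly what makes the Bi-LSTM's task tractable, at the cost of needing candidate servers around the predicted cell to absorb residual error.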
Funding: Supported by Universiti Putra Malaysia and the Ministry of Higher Education Malaysia under grant number FRGS/1/2023/ICT11/UPM/02/3.
Abstract: As mobile edge computing continues to develop, the demand for resource-intensive applications is steadily increasing, placing a significant strain on edge nodes. These nodes are normally subject to various constraints, such as limited processing capability, scarce energy sources, and erratic availability. These problems call for an effective task allocation algorithm that optimizes resources while sustaining high system performance and dependability in dynamic environments. This paper proposes an improved Particle Swarm Optimization technique, known as IPSO, for multi-objective optimization in edge computing. The IPSO algorithm seeks a trade-off between two important objectives: minimizing energy consumption and reducing task execution time. Through global-best position mutation and dynamic adjustment of the inertia weight, the proposed algorithm can effectively distribute tasks among edge nodes, reducing both task execution time and energy consumption. In comparative assessments against benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization (EADPSO) and ICBA, IPSO provides better results. For the maximum task size, IPSO reduces execution time by 17.1% and energy consumption by 31.58% compared with the benchmarks. These results show that IPSO is an efficient and scalable technique for task allocation in edge environments, providing peak efficiency while handling scarce resources and variable workloads.
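The dynamic inertia-weight adjustment mentioned above is commonly implemented as a linear decay from exploration-friendly to exploitation-friendly values, combined with the standard PSO velocity update. This is a sketch under that assumption; the schedule, bounds, and acceleration coefficients are illustrative, not IPSO's exact rule.

```python
import random

def inertia(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: explore early, exploit late
    (a common PSO schedule; the paper's exact rule may differ)."""
    return w_max - (w_max - w_min) * iteration / max_iter

def pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle, pulled toward
    its personal best (pbest) and the swarm's global best (gbest)."""
    r1, r2 = random.random(), random.random()
    new_vel = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
               for v, x, pb, gb in zip(vel, pos, pbest, gbest)]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

print(inertia(0, 100))    # 0.9 at the start of the run
print(inertia(100, 100))  # 0.4 at the end
```

IPSO's global-best position mutation would perturb `gbest` occasionally to escape local optima; the abstract does not give the mutation operator, so it is omitted here.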
Funding: The authors thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).
Abstract: The rapid expansion of the Internet of Things (IoT) has introduced significant security challenges due to the scale, complexity, and heterogeneity of interconnected devices. Traditional centralized security models are ill-suited to dealing with these threats, especially in decentralized applications where IoT devices may operate on minimal resources. Emerging technologies, including Artificial Intelligence (AI), blockchain, edge computing, and Zero-Trust Architecture (ZTA), offer potential solutions through improved threat detection, data integrity, and real-time system resilience. AI offers sophisticated anomaly detection and predictive analytics, while blockchain delivers decentralized, tamper-proof assurance over device communication and information exchange. Edge computing enables low-latency processing by distributing the computational workload near the devices. ZTA enhances security by continuously verifying each device and user on the network, adhering to the "never trust, always verify" principle. This paper reviews these technologies, examining how they are used to secure IoT ecosystems, the issues raised by their integration, and the possibility of developing a multi-layered, adaptive security architecture. Major concerns, such as scalability, resource limitations, and interoperability, are identified, and directions for optimizing the application of AI, blockchain, and edge computing in zero-trust IoT systems are discussed.
Abstract: This paper presents an algorithm named the dependency-aware offloading framework (DeAOff), which is designed to optimize the deployment of generative-AI decoder models in mobile edge computing (MEC) environments. Such decoder models pose significant challenges due to their inter-layer dependencies and high computational demands, especially under edge resource constraints. To address these challenges, we propose a two-phase optimization algorithm that first handles dependency-aware task allocation and subsequently optimizes energy consumption. By modeling the inference process with directed acyclic graphs (DAGs) and applying constraint-relaxation techniques, our approach effectively reduces execution latency and energy usage. Experimental results demonstrate that the method achieves a reduction of up to 20% in task completion time and approximately 30% savings in energy consumption compared to traditional methods. These outcomes underscore the solution's robustness in managing complex sequential dependencies and dynamic MEC conditions, enhancing quality of service. Our work thus presents a practical and efficient resource optimization strategy for deploying models in resource-constrained MEC scenarios.
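Dependency-aware allocation over a DAG starts from a valid topological order of the model's layers, so that no task is scheduled before the layers it depends on. A minimal sketch using Kahn's algorithm follows; the layer names and dependency structure are illustrative.

```python
from collections import deque

def topo_order(deps):
    """Kahn's algorithm: return a valid execution order for tasks whose
    dependencies form a DAG. deps[t] lists the tasks t depends on."""
    indegree = {t: len(ds) for t, ds in deps.items()}
    children = {t: [] for t in deps}
    for t, ds in deps.items():
        for d in ds:
            children[d].append(t)
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    return order

# decoder layer B needs A; C needs both: a valid order is A, B, C
print(topo_order({"A": [], "B": ["A"], "C": ["A", "B"]}))
```

A scheduler like DeAOff would then assign each task in this order to an edge or cloud node, with the energy-optimization phase choosing among the placements the ordering permits.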
Abstract: Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Networking (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks according to their priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Finally, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
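The prioritized experience replay used with the Dueling-DDQN above samples transitions with probability proportional to priority raised to a power alpha, so high-error transitions are revisited more often. A minimal illustration follows; alpha and the priority values are assumed, not taken from the paper.

```python
import random

def sample_prioritized(priorities, k, alpha=0.6, rng=random):
    """Sample k transition indices with probability proportional to
    priority**alpha, as in prioritized experience replay."""
    scaled = [p ** alpha for p in priorities]
    total = sum(scaled)
    probs = [s / total for s in scaled]
    return rng.choices(range(len(priorities)), weights=probs, k=k)

random.seed(0)
# the high-TD-error transition (index 3) is drawn far more often
counts = [0] * 4
for i in sample_prioritized([0.1, 0.1, 0.1, 5.0], k=1000):
    counts[i] += 1
print(counts)
```

A production implementation would also apply importance-sampling weights to correct the bias this non-uniform sampling introduces into the Q-learning updates.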
Abstract: Mobile Edge Computing (MEC) is an emerging technology of the 5G era that enables the provision of cloud and IT services in close proximity to mobile subscribers. It allows cloud servers to be deployed inside or adjacent to the base station, so the end-to-end latency perceived by the mobile user is reduced. Application developers can deliver context-aware services by leveraging real-time radio access network information exposed by MEC. MEC additionally enables compute-intensive applications to execute on resource-constrained devices through collaborative computing with cloud servers. This paper presents an architectural description of the MEC platform and the key functionalities enabling the above features, surveys the relevant state-of-the-art research efforts, and finally discusses and identifies the open research challenges of MEC.