Multi-access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than to the remote cloud server, meeting the low-latency and high-reliability requirements of 5G application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional Internet services, these computation tasks have higher processing priority and stricter delay requirements. In this paper, we design a 5G-based vehicle-aware Multi-access Edge Computing network (VAMECN) and formulate a joint optimization problem of minimizing the total system cost. To solve this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is modeled as a Markov Decision Process. Experiments show that, compared with other computation offloading policies, the proposed algorithm can effectively reduce task processing delay and device energy consumption, optimize the offloading and resource allocation schemes, and improve system resource utilization.
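The MDP framing this abstract describes can be illustrated with a deliberately tiny tabular sketch: states pair a task size with the current edge-server load, actions are local execution versus offloading, and the reward is the negative system cost. Everything numeric below (cost coefficients, state ranges) is an illustrative assumption; the paper's JCOTM agent is a deep RL model over a far richer state.

```python
import random

LOCAL, OFFLOAD = 0, 1

def system_cost(task_size, server_load, action):
    # Weighted delay/energy proxy with made-up coefficients.
    if action == LOCAL:
        return task_size * 1.0                      # slow local CPU, no radio cost
    return task_size * 0.3 + server_load * 0.5      # uplink cost + edge queuing

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}  # (state, action) -> estimated reward; episodes are single-step
    for _ in range(episodes):
        s = (rng.randint(1, 5), rng.randint(0, 5))  # random task size / load draw
        if rng.random() < eps:
            a = rng.choice((LOCAL, OFFLOAD))        # epsilon-greedy exploration
        else:
            a = max((LOCAL, OFFLOAD), key=lambda x: q.get((s, x), 0.0))
        r = -system_cost(s[0], s[1], a)             # reward = negative system cost
        q[(s, a)] = q.get((s, a), 0.0) + alpha * (r - q.get((s, a), 0.0))
    return q

def policy(q, s):
    """Greedy action for state s under the learned value table."""
    return max((LOCAL, OFFLOAD), key=lambda a: q.get((s, a), 0.0))
```

With this toy cost model, `train()` converges to the intuitive policy: a small task facing a busy server stays local, while a large task facing an idle server offloads.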
In this paper, the security problem for the multi-access edge computing (MEC) network is studied, and an intelligent immunity-based security defense system is proposed to identify unauthorized mobile users and to protect the security of the whole system. In the proposed system, security is protected by intelligent immunity through three functions: an identification function, a learning function, and a regulation function. Meanwhile, a three-process intelligent algorithm is proposed for the intelligent immunity system. Numerical simulations are given to prove the effectiveness of the proposed approach.
The main aim of future mobile networks is to provide secure, reliable, intelligent, and seamless connectivity. It also enables mobile network operators to ensure their customers a better quality of service (QoS). Nowadays, Unmanned Aerial Vehicles (UAVs) are a significant part of the mobile network due to their continuously growing use in various applications. For better coverage, cost-effectiveness, and seamless service connectivity and provisioning, UAVs have emerged as the best choice for telco operators. UAVs can be used as flying base stations, edge servers, and relay nodes in mobile networks. On the other side, Multi-access Edge Computing (MEC) technology has also emerged in the 5G network to provide a better quality of experience (QoE) to users with different QoS requirements. However, UAVs in a mobile network face several challenges for coverage enhancement and better QoS, such as trajectory design, path planning, optimization, QoS assurance, and mobility management. Efficient and proactive path planning and optimization in a highly dynamic environment containing buildings and obstacles is challenging. So, an automated Artificial Intelligence (AI)-enabled QoS-aware solution is needed for trajectory planning and optimization. Therefore, this work introduces a well-designed AI- and MEC-enabled architecture for a UAV-assisted future network. It has an efficient Deep Reinforcement Learning (DRL) algorithm for real-time and proactive trajectory planning and optimization, and it fulfills QoS-aware service provisioning. A greedy-policy approach is used to maximize the long-term reward for serving more users with QoS. Simulation results reveal the superiority of the proposed DRL mechanism for energy-efficient and QoS-aware trajectory planning over the existing models.
Blockchain and multi-access edge computing (MEC) are two emerging, promising technologies that have received extensive attention from academia and industry. As a brand-new information storage, dissemination, and management mechanism, blockchain technology achieves the reliable transmission of data and value. As a new computing paradigm, multi-access edge computing enables the high-frequency interaction and real-time transmission of data. The integration of communication and computing in blockchain-enabled multi-access edge computing networks has so far been studied without a systematic view. In this survey, we focus on the integration of communication and computing, explore the mutual empowerment and mutual promotion between blockchain and MEC, and introduce the resource integration architecture of blockchain and multi-access edge computing. Then, the paper summarizes the applications of the resource integration architecture, resource management, data sharing, incentive mechanisms, and consensus mechanisms, and analyzes the corresponding applications in real-world scenarios. Finally, future challenges and potentially promising research directions are discussed and presented in detail.
5G is a new generation of mobile networking that aims to achieve unparalleled speed and performance. To accomplish this, three technologies have become a significant part of 5G and are the main focus of this paper: Device-to-Device communication (D2D), multi-access edge computing (MEC), and network function virtualization (NFV) with ClickOS. D2D enables direct communication between devices without relaying through a base station. In 5G, a two-tier cellular network composed of the traditional cellular system and D2D is an efficient method for realizing high-speed communication. MEC unloads work from end devices and cloud platforms to widespread nodes, and connects the nodes together with outside devices and third-party providers, in order to diminish the overloading effect caused on any device by enormous applications and to improve users' quality of experience (QoE). An NFV method is also introduced to fulfill the 5G requirements: an optimized virtual machine for middleboxes named ClickOS, which is evaluated in several aspects. Some middleboxes have been implemented in ClickOS and shown to have outstanding performance.
Edge computing is swiftly gaining traction and is being standardised by the European Telecommunications Standards Institute (ETSI) as Multi-access Edge Computing (MEC). Simultaneously, oneM2M has been actively developing standards for dynamic data management and IoT services at the edge, particularly for applications that require real-time support and security. Integrating MEC and oneM2M offers a unique opportunity to maximize their individual strengths. Therefore, this article proposes a framework that integrates the MEC and oneM2M standard platforms for IoT applications, demonstrating how the synergy of these architectures can leverage the geographically distributed computing resources at base stations, enabling efficient deployment and added value for time-sensitive IoT applications. In addition, this study offers a concept of potential interworking models between the oneM2M and MEC architectures. The adoption of these standard architectures can enable various IoT edge services, such as smart-city mobility and real-time analytics functions, by leveraging the oneM2M common service layer instantiated on the MEC host.
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Networking (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy that reduces the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
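The prioritized experience replay mentioned above can be sketched independently of the Dueling-DDQN that consumes it. This is the standard proportional scheme (sampling probability grows with the magnitude of the TD error); the list-based buffer and the constants are simplifications, and a production implementation would use a sum-tree plus importance-sampling weights.

```python
import random

class PrioritizedReplay:
    """Proportional prioritized experience replay (simplified sketch)."""

    def __init__(self, capacity=1000, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []
        self.rng = random.Random(seed)

    def add(self, transition, td_error):
        # Priority grows with |TD error|; epsilon keeps it strictly positive.
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:      # FIFO eviction when full
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, k):
        # Sample proportionally to stored priorities (with replacement).
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = self.rng.choices(range(len(self.buffer)), weights=probs, k=k)
        return [self.buffer[i] for i in idx]
```

Transitions with large TD errors are replayed far more often, which is exactly the mechanism the paper tunes inside its Dueling-DDQN loop.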
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the environment. In actual application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
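The NSGA-II component rests on fast non-dominated sorting over (delay, energy) pairs. A minimal version of that sorting step, not the paper's full genetic loop, looks like this (O(n²) comparisons per front, acceptable for small populations):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both delay and energy are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Peel off successive non-dominated fronts, as in NSGA-II's sorting step.
    Returns lists of indices; front 0 is the Pareto front."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For example, with (delay, energy) candidates `[(1, 5), (5, 1), (2, 2), (3, 3), (4, 4)]`, the first three are mutually non-dominated and form the Pareto front, while the last two fall into successive dominated fronts.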
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
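Once the offloading and time-switching problem has been decoupled into a knapsack, as the abstract states, the textbook 0/1 knapsack dynamic program applies. The mapping of offloading variables onto weights and values is the paper's contribution; the solver below is just the standard DP over an integer capacity:

```python
def knapsack(weights, values, capacity):
    """Classic 0/1 knapsack: maximize total value subject to a weight budget.
    dp[c] holds the best value achievable with capacity c; iterating c
    downward ensures each item is used at most once."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

For instance, with weights `[1, 3, 4, 5]`, values `[1, 4, 5, 7]`, and capacity 7, the optimum picks the weight-3 and weight-4 items for a total value of 9.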
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software-Defined Networking (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems. The survey concludes that SDN-enabled computing environments offer essential guidance for addressing upcoming resource management challenges.
The number of satellites, especially those operating in Low Earth Orbit (LEO), has exploded in recent years. Additionally, the burgeoning development of Artificial Intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared to terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous survey papers have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges faced, potential directions for future OEC research are proposed.
Bluetooth Low Energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio-frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the localization error from a worst-case distance of 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
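A common way to turn three beacons' RSSI readings into a position, and plausibly the kind of pipeline behind numbers like these, is log-distance path-loss inversion followed by linearized trilateration. The transmit power at 1 m and the path-loss exponent below are deployment-specific assumptions, not values from the study:

```python
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Invert the log-distance path-loss model: rssi = tx_power - 10*n*log10(d).
    tx_power is the RSSI measured at 1 m; n is the path-loss exponent
    (both must be calibrated per environment)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, dists):
    """Linearized trilateration for exactly three beacons in 2D.
    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With beacons at (0, 0), (10, 0), and (0, 10) and exact distances to the point (3, 4), the solver recovers (3, 4); real RSSI noise is what limits accuracy in practice.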
The rapid advance of Connected and Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption. We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address this problem. A novel dynamic clustering algorithm is designed based on vehicle mobility and task offloading efficiency requirements, where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: a 40% latency reduction, a 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), a 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
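The DRL-learned priority in this framework can be approximated, for illustration only, by a fixed weighted score over task attributes; the weights and the linear form below are assumptions standing in for the learned policy, which would instead adapt the scores from observed system behavior:

```python
def priority(task, w_deadline=0.7, w_size=0.3):
    """Static stand-in for a learned priority: earlier deadlines and smaller
    tasks score lower and therefore run first. Weights are illustrative."""
    return w_deadline * task["deadline"] + w_size * task["size"]

def schedule(tasks):
    """Order tasks by ascending priority score; ties keep input order
    because Python's sort is stable."""
    return [t["id"] for t in sorted(tasks, key=priority)]
```

A task with deadline 1 and size 5 then runs before one with deadline 5 and size 1, which in turn runs before one with deadline 10; a DRL agent would learn such trade-offs rather than fix them.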
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV that minimizes the average system cost is presented. Through the joint consideration of computation offloading decisions and computational resource allocation, the optimization problem of our proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and an alternating direction method of multipliers (ADMM)-based distributed optimization scheme is developed. The simulation results illustrate that the presented scheme enhances system performance dramatically with regard to other schemes, and the convergence of the proposed scheme is also significant.
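The ADMM machinery the article relies on can be shown on a toy consensus problem: each agent i holds a private quadratic cost (x − a_i)² and all agents must agree on one value. The closed-form x-update, averaging z-update, and dual ascent below are the generic scaled-form ADMM steps, not the paper's vehicular model:

```python
def consensus_admm(a_vals, rho=1.0, iters=100):
    """Scaled-form consensus ADMM for min sum_i (x_i - a_i)^2 s.t. x_i = z.
    The optimum is the mean of a_vals; each agent only touches its own data."""
    n = len(a_vals)
    x = [0.0] * n          # local copies, one per agent
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(iters):
        # x-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2, closed form
        x = [(2 * a + rho * (z - ui)) / (2 + rho) for a, ui in zip(a_vals, u)]
        # z-update: average of x_i + u_i enforces agreement
        z = sum(xi + ui for xi, ui in zip(x, u)) / n
        # dual ascent on the consensus constraint x_i = z
        u = [ui + xi - z for ui, xi in zip(u, x)]
    return z
```

For targets `[1, 2, 3, 6]` the iterates converge to the mean 3.0; the appeal in a distributed MEC setting is that each x-update uses only local information.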
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes present ITS with challenges. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is proposed to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm offers significant improvements over baseline algorithms in terms of system delay.
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In this study, we propose an impact-aware, taxonomy-driven machine learning framework with edge deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also supervised, taxonomy-based classification that groups 33 kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-in-the-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to provide transparency and trust and to identify the important features driving the classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge computing implementation strategy is proposed, whereby lightweight computation is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. This framework is highly accurate and interpretable and supports real-time application, hence offering a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
Low Earth Orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading problem and the resource allocation problem are formulated as a mixed-integer nonlinear program (MINLP). The paper proposes a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
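The Lagrange-multiplier step has a well-known closed form when each user's processing delay is modeled as required cycles divided by allocated frequency, under a total CPU budget: stationarity of the Lagrangian gives f_i ∝ √(c_i), and the budget constraint eliminates the multiplier. The delay model is a standard simplification, not necessarily the paper's exact objective:

```python
import math

def allocate_cpu(cycles, total_f):
    """Minimize sum_i cycles_i / f_i subject to sum_i f_i = total_f, f_i > 0.
    Setting d/df_i (cycles_i/f_i + lam*f_i) = 0 gives f_i = sqrt(cycles_i/lam);
    substituting into the budget yields f_i = total_f * sqrt(c_i) / sum_j sqrt(c_j)."""
    root = [math.sqrt(c) for c in cycles]
    s = sum(root)
    return [total_f * r / s for r in root]
```

With cycle demands `[1, 4, 9]` and a budget of 12, the allocation is `[2, 4, 6]`, and its total delay beats a uniform split, which is the optimality the multiplier method guarantees here.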
Vehicular Edge Computing (VEC) enhances the quality of user services by deploying a wealth of resources near vehicles. However, due to the highly dynamic and complex nature of vehicular networks, centralized decision-making for resource allocation proves inadequate within VEC. Conversely, allocating resources via distributed decision-making consumes vehicular resources. To improve the quality of user service, we formulate a latency minimization problem and subdivide it into two subproblems to be solved through distributed decision-making. To mitigate the resource consumption caused by distributed decision-making, we propose a Reinforcement Learning (RL) algorithm based on a sequential alternating multi-agent system mechanism, which effectively reduces the dimensionality of the action space without losing the informational content of actions, achieving network lightweighting. We discuss the rationality, generalizability, and inherent advantages of the proposed mechanism. Simulation results indicate that our proposed mechanism outperforms traditional RL algorithms in terms of stability, generalizability, and adaptability to scenarios with invalid actions, all while achieving network lightweighting.
Funding: Supported in part by the National Key R&D Program of China under Grant 2018YFC0831502.
Funding: This work was supported by the National Natural Science Foundation of China (No. 61971026) and the Fundamental Research Funds for the Central Universities (No. FRF-TP-18-008A3).
Abstract: In this paper, the security problem of the multi-access edge computing (MEC) network is studied, and an intelligent immunity-based security defense system is proposed to identify unauthorized mobile users and to protect the security of the whole system. In the proposed defense system, security is protected by intelligent immunity through three functions: an identification function, a learning function, and a regulation function. Meanwhile, a three-process intelligent algorithm is proposed for the intelligent immunity system. Numerical simulations are given to prove the effectiveness of the proposed approach.
Funding: This work was supported by the Fundamental Research Funds for the Central Universities (No. 2019XD-A07), the Director Fund of the Beijing Key Laboratory of Space-Ground Interconnection and Convergence, and the National Key Laboratory of Science and Technology on Vacuum Electronics.
Abstract: The main aim of future mobile networks is to provide secure, reliable, intelligent, and seamless connectivity. It also enables mobile network operators to ensure their customers a better quality of service (QoS). Nowadays, Unmanned Aerial Vehicles (UAVs) are a significant part of the mobile network due to their continuously growing use in various applications. For better coverage, cost-effectiveness, and seamless service connectivity and provisioning, UAVs have emerged as the best choice for telco operators. UAVs can be used as flying base stations, edge servers, and relay nodes in mobile networks. On the other side, Multi-access Edge Computing (MEC) technology has also emerged in the 5G network to provide a better quality of experience (QoE) to users with different QoS requirements. However, UAVs in a mobile network face several challenges for coverage enhancement and better QoS, such as trajectory design, path planning, optimization, QoS assurance, and mobility management. Efficient and proactive path planning and optimization in a highly dynamic environment containing buildings and obstacles are challenging. So, an automated Artificial Intelligence (AI)-enabled QoS-aware solution is needed for trajectory planning and optimization. Therefore, this work introduces a well-designed AI- and MEC-enabled architecture for UAV-assisted future networks. It has an efficient Deep Reinforcement Learning (DRL) algorithm for real-time and proactive trajectory planning and optimization, and it fulfills QoS-aware service provisioning. A greedy-policy approach is used to maximize the long-term reward for serving more users with QoS. Simulation results reveal the superiority of the proposed DRL mechanism for energy-efficient and QoS-aware trajectory planning over the existing models.
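The trajectory-planning objective above can be conveyed with a much simpler deterministic stand-in. The paper uses DRL; the sketch below instead runs tabular value iteration on a small grid with an obstacle wall (a building, say) to show the underlying planning problem a UAV policy must solve. The grid size, obstacle layout, and cost model are all invented for illustration.

```python
# Toy illustration of trajectory planning as value iteration on a small
# grid with an obstacle. The paper's DRL agent learns this kind of
# cost-to-go function from experience; here it is computed exactly.

def value_iteration(w, h, goal, obstacles, step_cost=1.0, iters=100):
    """Compute cost-to-go for each free cell under 4-connected moves."""
    INF = float("inf")
    V = {(x, y): INF for x in range(w) for y in range(h)
         if (x, y) not in obstacles}
    V[goal] = 0.0
    for _ in range(iters):
        for (x, y) in V:
            if (x, y) == goal:
                continue
            best = INF
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb = (x + dx, y + dy)
                if nb in V and V[nb] + step_cost < best:
                    best = V[nb] + step_cost
            V[(x, y)] = min(V[(x, y)], best)
    return V

# 5x5 grid, goal at (4, 4), a small wall the trajectory must go around.
V = value_iteration(5, 5, (4, 4), obstacles={(2, 1), (2, 2), (2, 3)})
print(V[(0, 0)])   # optimal number of steps from the start cell
```

Following the steepest descent of `V` from any cell traces a shortest obstacle-avoiding trajectory; a DRL policy approximates the same structure when the environment is too large or dynamic to enumerate.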
Funding: This work was supported by the National Key Research and Development Program of China (No. 2020YFB1807500), the National Natural Science Foundation of China (No. 62102297, No. 61902292), the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515110496), the Fundamental Research Funds for the Central Universities (No. XJS210105, No. XJS201502), and the Open Project of the Shaanxi Key Laboratory of Information Communication Network and Security (No. ICNS202005).
Abstract: Blockchain and multi-access edge computing (MEC) are two emerging, promising technologies that have received extensive attention from academia and industry. As a brand-new information storage, dissemination, and management mechanism, blockchain technology achieves the reliable transmission of data and value, while as a new computing paradigm, multi-access edge computing enables the high-frequency interaction and real-time transmission of data. The integration of communication and computing in blockchain-enabled multi-access edge computing networks has so far been studied without a systematic view. In this survey, we focus on the integration of communication and computing, explore the mutual empowerment and mutual promotion effects between blockchain and MEC, and introduce the resource integration architecture of blockchain and multi-access edge computing. The paper then summarizes the applications of the resource integration architecture, resource management, data sharing, incentive mechanisms, and consensus mechanisms, and analyzes corresponding applications in real-world scenarios. Finally, future challenges and potentially promising research directions are discussed and presented in detail.
Abstract: 5G is a new generation of mobile networking that aims to achieve unparalleled speed and performance. To accomplish this, three technologies have become a significant part of 5G, and this paper mainly discusses them: Device-to-Device communication (D2D), multi-access edge computing (MEC), and network function virtualization (NFV) with ClickOS. D2D enables direct communication between devices without relaying through a base station; in 5G, a two-tier cellular network composed of the traditional cellular system and D2D is an efficient method for realizing high-speed communication. MEC offloads work from end devices and cloud platforms to widespread nodes, and connects the nodes together with outside devices and third-party providers, in order to diminish the overloading effect on any single device caused by numerous applications and improve users' quality of experience (QoE). NFV is also employed to fulfill the 5G requirements; here, an optimized virtual machine for middleboxes named ClickOS is introduced and evaluated in several aspects. Some middleboxes have been implemented in ClickOS and shown to have outstanding performance.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) Information Technology Research Center (ITRC) grant funded by the Korea government (IITP-2025-RS-2021-II211816); by the National Research Foundation of Korea (NRF) grant (NRF-2023R1A2C1004453); by the European Union's HORIZON-JUSNS-2023 HE research and innovation program (6G-Path project, Grant No. 101139172); by the Horizon 2020 Research and Innovation Program (aerOS project, Grant No. 101069732); and by the ESTIMED project, conducted by ETSI Specialist Task Force 685 (STF 685) and funded by the European Commission (EC) and the European Free Trade Association (EFTA).
Abstract: Edge computing is swiftly gaining traction and is being standardised by the European Telecommunications Standards Institute (ETSI) as Multi-access Edge Computing (MEC). Simultaneously, oneM2M has been actively developing standards for dynamic data management and IoT services at the edge, particularly for applications that require real-time support and security. Integrating MEC and oneM2M offers a unique opportunity to maximize their individual strengths. Therefore, this article proposes a framework that integrates the MEC and oneM2M standard platforms for IoT applications, demonstrating how the synergy of these architectures can leverage the geographically distributed computing resources at base stations, enabling efficient deployment and added value for time-sensitive IoT applications. In addition, this study offers a concept of potential interworking models between the oneM2M and MEC architectures. The adoption of these standard architectures can enable various IoT edge services, such as smart city mobility and real-time analytics functions, by leveraging the oneM2M common service layer instantiated on the MEC host.
Abstract: Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
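The prioritized experience replay mentioned above is the standard proportional scheme: transitions with larger TD error get a higher sampling probability, and importance-sampling weights correct the induced bias. The sketch below computes that sampling distribution directly (no random draws), with illustrative alpha/beta values; it is a generic textbook version, not the paper's modified replay mechanism.

```python
# Proportional prioritized experience replay, sketched deterministically:
# P(i) ~ (|td_i| + eps)^alpha, with importance-sampling weights
# w_i = (N * P(i))^(-beta) normalised by the maximum weight.

def sample_distribution(td_errors, alpha=0.6, eps=1e-6):
    """Sampling probability for each stored transition."""
    prios = [(abs(e) + eps) ** alpha for e in td_errors]
    total = sum(prios)
    return [p / total for p in prios]

def is_weights(probs, beta=0.4):
    """Importance-sampling weights, normalised so the largest is 1."""
    n = len(probs)
    w = [(n * p) ** (-beta) for p in probs]
    m = max(w)
    return [x / m for x in w]

td = [0.1, 2.0, 0.5, 0.05]          # TD errors of four stored transitions
probs = sample_distribution(td)
weights = is_weights(probs)
print([round(p, 3) for p in probs])
```

The transition with the largest TD error (index 1) dominates the sampling distribution, while its importance weight is the smallest, so its gradient contribution is scaled down accordingly.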
Funding: Funded by the Fundamental Research Funds for the Central Universities (J2023-024, J2023-027).
文摘As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
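The NSGA-II algorithm named above rests on fast non-dominated sorting of candidate solutions by their objective vectors. The following minimal sketch implements that core step for (delay, energy) pairs to be minimised; the candidate values are invented, and this is the generic NSGA-II building block rather than the paper's full MRL pipeline.

```python
# Minimal fast non-dominated sort, the core of NSGA-II, applied to
# (delay, energy) objective pairs that are both to be minimised.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Return fronts as lists of indices; front 0 is the Pareto front."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices that i dominates
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_by[i].append(j)
    dom_count = [sum(1 for j in range(n)
                     if j != i and dominates(points[j], points[i]))
                 for i in range(n)]
    fronts, current = [], [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# (delay, energy) of four candidate offloading schedules (illustrative)
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 6.0)]
fronts = non_dominated_sort(pts)
print(fronts)
```

Candidates 0 and 1 trade delay against energy and are mutually non-dominated, so they form the Pareto front; NSGA-II ranks the population by these fronts before applying crowding-distance selection.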
Funding: Supported by the National Research Foundation (NRF) Singapore mid-sized center grant (NRF-MSG-2023-0002), the Frontier CRP grant (NRF-F-CRP-2024-0006), the A*STAR Singapore MTC RIE2025 project (M24W1NS005), the IAF-PP project (M23M5a0069), and the Ministry of Education (MOE) Singapore Tier 2 project (MOE-T2EP50220-0014).
Abstract: The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61473066, in part by the Natural Science Foundation of Hebei Province under Grant No. F2021501020, in part by the S&T Program of Qinhuangdao under Grant No. 202401A195, in part by the Science Research Project of Hebei Education Department under Grant No. QN2025008, and in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant No. 22567637H.
Abstract: Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
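The abstract notes that the decoupled subproblem reduces to a typical knapsack problem. As a hedged toy (the costs, values, and budget are invented, and the paper's exact mapping of offloading variables to knapsack items is not reproduced here), the classic 0/1 knapsack dynamic program looks like this:

```python
# Classic 0/1 knapsack DP, the form the decoupled subproblem is said to
# take: choose a subset of tasks under an integer energy budget so that
# the total value (e.g. energy saved by offloading) is maximised.

def knapsack(costs, values, budget):
    """Max total value with sum of chosen costs <= budget."""
    dp = [0] * (budget + 1)
    for c, v in zip(costs, values):
        # iterate the budget in reverse so each item is used at most once
        for b in range(budget, c - 1, -1):
            dp[b] = max(dp[b], dp[b - c] + v)
    return dp[budget]

costs = [3, 4, 2, 5]     # energy cost (integer units) of offloading each task
values = [4, 5, 3, 8]    # benefit of offloading each task
print(knapsack(costs, values, 9))
```

The DP runs in O(n * budget) time, which is why casting the offloading subproblem as a knapsack makes it cheap to solve exactly for integer budgets.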
Abstract: Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It also evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies and offers essential guidance for addressing upcoming management opportunities in next-generation SDN-enabled cloud, fog, and edge computing ecosystems.
Funding: Funded by the Hong Kong-Macao-Taiwan Science and Technology Cooperation Project of the Science and Technology Innovation Action Plan in Shanghai, China (23510760200), the Oriental Talent Youth Program of Shanghai, China (No. Y3DFRCZL01), the Outstanding Program of the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. Y2023080), and the Strategic Priority Research Program of the Chinese Academy of Sciences Category A (No. XDA0360404).
Abstract: The number of satellites, especially those operating in Low-Earth Orbit (LEO), has been exploding in recent years. Additionally, the burgeoning development of Artificial Intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared to terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous survey papers have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges faced, potential research directions for future OEC research are proposed.
Funding: Supported by the James M. Cox Foundation, the National Institute on Deafness and Other Communication Disorders (grant no. 1R21DC021029-01A1), Cox Enterprises Inc., the National Institute of Child Health and Human Development (grant no. AWD-006196-G1), and the Thrasher Research Fund Early Career Award Program.
Abstract: Bluetooth Low Energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
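Three-beacon BLE localization typically combines two standard steps: a log-distance path-loss model turning RSSI into a range estimate, and 2-D trilateration obtained by subtracting the circle equations pairwise to get a linear system. The sketch below shows both steps with illustrative constants (a 1 m reference power of -59 dBm and path-loss exponent 2); it is a generic textbook pipeline, not the released framework's code.

```python
# (1) log-distance path loss: rssi = tx_power - 10*n*log10(d)
# (2) 2-D trilateration from three beacons by linearising the circle
#     equations (x-xi)^2 + (y-yi)^2 = ri^2 into a 2x2 linear system.
import math

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Distance estimate (metres) from a single RSSI reading."""
    return 10 ** ((tx_power - rssi) / (10.0 * n))

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) by subtracting circle equations pairwise."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1                 # non-zero for non-collinear beacons
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Beacons at known positions; ranges consistent with a device at (2, 1).
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
target = (2.0, 1.0)
ranges = [math.dist(b, target) for b in beacons]
x, y = trilaterate(beacons[0], ranges[0], beacons[1], ranges[1],
                   beacons[2], ranges[2])
print(round(x, 6), round(y, 6))
```

With noiseless ranges the solution is exact; real RSSI noise is what drives the 2.94 m residual error reported above, and a least-squares fit over more than three beacons generalises this same linearisation.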
Funding: Funded by the Shandong University of Technology Doctoral Program in Science and Technology, grant number 4041422007.
Abstract: The rapid advance of Connected-Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption. We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address this problem. A novel dynamic clustering algorithm is designed based on vehicle mobility and task offloading efficiency requirements, where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
Funding: Supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number PNURSP2025R909, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements: 40% latency reduction, 25% throughput increase, 85% resource utilization (compared to 60% for heuristic methods), 40% reduction in energy consumption (300 vs. 500 J per task), and 50% improvement in scalability factor (1.8 vs. 1.2 for EDF) compared to state-of-the-art heuristic and meta-heuristic approaches. These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
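Priority-based scheduling of the kind described above can be sketched with a hand-tuned score in place of the learned one. In the toy below, each task's priority combines deadline urgency and criticality, tasks run serially in score order, and total weighted lateness is compared against FIFO; the scoring formula, weights, and task data are all invented stand-ins for the DRL-assigned priorities.

```python
# Illustrative priority-based scheduling on a single edge worker.
# The paper learns priorities with DRL; this fixed score is a stand-in.

def priority(task, w_deadline=1.0, w_crit=2.0):
    """Higher score = run sooner (tight deadline, high criticality)."""
    return w_deadline / task["deadline"] + w_crit * task["criticality"]

def run(tasks, order):
    """Serial execution; return total criticality-weighted lateness."""
    t, late = 0.0, 0.0
    for i in order:
        t += tasks[i]["exec"]
        late += tasks[i]["criticality"] * max(0.0, t - tasks[i]["deadline"])
    return late

tasks = [
    {"exec": 2.0, "deadline": 9.0, "criticality": 0.1},
    {"exec": 1.0, "deadline": 2.0, "criticality": 0.9},
    {"exec": 3.0, "deadline": 6.0, "criticality": 0.5},
]
by_priority = sorted(range(len(tasks)), key=lambda i: -priority(tasks[i]))
fifo = list(range(len(tasks)))
print(by_priority, run(tasks, by_priority), run(tasks, fifo))
```

Here priority order meets every deadline while FIFO misses the urgent, critical task; a DRL policy replaces the fixed weights with values adapted to observed system behavior.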
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62371012, and in part by the Beijing Natural Science Foundation under Grant 4252001.
Abstract: As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has brought an immersive user experience recently. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which reduces task processing latency and power consumption effectively and meets the quality of service requirements of vehicle users. However, there are still some problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV to minimize the average system cost is presented. Through joint consideration of computation offloading decisions and computational resource allocation, the optimization problem of our proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and an alternating direction method of multipliers-based distributed optimal scheme is developed. The simulation results illustrate that the presented scheme can enhance the system performance dramatically with regard to other schemes, and the convergence of the proposed scheme is also significant.
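The alternating direction method of multipliers (ADMM) named above follows a fixed three-step update pattern. The sketch below shows scaled-form ADMM on a deliberately tiny convex problem, minimise 0.5*(x-3)^2 + |z| subject to x = z, whose closed-form solution is the soft-threshold value x* = 2; it only illustrates the update pattern, not the paper's actual offloading formulation.

```python
# Scaled-form ADMM on a toy problem: min 0.5*(x-3)^2 + |z|  s.t.  x = z.
# The optimum is the soft-threshold value x* = z* = 2.

def soft(v, t):
    """Soft-thresholding: proximal operator of the absolute value."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm(a=3.0, lam=1.0, rho=1.0, iters=100):
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # x-minimisation step
        z = soft(x + u, lam / rho)              # z-minimisation step
        u = u + x - z                           # scaled dual update
    return x, z

x, z = admm()
print(round(x, 4), round(z, 4))
```

The same pattern generalises to the paper's setting: the non-convex offloading problem is first convexified, then split so each agent solves a small local subproblem while the dual update enforces consensus, which is what makes the scheme distributed.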
Funding: Supported by the Future Network Scientific Research Fund Project (FNSRFP-2021-ZD-4), the National Natural Science Foundation of China (No. 61991404, 61902182), the National Key Research and Development Program of China under Grant 2020YFB1600104, and the Key Research and Development Plan of Jiangsu Province under Grant BE2020084-2.
Abstract: With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. In order to tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes make ITS face challenges. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs) and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes the offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), one of the deep reinforcement learning algorithms, with centralized training and distributed execution, is proposed to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm has significant improvements over baseline algorithms in terms of system delay.
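The first stage above solves offloading via a potential game, a class of games in which best-response dynamics is guaranteed to reach a pure Nash equilibrium. The hedged toy below uses a simple congestion game (each task vehicle picks an edge node and pays that node's load) as a stand-in for the paper's actual utility functions; player counts and costs are invented.

```python
# Best-response dynamics in a toy congestion (potential) game: each of
# n_players picks one of n_nodes and pays the load on the chosen node.
# In a potential game these dynamics converge to a pure Nash equilibrium.

def best_response_dynamics(n_players, n_nodes, max_rounds=100):
    choice = [0] * n_players          # everyone starts on node 0
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            loads = [0] * n_nodes     # load seen by p, excluding p itself
            for q, c in enumerate(choice):
                if q != p:
                    loads[c] += 1
            best = min(range(n_nodes), key=lambda k: loads[k])
            if loads[best] + 1 < loads[choice[p]] + 1:   # strict improvement
                choice[p] = best
                changed = True
        if not changed:               # no player can deviate profitably
            return choice
    return choice

choice = best_response_dynamics(n_players=6, n_nodes=3)
loads = [choice.count(k) for k in range(3)]
print(choice, loads)
```

The dynamics settle on a perfectly balanced assignment, which is exactly the equilibrium property PG-MRL exploits before handing fine-grained power and subchannel control to the DDPG stage.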
Funding: Funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP23489127).
Abstract: The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. Prior works show a lack of explainability, challenges with imbalanced attack classes, and limited consideration of practical edge-cloud deployment strategies. In this study, we propose an impact-aware, taxonomy-driven machine learning framework with edge deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also supervised, taxonomy-based classification that groups 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to provide transparency and trust and to identify the important features that drive classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. This framework is highly accurate, interpretable, and applicable in real time, making it a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
Funding: Supported by the National Natural Science Foundation of China (No. 62231012), the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001, and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low Earth Orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading problem and resource allocation problem are formulated as a mixed integer nonlinear program (MINLP). This paper proposes a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, the expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence, and it significantly reduces the system utility values at considerable time cost compared with other algorithms.
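The Lagrange-multiplier allocation mentioned above has a well-known closed form for one common instance: splitting a server's total CPU frequency F across tasks with workloads c_i to minimise total delay sum(c_i / f_i). Stationarity of the Lagrangian gives f_i proportional to sqrt(c_i). The sketch below implements that textbook result; the workloads and budget are invented, and the paper's actual constraint set may differ.

```python
# Closed-form Lagrangian optimum of:  min sum(c_i / f_i)  s.t.  sum(f_i) = F.
# Setting d/df_i [ c_i/f_i + mu*f_i ] = 0 gives f_i = sqrt(c_i / mu),
# i.e. allocations proportional to the square root of each workload.
import math

def allocate(workloads, F):
    """Optimal frequency split across tasks for total-delay minimisation."""
    roots = [math.sqrt(c) for c in workloads]
    s = sum(roots)
    return [F * r / s for r in roots]

workloads = [1.0, 4.0, 9.0]          # CPU cycles required per task
f = allocate(workloads, F=12.0)
delay = sum(c / fi for c, fi in zip(workloads, f))
print([round(x, 3) for x in f], round(delay, 3))
```

Note the optimum beats the naive equal split (which would give total delay 3.5 here versus 3.0): heavier tasks get more than their share of CPU, but only in proportion to the square root of their workload.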
Funding: Supported by the National Natural Science Foundation of China (62271096, U20A20157), the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202000626), the Natural Science Foundation of Chongqing, China (cstc2020jcyjzdxmX0024), the University Innovation Research Group of Chongqing (CXQT20017), the Youth Innovation Group Support Program of ICE Discipline of CQUPT (SCIE-QN-2022-04), the Chongqing Postdoctoral Science Special Foundation (2021XM3058), and the Chongqing Postgraduate Research and Innovation Project under Grant CYB22250.
Abstract: Vehicular Edge Computing (VEC) enhances the quality of user services by deploying a wealth of resources near vehicles. However, due to the highly dynamic and complex nature of vehicular networks, centralized decision-making for resource allocation proves inadequate within VECs. Conversely, allocating resources via distributed decision-making consumes vehicular resources. To improve the quality of user service, we formulate a latency minimization problem and subdivide it into two subproblems to be solved through distributed decision-making. To mitigate the resource consumption caused by distributed decision-making, we propose a Reinforcement Learning (RL) algorithm based on a sequential alternating multi-agent system mechanism, which effectively reduces the dimensionality of the action space without losing the informational content of actions, achieving network lightweighting. We discuss the rationality, generalizability, and inherent advantages of the proposed mechanism. Simulation results indicate that our proposed mechanism outperforms traditional RL algorithms in terms of stability, generalizability, and adaptability to scenarios with invalid actions, all while achieving network lightweighting.
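The dimensionality reduction claimed above can be illustrated with a hedged toy: a single policy over the joint action space must consider |A|**n combinations, whereas agents acting sequentially, each conditioning on the choices already made, need only n passes over |A| actions. The greedy evaluation below is an invented stand-in for the learned policies, and the channel-collision objective is illustrative only.

```python
# Joint vs. sequential action selection for n agents with action set A.
# Joint search: |A|**n evaluations. Sequential: n * |A| evaluations, each
# agent conditioning on the choices made so far.
from itertools import product

def joint_search(n_agents, actions, score):
    """Exhaustive search over the joint action space."""
    best = max(product(actions, repeat=n_agents), key=score)
    return list(best), len(actions) ** n_agents

def sequential_search(n_agents, actions, score):
    """Agents decide in turn, holding earlier choices fixed."""
    chosen, evals = [], 0
    for i in range(n_agents):
        best_a = None, None
        for a in actions:
            # pad undecided agents with a default action for evaluation
            trial = chosen + [a] + [actions[0]] * (n_agents - i - 1)
            evals += 1
            s = score(trial)
            if best_a[1] is None or s > best_a[1]:
                best_a = a, s
        chosen.append(best_a[0])
    return chosen, evals

# Toy objective: spread agents across channels (penalise collisions).
def score(acts):
    return -sum(acts.count(a) - 1 for a in set(acts) if acts.count(a) > 1)

joint, joint_evals = joint_search(4, [0, 1, 2, 3], score)
seq, seq_evals = sequential_search(4, [0, 1, 2, 3], score)
print(score(joint), score(seq), joint_evals, seq_evals)
```

For 4 agents and 4 actions the sequential scheme needs 16 evaluations against 256 for the joint search while still reaching a collision-free assignment here, which is the sense in which the sequential alternating mechanism shrinks the action space without discarding the information carried by each agent's action.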