This paper presents an algorithm named the dependency-aware offloading framework(DeAOff),which is designed to optimize the deployment of Gen-AI decoder models in mobile edge computing(MEC)environments.These models,suc...This paper presents an algorithm named the dependency-aware offloading framework(DeAOff),which is designed to optimize the deployment of Gen-AI decoder models in mobile edge computing(MEC)environments.These models,such as decoders,pose significant challenges due to their interlayer dependencies and high computational demands,especially under edge resource constraints.To address these challenges,we propose a two-phase optimization algorithm that first handles dependencyaware task allocation and subsequently optimizes energy consumption.By modeling the inference process using directed acyclic graphs(DAGs)and applying constraint relaxation techniques,our approach effectively reduces execution latency and energy usage.Experimental results demonstrate that our method achieves a reduction of up to 20%in task completion time and approximately 30%savings in energy consumption compared to traditional methods.These outcomes underscore our solution’s robustness in managing complex sequential dependencies and dynamic MEC conditions,enhancing quality of service.Thus,our work presents a practical and efficient resource optimization strategy for deploying models in resourceconstrained MEC scenarios.展开更多
This paper investigates the traffic offloading optimization challenge in Space-Air-Ground Integrated Networks(SAGIN)through a novel Recursive Multi-Agent Proximal Policy Optimization(RMAPPO)algorithm.The exponential g...This paper investigates the traffic offloading optimization challenge in Space-Air-Ground Integrated Networks(SAGIN)through a novel Recursive Multi-Agent Proximal Policy Optimization(RMAPPO)algorithm.The exponential growth of mobile devices and data traffic has substantially increased network congestion,particularly in urban areas and regions with limited terrestrial infrastructure.Our approach jointly optimizes unmanned aerial vehicle(UAV)trajectories and satellite-assisted offloading strategies to simultaneously maximize data throughput,minimize energy consumption,and maintain equitable resource distribution.The proposed RMAPPO framework incorporates recurrent neural networks(RNNs)to model temporal dependencies in UAV mobility patterns and utilizes a decentralized multi-agent reinforcement learning architecture to reduce communication overhead while improving system robustness.The proposed RMAPPO algorithm was evaluated through simulation experiments,with the results indicating that it significantly enhances the cumulative traffic offloading rate of nodes and reduces the energy consumption of UAVs.展开更多
As Internet of Things(IoT)applications expand,Mobile Edge Computing(MEC)has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices.Edge-side computation offloading plays...As Internet of Things(IoT)applications expand,Mobile Edge Computing(MEC)has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices.Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies,conflicting objectives,and limited resources.This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC.We jointly consider task heterogeneity,high-dimensional objectives,and flexible resource scheduling,modeling the problem as a Many-objective optimization.To solve it,we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition(MOCC/D)and a flexible scheduling strategy.Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.展开更多
Vehicle Edge Computing(VEC)and Cloud Computing(CC)significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrain...Vehicle Edge Computing(VEC)and Cloud Computing(CC)significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Unit(RSU),thereby achieving lower delay and energy consumption.However,due to the limited storage capacity and energy budget of RSUs,it is challenging to meet the demands of the highly dynamic Internet of Vehicles(IoV)environment.Therefore,determining reasonable service caching and computation offloading strategies is crucial.To address this,this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading.By modeling the dynamic optimization problem using Markov Decision Processes(MDP),the scheme jointly optimizes task delay,energy consumption,load balancing,and privacy entropy to achieve better quality of service.Additionally,a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed.Each Double Deep Q-Network(DDQN)agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks(RBFN),thereby efficiently approximating the Pareto-optimal decisions for multiple objectives.Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud,edge,and vehicles.Compared to existing algorithms,the proposed method reduces task delay and energy consumption by 10.64%and 5.1%,respectively.展开更多
In the field of edge computing,achieving low-latency computational task offloading with limited resources is a critical research challenge,particularly in resource-constrained and latency-sensitive vehicular network e...In the field of edge computing,achieving low-latency computational task offloading with limited resources is a critical research challenge,particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications.In scenarios where edge servers are sparsely deployed,the lack of coordination and information sharing often leads to load imbalance,thereby increasing system latency.Furthermore,in regions without edge server coverage,tasks must be processed locally,which further exacerbates latency issues.To address these challenges,we propose a novel and efficient Deep Reinforcement Learning(DRL)-based approach aimed at minimizing average task latency.The proposed method incorporates three offloading strategies:local computation,direct offloading to the edge server in local region,and device-to-device(D2D)-assisted offloading to edge servers in other regions.We formulate the task offloading process as a complex latency minimization optimization problem.To solve it,we propose an advanced algorithm based on the Dueling Double Deep Q-Network(D3QN)architecture and incorporating the Prioritized Experience Replay(PER)mechanism.Experimental results demonstrate that,compared with existing offloading algorithms,the proposed method significantly reduces average task latency,enhances user experience,and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.展开更多
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the...As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.展开更多
The advent of the internet-of-everything era has led to the increased use of mobile edge computing.The rise of artificial intelligence has provided many possibilities for the low-latency task-offloading demands of use...The advent of the internet-of-everything era has led to the increased use of mobile edge computing.The rise of artificial intelligence has provided many possibilities for the low-latency task-offloading demands of users,but existing technologies rigidly assume that there is only one task to be offloaded in each time slot at the terminal.In practical scenarios,there are often numerous computing tasks to be executed at the terminal,leading to a cumulative delay for subsequent task offloading.Therefore,the efficient processing of multiple computing tasks on the terminal has become highly challenging.To address the lowlatency offloading requirements for multiple computational tasks on terminal devices,we propose a terminal multitask parallel offloading algorithm based on deep reinforcement learning.Specifically,we first establish a mobile edge computing system model consisting of a single edge server and multiple terminal users.We then model the task offloading decision problem as a Markov decision process,and solve this problem using the Dueling Deep-Q Network algorithm to obtain the optimal offloading strategy.Experimental results demonstrate that,under the same constraints,our proposed algorithm reduces the average system latency.展开更多
Fog computing is a key enabling technology of 6G systems as it provides quick and reliable computing,and data storage services which are required for several 6G applications.Artificial Intelligence(AI)algorithms will ...Fog computing is a key enabling technology of 6G systems as it provides quick and reliable computing,and data storage services which are required for several 6G applications.Artificial Intelligence(AI)algorithms will be an integral part of 6G systems and efficient task offloading techniques using fog computing will improve their performance and reliability.In this paper,the focus is on the scenario of Partial Offloading of a Task to Multiple Helpers(POMH)in which larger tasks are divided into smaller subtasks and processed in parallel,hence expediting task completion.However,using POMH presents challenges such as breaking tasks into subtasks and scaling these subtasks based on many interdependent factors to ensure that all subtasks of a task finish simultaneously,preventing resource wastage.Additionally,applying matching theory to POMH scenarios results in dynamic preference profiles of helping devices due to changing subtask sizes,resulting in a difficult-to-solve,externalities problem.This paper introduces a novel many-to-one matching-based algorithm,designed to address the externalities problem and optimize resource allocation within POMH scenarios.Additionally,we propose a new time-efficient preference profiling technique that further enhances time optimization in POMH scenarios.The performance of the proposed technique is thoroughly evaluated in comparison to alternate baseline schemes,revealing many advantages of the proposed approach.The simulation findings indisputably show that the proposed matching-based offloading technique outperforms existing methodologies in the literature,yielding a remarkable 52 reduction in task latency,particularly under high workloads.展开更多
Blockchain technology,based on decentralized data storage and distributed consensus design,has become a promising solution to address data security risks and provide privacy protection in the Internet-of-Things(IoT)du...Blockchain technology,based on decentralized data storage and distributed consensus design,has become a promising solution to address data security risks and provide privacy protection in the Internet-of-Things(IoT)due to its tamper-proof and non-repudiation features.Although blockchain typically does not require the endorsement of third-party trust organizations,it mostly needs to perform necessary mathematical calculations to prevent malicious attacks,which results in stricter requirements for computation resources on the participating devices.By offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud,while providing data privacy protection for IoT applications,it can effectively address the limitations of computation and energy resources in IoT devices.However,how to make reasonable offloading decisions for IoT devices remains an open issue.Due to the excellent self-learning ability of Reinforcement Learning(RL),this paper proposes a RL enabled Swarm Intelligence Optimization Algorithm(RLSIOA)that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions.The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms(e.g.,Proof-of-Work),it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud to maximize the revenue of devices.Experimental results show that RLSIOA can obtain higher-quality offloading decision-making schemes at lower latency costs compared to representative benchmark algorithms.展开更多
Recent advances in integrating Digital Twins(DTs)with Heterogeneous Vehicular Networks(HetVNets)enhance decision-making and improve network performance.Additionally,developments in Mobile Edge Computing(MEC)support th...Recent advances in integrating Digital Twins(DTs)with Heterogeneous Vehicular Networks(HetVNets)enhance decision-making and improve network performance.Additionally,developments in Mobile Edge Computing(MEC)support the computational demands of DTs.However,the decentralized nature of MEC systems introduces security challenges and traditional HetVNets fail to efficiently integrate diverse computing and network resources,limiting their ability to handle services for vehicles.This paper presents a novel service request offloading framework for DT-HetVNets to address these issues.In this framework,we design utility functions for vehicles and infrastructures to maximize satisfaction of their requirements through data synchronization and decision-making between DTs and entities.Furthermore,we propose a new honestly based distributed PoA(HDPoA)via scalable work.The interactions between infrastructures and vehicles are modeled as a multi-leader multi-follower(MLMF)game,and we develop a dynamic iterative algorithm to achieve the Nash equilibrium(NE)of the proposed game-theoretic model.Experimental results validate the effectiveness and accuracy of our scheme.展开更多
Low Earth orbit(LEO)satellite networks have the advantages of low transmission delay and low deployment cost,playing an important role in providing reliable services to ground users.This paper studies an efficient int...Low Earth orbit(LEO)satellite networks have the advantages of low transmission delay and low deployment cost,playing an important role in providing reliable services to ground users.This paper studies an efficient inter-satellite cooperative computation offloading(ICCO)algorithm for LEO satellite networks.Specifically,an ICCO system model is constructed,which considers using neighboring satellites in the LEO satellite networks to collaboratively process tasks generated by ground user terminals,effectively improving resource utilization efficiency.Additionally,the optimization objective of minimizing the system task computation offloading delay and energy consumption is established,which is decoupled into two sub-problems.In terms of computational resource allocation,the convexity of the problem is proved through theoretical derivation,and the Lagrange multiplier method is used to obtain the optimal solution of computational resources.To deal with the task offloading decision,a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration.Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.展开更多
With the development of vehicle networks and the construction of roadside units,Vehicular Ad Hoc Networks(VANETs)are increasingly promoting cooperative computing patterns among vehicles.Vehicular edge computing(VEC)of...With the development of vehicle networks and the construction of roadside units,Vehicular Ad Hoc Networks(VANETs)are increasingly promoting cooperative computing patterns among vehicles.Vehicular edge computing(VEC)offers an effective solution to mitigate resource constraints by enabling task offloading to edge cloud infrastructure,thereby reducing the computational burden on connected vehicles.However,this sharing-based and distributed computing paradigm necessitates ensuring the credibility and reliability of various computation nodes.Existing vehicular edge computing platforms have not adequately considered themisbehavior of vehicles.We propose a practical task offloading algorithm based on reputation assessment to address the task offloading problem in vehicular edge computing under an unreliable environment.This approach integrates deep reinforcement learning and reputation management to address task offloading challenges.Simulation experiments conducted using Veins demonstrate the feasibility and effectiveness of the proposed method.展开更多
Low Earth Orbit(LEO)satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources.Existing studies integrate...Low Earth Orbit(LEO)satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources.Existing studies integrate edge computing with LEO satellite networks to optimize task offloading;however,they often overlook the impact of frequent topology changes,unstable transmission links,and intermittent satellite visibility,leading to task execution failures and increased latency.To address these issues,this paper proposes a dynamic integrated spaceground computing framework that optimizes task offloading under LEO satellite mobility constraints.We design an adaptive task migration strategy through inter-satellite links when target satellites become inaccessible.To enhance data transmission reliability,we introduce a communication stability constraint based on transmission bit error rate(BER).Additionally,we develop a genetic algorithm(GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption.Our approach jointly considers satellite computing capacity,link stability,and task execution reliability to achieve efficient task offloading.Experimental results demonstrate that the proposed method significantly improves task execution success rates,reduces system overhead,and enhances overall computational efficiency in LEO satellite networks.展开更多
This paper introduces a quantum-enhanced edge computing framework that synergizes quantuminspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environmen...This paper introduces a quantum-enhanced edge computing framework that synergizes quantuminspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments.This innovative approach not only significantly improves the system’s real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things(IoT)ecosystems—such as high demand variability,resource allocation uncertainties,and data privacy concerns—through practical solutions.Initially,the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states,complemented by online learning models for precise predictive analytics.Secondly,it accelerates the search for optimal solutions using Grover’s algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates,thereby markedly enhancing the practicality and robustness of the proposed solution.Furthermore,to bolster the system’s adaptability and response speed in dynamic environments,an efficientmonitoring mechanism and event-driven architecture are incorporated,ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems.Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios,characterized by faster convergence,enhanced stability,and superior data privacy protection,alongside notable reductions in latency and optimized resource utilization.This research paves the way for transformative advancements in edge computing and IoT technologies,driving smart edge computing towards unprecedented levels of intelligence and automation.展开更多
The rapid advance of Connected-Automated Vehicles(CAVs)has led to the emergence of diverse delaysensitive and energy-constrained vehicular applications.Given the high dynamics of vehicular networks,unmanned aerial veh...The rapid advance of Connected-Automated Vehicles(CAVs)has led to the emergence of diverse delaysensitive and energy-constrained vehicular applications.Given the high dynamics of vehicular networks,unmanned aerial vehicles-assisted mobile edge computing(UAV-MEC)has gained attention in providing computing resources to vehicles and optimizing system costs.We model the computing offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption.We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm(DVCG-MWOA)to address this problem.A novel dynamic clustering algorithm is designed based on vehiclemobility and task offloading efficiency requirements,where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehiclemovement.Within eachUAV-led cluster,cooperative game theory is applied to allocate computing resourceswhile respecting delay constraints,ensuring efficient resource utilization.To enhance offloading efficiency,we improve the multi-objective whale optimization algorithm(MOWOA),resulting in the MWOA.This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device,ultimately achieving a Pareto-optimal solution set for delay and energy consumption.Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches,offering superior performance for vehicular networks.展开更多
Cybertwin-enabled 6th Generation(6G)network is envisioned to support artificial intelligence-native management to meet changing demands of 6G applications.Multi-Agent Deep Reinforcement Learning(MADRL)technologies dri...Cybertwin-enabled 6th Generation(6G)network is envisioned to support artificial intelligence-native management to meet changing demands of 6G applications.Multi-Agent Deep Reinforcement Learning(MADRL)technologies driven by Cybertwins have been proposed for adaptive task offloading strategies.However,the existence of random transmission delay between Cybertwin-driven agents and underlying networks is not considered in related works,which destroys the standard Markov property and increases the decision reaction time to reduce the task offloading strategy performance.In order to address this problem,we propose a pipelining task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process(MDP).Then,we design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption.Firstly,the state space is augmented using the lastly-received state and historical actions to rebuild the Markov property.Secondly,Gate Transformer-XL is introduced to capture historical actions'importance and maintain the consistent input dimension dynamically changed due to random transmission delays.Thirdly,a sampling method and a new loss function with the difference between the current and target state value and the difference between real state-action value and augmented state-action value are designed to obtain state transition trajectories close to the real ones.Numerical results demonstrate that the proposed methods are effective in reducing reaction time and improving the task offloading performance in the random-delay Cybertwin-enabled 6G networks.展开更多
In task offloading,the movement of vehicles causes the switching of connected RSUs and servers,which may lead to task offloading failure or high service delay.In this paper,we analyze the impact of vehicle movements o...In task offloading,the movement of vehicles causes the switching of connected RSUs and servers,which may lead to task offloading failure or high service delay.In this paper,we analyze the impact of vehicle movements on task offloading and reveal that data preparation time for task execution can be minimized via forward-looking scheduling.Then,a Bi-LSTM-based model is proposed to predict the trajectories of vehicles.The service area is divided into several equal-sized grids.If the actual position of the vehicle and the predicted position by the model belong to the same grid,the prediction is considered correct,thereby reducing the difficulty of vehicle trajectory prediction.Moreover,we propose a scheduling strategy for delay optimization based on the vehicle trajectory prediction.Considering the inevitable prediction error,we take some edge servers around the predicted area as candidate execution servers and the data required for task execution are backed up to these candidate servers,thereby reducing the impact of prediction deviations on task offloading and converting the modest increase of resource overheads into delay reduction in task offloading.Simulation results show that,compared with other classical schemes,the proposed strategy has lower average task offloading delays.展开更多
For better flexibility and greater coverage areas,Unmanned Aerial Vehicles(UAVs)have been applied in Flying Mobile Edge Computing(F-MEC)systems to offer offloading services for the User Equipment(UEs).This paper consi...For better flexibility and greater coverage areas,Unmanned Aerial Vehicles(UAVs)have been applied in Flying Mobile Edge Computing(F-MEC)systems to offer offloading services for the User Equipment(UEs).This paper considers a disaster-affected scenario where UAVs undertake the role of MEC servers to provide computing resources for Disaster Relief Devices(DRDs).Considering the fairness of DRDs,a max-min problem is formulated to optimize the saved time by jointly designing the trajectory of the UAVs,the offloading policy and serving time under the constraint of the UAVs'energy capacity.To solve the above non-convex problem,we first model the service process as a Markov Decision Process(MDP)with the Reward Shaping(RS)technique,and then propose a Deep Reinforcement Learning(DRL)based algorithm to find the optimal solution for the MDP.Simulations show that the proposed RS-DRL algorithm is valid and effective,and has better performance than the baseline algorithms.展开更多
Edge computing has transformed smart grids by lowering latency,reducing network congestion,and enabling real-time decision-making.Nevertheless,devising an optimal task-offloading strategy remains challenging,as it mus...Edge computing has transformed smart grids by lowering latency,reducing network congestion,and enabling real-time decision-making.Nevertheless,devising an optimal task-offloading strategy remains challenging,as it must jointly minimise energy consumption and response time under fluctuating workloads and volatile network conditions.We cast the offloading problem as aMarkov Decision Process(MDP)and solve it with Deep Reinforcement Learning(DRL).Specifically,we present a three-tier architecture—end devices,edge nodes,and a cloud server—and enhance Proximal Policy Optimization(PPO)to learn adaptive,energy-aware policies.A Convolutional Neural Network(CNN)extracts high-level features from system states,enabling the agent to respond continually to changing conditions.Extensive simulations show that the proposed method reduces task latency and energy consumption far more than several baseline algorithms,thereby improving overall system performance.These results demonstrate the effectiveness and robustness of the framework for real-time task offloading in dynamic smart-grid environments.展开更多
Multispectral low earth orbit(LEO)satel-lites are characterized by a large volume of captured data and high spatial resolution,which can provide rich image information and data support for a vari-ety of fields,but it ...Multispectral low earth orbit(LEO)satel-lites are characterized by a large volume of captured data and high spatial resolution,which can provide rich image information and data support for a vari-ety of fields,but it is difficult for them to satisfy low-delay and low-energy consumed task processing re-quirements due to their limited computing resources.To address the above problems,this paper presents the LEO satellites cooperative task offloading and computing resource allocation(LEOC-TC)algorithm.Firstly,a LEO satellites cooperative task offloading system was designed so that the multispectral LEO satellites in the system could leave their tasks locally or offload them to other LEO satellites with servers for processing,thus providing high-quality information-processing services for multispectral LEO satellites.Secondly,an optimization problem with the objective of minimizing the weighted sum of the total task pro-cessing delay and total energy consumed for multi-spectral LEO satellite is established,and the optimiza-tion problem is split into an offloading ratio subprob-lem and a computing resource subproblem.Finally,Bernoulli mapping tuna swarm optimization algorithm is used to solve the above two sub-problems separately in order to satisfy the demand of low delay and low energy consumed by the system.Simulation results show that the total task processing cost of the LEOCTC algorithm can be reduced by 63.32%,66.67%,and 80.72%compared to the random offloading ratio algorithm,the average resource offloading algorithm,and the local computing algorithm,respectively.展开更多
文摘This paper presents an algorithm named the dependency-aware offloading framework(DeAOff),which is designed to optimize the deployment of Gen-AI decoder models in mobile edge computing(MEC)environments.These models,such as decoders,pose significant challenges due to their interlayer dependencies and high computational demands,especially under edge resource constraints.To address these challenges,we propose a two-phase optimization algorithm that first handles dependencyaware task allocation and subsequently optimizes energy consumption.By modeling the inference process using directed acyclic graphs(DAGs)and applying constraint relaxation techniques,our approach effectively reduces execution latency and energy usage.Experimental results demonstrate that our method achieves a reduction of up to 20%in task completion time and approximately 30%savings in energy consumption compared to traditional methods.These outcomes underscore our solution’s robustness in managing complex sequential dependencies and dynamic MEC conditions,enhancing quality of service.Thus,our work presents a practical and efficient resource optimization strategy for deploying models in resourceconstrained MEC scenarios.
文摘This paper investigates the traffic offloading optimization challenge in Space-Air-Ground Integrated Networks(SAGIN)through a novel Recursive Multi-Agent Proximal Policy Optimization(RMAPPO)algorithm.The exponential growth of mobile devices and data traffic has substantially increased network congestion,particularly in urban areas and regions with limited terrestrial infrastructure.Our approach jointly optimizes unmanned aerial vehicle(UAV)trajectories and satellite-assisted offloading strategies to simultaneously maximize data throughput,minimize energy consumption,and maintain equitable resource distribution.The proposed RMAPPO framework incorporates recurrent neural networks(RNNs)to model temporal dependencies in UAV mobility patterns and utilizes a decentralized multi-agent reinforcement learning architecture to reduce communication overhead while improving system robustness.The proposed RMAPPO algorithm was evaluated through simulation experiments,with the results indicating that it significantly enhances the cumulative traffic offloading rate of nodes and reduces the energy consumption of UAVs.
基金supported by Youth Talent Project of Scientific Research Program of Hubei Provincial Department of Education under Grant Q20241809Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
文摘As Internet of Things(IoT)applications expand,Mobile Edge Computing(MEC)has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices.Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies,conflicting objectives,and limited resources.This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC.We jointly consider task heterogeneity,high-dimensional objectives,and flexible resource scheduling,modeling the problem as a Many-objective optimization.To solve it,we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition(MOCC/D)and a flexible scheduling strategy.Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
基金supported by Key Science and Technology Program of Henan Province,China(Grant Nos.242102210147,242102210027)Fujian Province Young and Middle aged Teacher Education Research Project(Science and Technology Category)(No.JZ240101)(Corresponding author:Dong Yuan).
文摘Vehicle Edge Computing(VEC)and Cloud Computing(CC)significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Unit(RSU),thereby achieving lower delay and energy consumption.However,due to the limited storage capacity and energy budget of RSUs,it is challenging to meet the demands of the highly dynamic Internet of Vehicles(IoV)environment.Therefore,determining reasonable service caching and computation offloading strategies is crucial.To address this,this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading.By modeling the dynamic optimization problem using Markov Decision Processes(MDP),the scheme jointly optimizes task delay,energy consumption,load balancing,and privacy entropy to achieve better quality of service.Additionally,a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed.Each Double Deep Q-Network(DDQN)agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks(RBFN),thereby efficiently approximating the Pareto-optimal decisions for multiple objectives.Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud,edge,and vehicles.Compared to existing algorithms,the proposed method reduces task delay and energy consumption by 10.64%and 5.1%,respectively.
基金supported by the National Natural Science Foundation of China(62202215)Liaoning Province Applied Basic Research Program(Youth Special Project,2023JH2/101600038)+4 种基金Shenyang Youth Science and Technology Innovation Talent Support Program(RC220458)Guangxuan Program of Shenyang Ligong University(SYLUGXRC202216)the Basic Research Special Funds for Undergraduate Universities in Liaoning Province(LJ212410144067)the Natural Science Foundation of Liaoning Province(2024-MS-113)the science and technology funds from Liaoning Education Department(LJKZ0242).
文摘In the field of edge computing,achieving low-latency computational task offloading with limited resources is a critical research challenge,particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications.In scenarios where edge servers are sparsely deployed,the lack of coordination and information sharing often leads to load imbalance,thereby increasing system latency.Furthermore,in regions without edge server coverage,tasks must be processed locally,which further exacerbates latency issues.To address these challenges,we propose a novel and efficient Deep Reinforcement Learning(DRL)-based approach aimed at minimizing average task latency.The proposed method incorporates three offloading strategies:local computation,direct offloading to the edge server in local region,and device-to-device(D2D)-assisted offloading to edge servers in other regions.We formulate the task offloading process as a complex latency minimization optimization problem.To solve it,we propose an advanced algorithm based on the Dueling Double Deep Q-Network(D3QN)architecture and incorporating the Prioritized Experience Replay(PER)mechanism.Experimental results demonstrate that,compared with existing offloading algorithms,the proposed method significantly reduces average task latency,enhances user experience,and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
基金funded by the Fundamental Research Funds for the Central Universities(J2023-024,J2023-027).
文摘As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
基金supported by the National Natural Science Foundation of China(62202215)Liaoning Province Applied Basic Research Program(Youth Special Project,2023JH2/101600038)+2 种基金Shenyang Youth Science and Technology Innovation Talent Support Program(RC220458)Guangxuan Program of Shenyang Ligong University(SYLUGXRC202216)the Basic Research Special Funds for Undergraduate Universities in Liaoning Province(LJ212410144067).
文摘The advent of the internet-of-everything era has led to the increased use of mobile edge computing.The rise of artificial intelligence has provided many possibilities for the low-latency task-offloading demands of users,but existing technologies rigidly assume that there is only one task to be offloaded in each time slot at the terminal.In practical scenarios,there are often numerous computing tasks to be executed at the terminal,leading to a cumulative delay for subsequent task offloading.Therefore,the efficient processing of multiple computing tasks on the terminal has become highly challenging.To address the lowlatency offloading requirements for multiple computational tasks on terminal devices,we propose a terminal multitask parallel offloading algorithm based on deep reinforcement learning.Specifically,we first establish a mobile edge computing system model consisting of a single edge server and multiple terminal users.We then model the task offloading decision problem as a Markov decision process,and solve this problem using the Dueling Deep-Q Network algorithm to obtain the optimal offloading strategy.Experimental results demonstrate that,under the same constraints,our proposed algorithm reduces the average system latency.
基金supported and funded by theDeanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University(IMSIU)(grant number IMSIU-RP23082).
文摘Fog computing is a key enabling technology of 6G systems as it provides quick and reliable computing,and data storage services which are required for several 6G applications.Artificial Intelligence(AI)algorithms will be an integral part of 6G systems and efficient task offloading techniques using fog computing will improve their performance and reliability.In this paper,the focus is on the scenario of Partial Offloading of a Task to Multiple Helpers(POMH)in which larger tasks are divided into smaller subtasks and processed in parallel,hence expediting task completion.However,using POMH presents challenges such as breaking tasks into subtasks and scaling these subtasks based on many interdependent factors to ensure that all subtasks of a task finish simultaneously,preventing resource wastage.Additionally,applying matching theory to POMH scenarios results in dynamic preference profiles of helping devices due to changing subtask sizes,resulting in a difficult-to-solve,externalities problem.This paper introduces a novel many-to-one matching-based algorithm,designed to address the externalities problem and optimize resource allocation within POMH scenarios.Additionally,we propose a new time-efficient preference profiling technique that further enhances time optimization in POMH scenarios.The performance of the proposed technique is thoroughly evaluated in comparison to alternate baseline schemes,revealing many advantages of the proposed approach.The simulation findings indisputably show that the proposed matching-based offloading technique outperforms existing methodologies in the literature,yielding a remarkable 52 reduction in task latency,particularly under high workloads.
基金supported by the Project of Science and Technology Research Program of Chongqing Education Commission of China(No.KJZD-K202401105)High-Quality Development Action Plan for Graduate Education at Chongqing University of Technology(No.gzljg2023308,No.gzljd2024204)+1 种基金the Graduate Innovation Program of Chongqing University of Technology(No.gzlcx20233197)Yunnan Provincial Key R&D Program(202203AA080006).
文摘Blockchain technology,based on decentralized data storage and distributed consensus design,has become a promising solution to address data security risks and provide privacy protection in the Internet-of-Things(IoT)due to its tamper-proof and non-repudiation features.Although blockchain typically does not require the endorsement of third-party trust organizations,it mostly needs to perform necessary mathematical calculations to prevent malicious attacks,which results in stricter requirements for computation resources on the participating devices.By offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud,while providing data privacy protection for IoT applications,it can effectively address the limitations of computation and energy resources in IoT devices.However,how to make reasonable offloading decisions for IoT devices remains an open issue.Due to the excellent self-learning ability of Reinforcement Learning(RL),this paper proposes a RL enabled Swarm Intelligence Optimization Algorithm(RLSIOA)that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions.The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms(e.g.,Proof-of-Work),it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud to maximize the revenue of devices.Experimental results show that RLSIOA can obtain higher-quality offloading decision-making schemes at lower latency costs compared to representative benchmark algorithms.
基金supported by the National Natural Science Foundation of China(No 62371250)the Natural Science Foundation on Frontier Leading Technology Basic Research Project of Jiangsu(No BK20212001)the Jiangsu Natural Science Foundation for Distinguished Young Scholars(No BK20220054).
文摘Recent advances in integrating Digital Twins(DTs)with Heterogeneous Vehicular Networks(HetVNets)enhance decision-making and improve network performance.Additionally,developments in Mobile Edge Computing(MEC)support the computational demands of DTs.However,the decentralized nature of MEC systems introduces security challenges and traditional HetVNets fail to efficiently integrate diverse computing and network resources,limiting their ability to handle services for vehicles.This paper presents a novel service request offloading framework for DT-HetVNets to address these issues.In this framework,we design utility functions for vehicles and infrastructures to maximize satisfaction of their requirements through data synchronization and decision-making between DTs and entities.Furthermore,we propose a new honestly based distributed PoA(HDPoA)via scalable work.The interactions between infrastructures and vehicles are modeled as a multi-leader multi-follower(MLMF)game,and we develop a dynamic iterative algorithm to achieve the Nash equilibrium(NE)of the proposed game-theoretic model.Experimental results validate the effectiveness and accuracy of our scheme.
基金supported in part by Sub Project of National Key Research and Development plan in 2020 NO.2020YFC1511704Beijing Information Science and Technology University NO.2020KYNH212,NO.2021CGZH302+1 种基金Beijing Science and Technology Project(Grant No.Z211100004421009)in part by the National Natural Science Foundation of China(Grant No.62301058).
文摘Low Earth orbit(LEO)satellite networks have the advantages of low transmission delay and low deployment cost,playing an important role in providing reliable services to ground users.This paper studies an efficient inter-satellite cooperative computation offloading(ICCO)algorithm for LEO satellite networks.Specifically,an ICCO system model is constructed,which considers using neighboring satellites in the LEO satellite networks to collaboratively process tasks generated by ground user terminals,effectively improving resource utilization efficiency.Additionally,the optimization objective of minimizing the system task computation offloading delay and energy consumption is established,which is decoupled into two sub-problems.In terms of computational resource allocation,the convexity of the problem is proved through theoretical derivation,and the Lagrange multiplier method is used to obtain the optimal solution of computational resources.To deal with the task offloading decision,a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration.Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.
基金supported by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness(No.HNTS2022020)the Science and Technology Research Program of Henan Province of China(232102210134,182102210130)Key Research Projects of Henan Provincial Universities(25B520005).
文摘With the development of vehicle networks and the construction of roadside units,Vehicular Ad Hoc Networks(VANETs)are increasingly promoting cooperative computing patterns among vehicles.Vehicular edge computing(VEC)offers an effective solution to mitigate resource constraints by enabling task offloading to edge cloud infrastructure,thereby reducing the computational burden on connected vehicles.However,this sharing-based and distributed computing paradigm necessitates ensuring the credibility and reliability of various computation nodes.Existing vehicular edge computing platforms have not adequately considered themisbehavior of vehicles.We propose a practical task offloading algorithm based on reputation assessment to address the task offloading problem in vehicular edge computing under an unreliable environment.This approach integrates deep reinforcement learning and reputation management to address task offloading challenges.Simulation experiments conducted using Veins demonstrate the feasibility and effectiveness of the proposed method.
基金supported by Guangdong Basic and Applied Basic Research Project(No.2025A1515012874)Foundation of Yunnan Key Laboratory of Service Computing(No.YNSC24115)+5 种基金Research Project of Pazhou Lab for Excellent Young Scholars(No.PZL2021KF0024)Guangdong Undergraduate Teaching Quality and Teaching Reform ProjectUniversity Research Project of Guangzhou Education Bureau(No.2024312189)Guangzhou Basic and Applied Basic Research Project(No.SL2024A03J00397)National Natural Science Foundation of China(No.62272113)Guangzhou Basic Research Program(No.2024A03J0398)。
文摘Low Earth Orbit(LEO)satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources.Existing studies integrate edge computing with LEO satellite networks to optimize task offloading;however,they often overlook the impact of frequent topology changes,unstable transmission links,and intermittent satellite visibility,leading to task execution failures and increased latency.To address these issues,this paper proposes a dynamic integrated spaceground computing framework that optimizes task offloading under LEO satellite mobility constraints.We design an adaptive task migration strategy through inter-satellite links when target satellites become inaccessible.To enhance data transmission reliability,we introduce a communication stability constraint based on transmission bit error rate(BER).Additionally,we develop a genetic algorithm(GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption.Our approach jointly considers satellite computing capacity,link stability,and task execution reliability to achieve efficient task offloading.Experimental results demonstrate that the proposed method significantly improves task execution success rates,reduces system overhead,and enhances overall computational efficiency in LEO satellite networks.
基金supported by National Natural Science Foundation of China(Nos.62071481 and 61501471).
文摘This paper introduces a quantum-enhanced edge computing framework that synergizes quantuminspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments.This innovative approach not only significantly improves the system’s real-time responsiveness and resource utilization efficiency but also addresses critical challenges in Internet of Things(IoT)ecosystems—such as high demand variability,resource allocation uncertainties,and data privacy concerns—through practical solutions.Initially,the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states,complemented by online learning models for precise predictive analytics.Secondly,it accelerates the search for optimal solutions using Grover’s algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates,thereby markedly enhancing the practicality and robustness of the proposed solution.Furthermore,to bolster the system’s adaptability and response speed in dynamic environments,an efficientmonitoring mechanism and event-driven architecture are incorporated,ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems.Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios,characterized by faster convergence,enhanced stability,and superior data privacy protection,alongside notable reductions in latency and optimized resource utilization.This research paves the way for transformative advancements in edge computing and IoT technologies,driving smart edge computing towards unprecedented levels of intelligence and automation.
基金funded by Shandong University of Technology Doctoral Program in Science and Technology,grant number 4041422007.
文摘The rapid advance of Connected-Automated Vehicles(CAVs)has led to the emergence of diverse delaysensitive and energy-constrained vehicular applications.Given the high dynamics of vehicular networks,unmanned aerial vehicles-assisted mobile edge computing(UAV-MEC)has gained attention in providing computing resources to vehicles and optimizing system costs.We model the computing offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption.We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm(DVCG-MWOA)to address this problem.A novel dynamic clustering algorithm is designed based on vehiclemobility and task offloading efficiency requirements,where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehiclemovement.Within eachUAV-led cluster,cooperative game theory is applied to allocate computing resourceswhile respecting delay constraints,ensuring efficient resource utilization.To enhance offloading efficiency,we improve the multi-objective whale optimization algorithm(MOWOA),resulting in the MWOA.This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device,ultimately achieving a Pareto-optimal solution set for delay and energy consumption.Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches,offering superior performance for vehicular networks.
基金funded by the National Key Research and Development Program of China under Grant 2019YFB1803301Beijing Natural Science Foundation (L202002)。
文摘Cybertwin-enabled 6th Generation(6G)network is envisioned to support artificial intelligence-native management to meet changing demands of 6G applications.Multi-Agent Deep Reinforcement Learning(MADRL)technologies driven by Cybertwins have been proposed for adaptive task offloading strategies.However,the existence of random transmission delay between Cybertwin-driven agents and underlying networks is not considered in related works,which destroys the standard Markov property and increases the decision reaction time to reduce the task offloading strategy performance.In order to address this problem,we propose a pipelining task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process(MDP).Then,we design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption.Firstly,the state space is augmented using the lastly-received state and historical actions to rebuild the Markov property.Secondly,Gate Transformer-XL is introduced to capture historical actions'importance and maintain the consistent input dimension dynamically changed due to random transmission delays.Thirdly,a sampling method and a new loss function with the difference between the current and target state value and the difference between real state-action value and augmented state-action value are designed to obtain state transition trajectories close to the real ones.Numerical results demonstrate that the proposed methods are effective in reducing reaction time and improving the task offloading performance in the random-delay Cybertwin-enabled 6G networks.
基金supported in part by the National Science Foundation of China(Grant No.62172450)the Key R&D Plan of Hunan Province(Grant No.2022GK2008)the Nature Science Foundation of Hunan Province(Grant No.2020JJ4756)。
文摘In task offloading,the movement of vehicles causes the switching of connected RSUs and servers,which may lead to task offloading failure or high service delay.In this paper,we analyze the impact of vehicle movements on task offloading and reveal that data preparation time for task execution can be minimized via forward-looking scheduling.Then,a Bi-LSTM-based model is proposed to predict the trajectories of vehicles.The service area is divided into several equal-sized grids.If the actual position of the vehicle and the predicted position by the model belong to the same grid,the prediction is considered correct,thereby reducing the difficulty of vehicle trajectory prediction.Moreover,we propose a scheduling strategy for delay optimization based on the vehicle trajectory prediction.Considering the inevitable prediction error,we take some edge servers around the predicted area as candidate execution servers and the data required for task execution are backed up to these candidate servers,thereby reducing the impact of prediction deviations on task offloading and converting the modest increase of resource overheads into delay reduction in task offloading.Simulation results show that,compared with other classical schemes,the proposed strategy has lower average task offloading delays.
基金supported by the Key Research and Development Program of Jiangsu Province(No.BE2020084-2)the National Key Research and Development Program of China(No.2020YFB1600104)。
文摘For better flexibility and greater coverage areas,Unmanned Aerial Vehicles(UAVs)have been applied in Flying Mobile Edge Computing(F-MEC)systems to offer offloading services for the User Equipment(UEs).This paper considers a disaster-affected scenario where UAVs undertake the role of MEC servers to provide computing resources for Disaster Relief Devices(DRDs).Considering the fairness of DRDs,a max-min problem is formulated to optimize the saved time by jointly designing the trajectory of the UAVs,the offloading policy and serving time under the constraint of the UAVs'energy capacity.To solve the above non-convex problem,we first model the service process as a Markov Decision Process(MDP)with the Reward Shaping(RS)technique,and then propose a Deep Reinforcement Learning(DRL)based algorithm to find the optimal solution for the MDP.Simulations show that the proposed RS-DRL algorithm is valid and effective,and has better performance than the baseline algorithms.
基金supported by the National Natural Science Foundation of China(Grant No.62103349)the Henan Province Science and Technology Research Project(Grant No.232102210104).
文摘Edge computing has transformed smart grids by lowering latency,reducing network congestion,and enabling real-time decision-making.Nevertheless,devising an optimal task-offloading strategy remains challenging,as it must jointly minimise energy consumption and response time under fluctuating workloads and volatile network conditions.We cast the offloading problem as aMarkov Decision Process(MDP)and solve it with Deep Reinforcement Learning(DRL).Specifically,we present a three-tier architecture—end devices,edge nodes,and a cloud server—and enhance Proximal Policy Optimization(PPO)to learn adaptive,energy-aware policies.A Convolutional Neural Network(CNN)extracts high-level features from system states,enabling the agent to respond continually to changing conditions.Extensive simulations show that the proposed method reduces task latency and energy consumption far more than several baseline algorithms,thereby improving overall system performance.These results demonstrate the effectiveness and robustness of the framework for real-time task offloading in dynamic smart-grid environments.
基金supported in part by Sub Project of National Key Research and Development plan in 2020(No.2020YFC1511704)scientific research level improvement project to promote the colleges connotation development of Beijing Information Science&Technology University(No.2020KYNH212,No.2021CGZH302)in part by the National Natural Science Foundation of China(Grant No.61971048).
文摘Multispectral low earth orbit(LEO)satel-lites are characterized by a large volume of captured data and high spatial resolution,which can provide rich image information and data support for a vari-ety of fields,but it is difficult for them to satisfy low-delay and low-energy consumed task processing re-quirements due to their limited computing resources.To address the above problems,this paper presents the LEO satellites cooperative task offloading and computing resource allocation(LEOC-TC)algorithm.Firstly,a LEO satellites cooperative task offloading system was designed so that the multispectral LEO satellites in the system could leave their tasks locally or offload them to other LEO satellites with servers for processing,thus providing high-quality information-processing services for multispectral LEO satellites.Secondly,an optimization problem with the objective of minimizing the weighted sum of the total task pro-cessing delay and total energy consumed for multi-spectral LEO satellite is established,and the optimiza-tion problem is split into an offloading ratio subprob-lem and a computing resource subproblem.Finally,Bernoulli mapping tuna swarm optimization algorithm is used to solve the above two sub-problems separately in order to satisfy the demand of low delay and low energy consumed by the system.Simulation results show that the total task processing cost of the LEOCTC algorithm can be reduced by 63.32%,66.67%,and 80.72%compared to the random offloading ratio algorithm,the average resource offloading algorithm,and the local computing algorithm,respectively.