Journal Articles
1,661 articles found.
1. DeAOff: Dependence-Aware Offloading of Decoder-Based Generative Models for Edge Computing
Authors: Ning Jiahong, Yang Tingting, Zheng Ce, Wang Xinghan, Feng Ping, Zhang Xiufeng. China Communications, 2025, Issue 7, pp. 14-29 (16 pages).
This paper presents an algorithm named the dependency-aware offloading framework (DeAOff), which is designed to optimize the deployment of Gen-AI decoder models in mobile edge computing (MEC) environments. These models, such as decoders, pose significant challenges due to their interlayer dependencies and high computational demands, especially under edge resource constraints. To address these challenges, we propose a two-phase optimization algorithm that first handles dependency-aware task allocation and subsequently optimizes energy consumption. By modeling the inference process using directed acyclic graphs (DAGs) and applying constraint relaxation techniques, our approach effectively reduces execution latency and energy usage. Experimental results demonstrate that our method achieves a reduction of up to 20% in task completion time and approximately 30% savings in energy consumption compared to traditional methods. These outcomes underscore our solution's robustness in managing complex sequential dependencies and dynamic MEC conditions, enhancing quality of service. Thus, our work presents a practical and efficient resource optimization strategy for deploying models in resource-constrained MEC scenarios.
Keywords: dependency-aware offloading (DeAOff); directed acyclic graph (DAG); generative AI (Gen-AI); mobile edge computing (MEC)
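Not from the paper above: a minimal Python sketch of the general idea its abstract describes, where decoder layers form a DAG and each layer is greedily placed on the device or the edge in topological order, paying a hypothetical transfer penalty when consecutive layers land on different sides. Layer names, costs, and the penalty are made-up illustrations, not DeAOff itself.

```python
# Minimal sketch (not the authors' DeAOff): place the layers of a decoder,
# expressed as a DAG, onto {device, edge} in topological order so that every
# layer starts only after all of its predecessors have finished.
from graphlib import TopologicalSorter

# Hypothetical per-layer compute cost (ms) on the device and on the edge server,
# and a fixed transfer penalty when consecutive layers run on different sides.
layers = {"embed": [], "dec1": ["embed"], "dec2": ["dec1"], "head": ["dec2"]}
cost = {"embed": (4, 1), "dec1": (20, 5), "dec2": (20, 5), "head": (6, 2)}  # (device, edge)
transfer = 3.0

def greedy_dependency_aware_placement(layers, cost, transfer):
    finish = {}        # layer -> (finish_time, placement)
    for layer in TopologicalSorter(layers).static_order():
        ready = max((finish[p][0] for p in layers[layer]), default=0.0)
        best = None
        for side in ("device", "edge"):
            run = cost[layer][0] if side == "device" else cost[layer][1]
            # pay a transfer penalty if any predecessor ran on the other side
            hop = transfer if any(finish[p][1] != side for p in layers[layer]) else 0.0
            t = ready + hop + run
            if best is None or t < best[0]:
                best = (t, side)
        finish[layer] = best
    return finish

print(greedy_dependency_aware_placement(layers, cost, transfer))
```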
2. Recurrent MAPPO for Joint UAV Trajectory and Traffic Offloading in Space-Air-Ground Integrated Networks
Authors: Zheyuan Jia, Fenglin Jin, Jun Xie, Yuan He. Computers, Materials & Continua, 2026, Issue 1, pp. 447-461 (15 pages).
This paper investigates the traffic offloading optimization challenge in Space-Air-Ground Integrated Networks (SAGIN) through a novel Recursive Multi-Agent Proximal Policy Optimization (RMAPPO) algorithm. The exponential growth of mobile devices and data traffic has substantially increased network congestion, particularly in urban areas and regions with limited terrestrial infrastructure. Our approach jointly optimizes unmanned aerial vehicle (UAV) trajectories and satellite-assisted offloading strategies to simultaneously maximize data throughput, minimize energy consumption, and maintain equitable resource distribution. The proposed RMAPPO framework incorporates recurrent neural networks (RNNs) to model temporal dependencies in UAV mobility patterns and utilizes a decentralized multi-agent reinforcement learning architecture to reduce communication overhead while improving system robustness. The proposed RMAPPO algorithm was evaluated through simulation experiments, with the results indicating that it significantly enhances the cumulative traffic offloading rate of nodes and reduces the energy consumption of UAVs.
Keywords: space-air-ground integrated networks; UAV; traffic offloading; reinforcement learning
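Illustrative only, not the authors' RMAPPO: a toy numpy GRU cell showing the kind of recurrent update such policies use to carry temporal context across UAV decision steps. All dimensions and weights are arbitrary.

```python
# Minimal numpy GRU cell (illustrative only): the kind of recurrent unit that
# RMAPPO-style policies use to keep a hidden state across decision steps.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 6, 8                       # observation and hidden sizes (hypothetical)
Wz, Uz = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wr, Ur = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wh, Uh = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h):
    z = sigmoid(Wz @ x + Uz @ h)           # update gate
    r = sigmoid(Wr @ x + Ur @ h)           # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde       # new hidden state

h = np.zeros(d_h)
for t in range(5):                          # roll the memory over 5 toy observations
    h = gru_step(rng.normal(size=d_in), h)
print(h.round(3))
```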
3. High-Dimensional Multi-Objective Computation Offloading for MEC in Serial Isomerism Tasks via Flexible Optimization Framework
Authors: Zheng Yao, Puqing Chang. Computers, Materials & Continua, 2026, Issue 1, pp. 1160-1177 (18 pages).
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Keywords: edge computing; offload; serial isomerism applications; many-objective optimization; flexible resource scheduling
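A generic building block often used in decomposition-based many-objective optimizers such as the MOCC/D named above (the paper may use a different scalarization): the Tchebycheff function that turns an objective vector into a single subproblem score. All objective values and weights below are hypothetical.

```python
# Tchebycheff decomposition, a common scalarization in decomposition-based
# many-objective optimizers (the paper's MOCC/D may use a different one).
import numpy as np

def tchebycheff(f, weights, z_star):
    """g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|  -- smaller is better."""
    return np.max(weights * np.abs(f - z_star))

# Hypothetical 4-objective offloading costs (delay, energy, load, price) for two plans.
z_star = np.array([1.0, 0.5, 0.1, 2.0])          # ideal point seen so far
w      = np.array([0.4, 0.3, 0.1, 0.2])          # one subproblem's weight vector
plan_a = np.array([2.0, 0.9, 0.3, 2.5])
plan_b = np.array([1.5, 1.4, 0.2, 2.2])
print(tchebycheff(plan_a, w, z_star), tchebycheff(plan_b, w, z_star))
```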
4. A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, Issue 1, pp. 2111-2136 (26 pages).
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem using a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: deep reinforcement learning; Internet of Vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
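A hedged sketch of two ingredients the abstract names, with made-up numbers: a scalar reward formed from per-objective rewards and externally supplied weights, and the Double-DQN backup target. The RBFN-based weight learning is not reproduced here.

```python
# Minimal sketch of a Double-DQN backup with a dynamically weighted multi-objective
# reward (delay, energy, load, privacy). The paper learns the weights with an RBFN;
# here they are just given numbers, and the toy Q-functions ignore the state.
import numpy as np

def weighted_reward(objective_rewards, weights):
    return float(np.dot(weights, objective_rewards))

def double_dqn_target(r, s_next, q_online, q_target, gamma=0.99):
    """y = r + gamma * Q_target(s', argmax_a Q_online(s', a))"""
    a_star = int(np.argmax(q_online(s_next)))
    return r + gamma * q_target(s_next)[a_star]

# Toy Q-functions over 3 offloading actions (local / edge / cloud).
q_online = lambda s: np.array([1.0, 2.5, 2.0])
q_target = lambda s: np.array([1.1, 2.2, 2.4])
r = weighted_reward(np.array([-0.8, -0.3, -0.1, 0.2]),   # per-objective rewards
                    np.array([0.4, 0.3, 0.2, 0.1]))      # current objective weights
print(double_dqn_target(r, s_next=None, q_online=q_online, q_target=q_target))
```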
5. DRL-Based Cross-Regional Computation Offloading Algorithm
Authors: Lincong Zhang, Yuqing Liu, Kefeng Wei, Weinan Zhao, Bo Qian. Computers, Materials & Continua, 2026, Issue 1, pp. 901-918 (18 pages).
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture and incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
Keywords: edge computing; computational task offloading; deep reinforcement learning; D3QN; device-to-device communication; system latency optimization
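Generic versions of the two mechanisms named in the abstract, not the paper's implementation: the dueling aggregation of state value and advantages, and prioritized-replay sampling probabilities computed from TD errors. Numbers are placeholders.

```python
# Two building blocks named in the abstract, sketched with numpy: the dueling
# aggregation Q = V + (A - mean(A)) and prioritized sampling p_i ~ (|td_i| + eps)^alpha.
import numpy as np

def dueling_q(value, advantages):
    return value + (advantages - advantages.mean())

def per_sampling_probs(td_errors, alpha=0.6, eps=1e-3):
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

# Per-action advantages: local / same-region edge / D2D-assisted (illustrative).
adv = np.array([0.2, 1.1, -0.4])
print(dueling_q(value=3.0, advantages=adv))
print(per_sampling_probs(np.array([0.05, 1.2, 0.4, 0.9])))
```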
6. Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning (Cited by 1)
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu. Computers, Materials & Continua, 2025, Issue 2, pp. 2609-2635 (27 pages).
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
Keywords: edge computing; adaptive; META; task offloading; joint optimization
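A small helper of the kind NSGA-II relies on (illustrative, not the paper's code): extracting the non-dominated front from candidate (delay, energy) pairs, both to be minimized. The candidate values are invented.

```python
# Extracting the non-dominated (Pareto) front over (delay, energy) pairs -- the basic
# operation behind NSGA-II-style joint optimization (minimization in both objectives).
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical offloading plans scored as (delay, energy).
candidates = [(12.0, 3.1), (10.5, 3.6), (10.5, 2.9), (15.0, 2.8), (11.0, 4.0)]
print(pareto_front(candidates))   # plans no other plan beats on both objectives
```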
7. Terminal Multitask Parallel Offloading Algorithm Based on Deep Reinforcement Learning
Authors: Zhang Lincong, Li Yang, Zhao Weinan, Liu Xiangyu, Guo Lei. China Communications, 2025, Issue 7, pp. 30-43 (14 pages).
The advent of the internet-of-everything era has led to the increased use of mobile edge computing. The rise of artificial intelligence has provided many possibilities for the low-latency task-offloading demands of users, but existing technologies rigidly assume that there is only one task to be offloaded in each time slot at the terminal. In practical scenarios, there are often numerous computing tasks to be executed at the terminal, leading to a cumulative delay for subsequent task offloading. Therefore, the efficient processing of multiple computing tasks on the terminal has become highly challenging. To address the low-latency offloading requirements for multiple computational tasks on terminal devices, we propose a terminal multitask parallel offloading algorithm based on deep reinforcement learning. Specifically, we first establish a mobile edge computing system model consisting of a single edge server and multiple terminal users. We then model the task offloading decision problem as a Markov decision process, and solve this problem using the Dueling Deep-Q Network algorithm to obtain the optimal offloading strategy. Experimental results demonstrate that, under the same constraints, our proposed algorithm reduces the average system latency.
Keywords: deep reinforcement learning; mobile edge computing; multitask parallel offloading; task offloading
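Not from the paper: a minimal epsilon-greedy selection over the per-task action space {local, offload to edge} that DQN-family agents such as the Dueling DQN above typically use during training; the Q-values are placeholders.

```python
# Epsilon-greedy action selection over a per-task offloading decision, the
# exploration rule commonly paired with (Dueling) DQN agents; values are illustrative.
import random

ACTIONS = ["local", "offload_to_edge"]     # per-task decision in a single-server model

def epsilon_greedy(q_values, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(len(q_values))                    # explore
    return max(range(len(q_values)), key=lambda a: q_values[a])   # exploit

q = [-4.2, -3.1]                            # estimated negative latency of each action
print(ACTIONS[epsilon_greedy(q)])
```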
8. Reliable Task Offloading for 6G-Based IoT Applications
Authors: Usman Mahmood Malik, Muhammad Awais Javed, Ahmad Naseem Alvi, Mohammed Alkhathami. Computers, Materials & Continua, 2025, Issue 2, pp. 2255-2274 (20 pages).
Fog computing is a key enabling technology of 6G systems as it provides the quick and reliable computing and data storage services that are required for several 6G applications. Artificial Intelligence (AI) algorithms will be an integral part of 6G systems, and efficient task offloading techniques using fog computing will improve their performance and reliability. In this paper, the focus is on the scenario of Partial Offloading of a Task to Multiple Helpers (POMH), in which larger tasks are divided into smaller subtasks and processed in parallel, hence expediting task completion. However, using POMH presents challenges such as breaking tasks into subtasks and scaling these subtasks based on many interdependent factors to ensure that all subtasks of a task finish simultaneously, preventing resource wastage. Additionally, applying matching theory to POMH scenarios results in dynamic preference profiles of helping devices due to changing subtask sizes, resulting in a difficult-to-solve externalities problem. This paper introduces a novel many-to-one matching-based algorithm designed to address the externalities problem and optimize resource allocation within POMH scenarios. Additionally, we propose a new time-efficient preference profiling technique that further enhances time optimization in POMH scenarios. The performance of the proposed technique is thoroughly evaluated in comparison to alternate baseline schemes, revealing many advantages of the proposed approach. The simulation findings indisputably show that the proposed matching-based offloading technique outperforms existing methodologies in the literature, yielding a remarkable 52% reduction in task latency, particularly under high workloads.
Keywords: 6G; IoT; task offloading; fog computing
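For orientation only: plain task-proposing deferred acceptance for a many-to-one matching with static preferences and quotas. The paper's contribution is precisely the harder case where preferences change with subtask sizes (externalities), which this sketch deliberately ignores; all task and helper names are invented.

```python
# Plain many-to-one deferred acceptance (task-proposing) with *static* preferences.
def deferred_acceptance(task_prefs, helper_prefs, quota):
    rank = {h: {t: i for i, t in enumerate(p)} for h, p in helper_prefs.items()}
    matched = {h: [] for h in helper_prefs}
    nxt = {t: 0 for t in task_prefs}                  # next helper each task proposes to
    free = list(task_prefs)
    while free:
        t = free.pop()
        if nxt[t] >= len(task_prefs[t]):
            continue                                  # list exhausted: task stays unmatched
        h = task_prefs[t][nxt[t]]; nxt[t] += 1
        matched[h].append(t)
        if len(matched[h]) > quota[h]:                # helper keeps only its quota of best tasks
            matched[h].sort(key=lambda x: rank[h][x])
            free.append(matched[h].pop())             # reject the least preferred one
    return matched

task_prefs   = {"t1": ["h1", "h2"], "t2": ["h1", "h2"], "t3": ["h2", "h1"]}
helper_prefs = {"h1": ["t2", "t1", "t3"], "h2": ["t1", "t3", "t2"]}
print(deferred_acceptance(task_prefs, helper_prefs, quota={"h1": 1, "h2": 2}))
```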
9. Reinforcement learning-enabled swarm intelligence method for computation task offloading in Internet-of-Things blockchain
Authors: Zhuo Chen, Jiahuan Yi, Yang Zhou, Wei Luo. Digital Communications and Networks, 2025, Issue 3, pp. 912-924 (13 pages).
Blockchain technology, based on decentralized data storage and distributed consensus design, has become a promising solution to address data security risks and provide privacy protection in the Internet-of-Things (IoT) due to its tamper-proof and non-repudiation features. Although blockchain typically does not require the endorsement of third-party trust organizations, it mostly needs to perform necessary mathematical calculations to prevent malicious attacks, which results in stricter requirements for computation resources on the participating devices. By offloading the computation tasks required to support blockchain consensus to edge service nodes or the cloud, while providing data privacy protection for IoT applications, it can effectively address the limitations of computation and energy resources in IoT devices. However, how to make reasonable offloading decisions for IoT devices remains an open issue. Due to the excellent self-learning ability of Reinforcement Learning (RL), this paper proposes an RL-enabled Swarm Intelligence Optimization Algorithm (RLSIOA) that aims to improve the quality of initial solutions and achieve efficient optimization of computation task offloading decisions. The algorithm considers various factors that may affect the revenue obtained by IoT devices executing consensus algorithms (e.g., Proof-of-Work), and it optimizes the proportion of sub-tasks to be offloaded and the scale of computing resources to be rented from the edge and cloud to maximize the revenue of devices. Experimental results show that RLSIOA can obtain higher-quality offloading decision-making schemes at lower latency costs compared to representative benchmark algorithms.
Keywords: blockchain; task offloading; swarm intelligence; reinforcement learning
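A canonical particle swarm update, shown as generic background for the swarm-intelligence part of RLSIOA; the RL-driven initialization and improvements described in the abstract are not shown, and the decision variables and bounds are hypothetical.

```python
# Canonical particle swarm step over a continuous decision vector per particle:
# (offload ratio, rented compute fraction), both clipped to [0, 1] afterwards.
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

x = rng.random((5, 2)); v = np.zeros((5, 2))   # 5 particles, 2 decision variables
pbest, gbest = x.copy(), x[0]                   # placeholder personal/global bests
x, v = pso_step(x, v, pbest, gbest)
print(np.clip(x, 0.0, 1.0))
```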
10. Multi-Leader Multi-Follower Stackelberg Game-Based Offloading Strategy for Blockchain-Enabled DT-HetVNets
Authors: Zhao Haitao, Yang Dexian, Wang Qin, Zhu Hongbo, Cai Yan. China Communications, 2025, Issue 11, pp. 223-241 (19 pages).
Recent advances in integrating Digital Twins (DTs) with Heterogeneous Vehicular Networks (HetVNets) enhance decision-making and improve network performance. Additionally, developments in Mobile Edge Computing (MEC) support the computational demands of DTs. However, the decentralized nature of MEC systems introduces security challenges, and traditional HetVNets fail to efficiently integrate diverse computing and network resources, limiting their ability to handle services for vehicles. This paper presents a novel service request offloading framework for DT-HetVNets to address these issues. In this framework, we design utility functions for vehicles and infrastructures to maximize satisfaction of their requirements through data synchronization and decision-making between DTs and entities. Furthermore, we propose a new honesty-based distributed PoA (HDPoA) via scalable work. The interactions between infrastructures and vehicles are modeled as a multi-leader multi-follower (MLMF) game, and we develop a dynamic iterative algorithm to achieve the Nash equilibrium (NE) of the proposed game-theoretic model. Experimental results validate the effectiveness and accuracy of our scheme.
Keywords: blockchain; digital twins; service offloading; Stackelberg game; vehicular networks
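A toy alternating best-response loop on a made-up single-leader, single-follower pricing game, shown only to illustrate the fixed-point style of iteration used to reach a Nash equilibrium; the paper's multi-leader multi-follower game and utility functions are far richer than this sketch.

```python
# Alternating best responses on discretized strategy grids with hypothetical utilities:
# the leader picks a price, the follower picks an offloading demand, until a fixed point.
import numpy as np

prices  = np.linspace(0.0, 1.0, 101)     # leader (infrastructure) strategy grid
demands = np.linspace(0.0, 1.0, 101)     # follower (vehicle) strategy grid

u_leader   = lambda p, d: p * d - p ** 2           # revenue minus quadratic cost (made up)
u_follower = lambda p, d: d * (1 - d) - p * d      # concave benefit minus payment (made up)

p, d = 0.8, 0.1
for _ in range(30):                                 # iterate best responses
    d = demands[np.argmax(u_follower(p, demands))]
    p = prices[np.argmax(u_leader(prices, d))]
print(round(p, 2), round(d, 2))                     # converges to about (0.2, 0.4)
```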
11. A Study for Inter-Satellite Cooperative Computation Offloading in LEO Satellite Networks
Authors: Gang Yuanshuo, Zhang Yuexia, Wu Peng, Zheng Hui, Fan Guangteng. China Communications, 2025, Issue 2, pp. 12-25 (14 pages).
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which considers using neighboring satellites in the LEO satellite networks to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system task computation offloading delay and energy consumption is established, which is decoupled into two sub-problems. In terms of computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal solution of computational resources. To deal with the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration. Simulation results show that the ICCO algorithm can effectively reduce the delay and energy consumption.
Keywords: computation offloading; inter-satellite cooperation; LEO satellite networks
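Plain binary PSO for an offloading-decision vector, as generic context for the abstract: velocities map through a sigmoid to the probability that a task's "offload" bit is 1. The paper's dynamic sticky mechanism is omitted, and the problem sizes are arbitrary.

```python
# Plain binary PSO step: each bit means "offload this task to a neighboring satellite".
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

def binary_pso_step(bits, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    r1, r2 = rng.random(bits.shape), rng.random(bits.shape)
    v = w * v + c1 * r1 * (pbest - bits) + c2 * r2 * (gbest - bits)
    new_bits = (rng.random(bits.shape) < sigmoid(v)).astype(int)   # sample bits
    return new_bits, v

bits = rng.integers(0, 2, size=(4, 6))      # 4 particles x 6 tasks, 1 = offload
v = np.zeros_like(bits, dtype=float)
pbest, gbest = bits.copy(), bits[0]          # placeholder personal/global bests
bits, v = binary_pso_step(bits, v, pbest, gbest)
print(bits)
```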
12. A Task Offloading Method for Vehicular Edge Computing Based on Reputation Assessment
Authors: Jun Li, Yawei Dong, Liang Ni, Guopeng Feng, Fangfang Shan. Computers, Materials & Continua, 2025, Issue 5, pp. 3537-3552 (16 pages).
With the development of vehicle networks and the construction of roadside units, Vehicular Ad Hoc Networks (VANETs) are increasingly promoting cooperative computing patterns among vehicles. Vehicular edge computing (VEC) offers an effective solution to mitigate resource constraints by enabling task offloading to edge cloud infrastructure, thereby reducing the computational burden on connected vehicles. However, this sharing-based and distributed computing paradigm necessitates ensuring the credibility and reliability of various computation nodes. Existing vehicular edge computing platforms have not adequately considered the misbehavior of vehicles. We propose a practical task offloading algorithm based on reputation assessment to address the task offloading problem in vehicular edge computing under an unreliable environment. This approach integrates deep reinforcement learning and reputation management to address task offloading challenges. Simulation experiments conducted using Veins demonstrate the feasibility and effectiveness of the proposed method.
Keywords: vehicular edge computing; task offloading; reputation assessment
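The abstract does not specify the reputation model, so this sketch assumes a common choice in vehicular-trust work, the beta reputation built from counts of good and bad interactions; the node names and counts are invented.

```python
# One common way to score node trust (not necessarily the paper's model):
# a beta reputation from counts of correct (positive) and faulty (negative) task results.
def beta_reputation(positive, negative):
    return (positive + 1) / (positive + negative + 2)   # mean of Beta(positive+1, negative+1)

history = {"rsu_1": (48, 2), "veh_7": (3, 5)}           # hypothetical interaction counts
scores = {node: beta_reputation(p, n) for node, (p, n) in history.items()}
print(scores)   # higher score -> more trustworthy offloading target
```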
13. Resilient task offloading in integrated satellite-terrestrial networks with mobility-induced variability
Authors: Kongyang Chen, Guomin Liang, Hongfa Zhang, Waixi Liu, Jiaxing Shen. Digital Communications and Networks, 2025, Issue 6, pp. 1961-1972 (12 pages).
Low Earth Orbit (LEO) satellites have gained significant attention for their low-latency communication and computing capabilities but face challenges due to high mobility and limited resources. Existing studies integrate edge computing with LEO satellite networks to optimize task offloading; however, they often overlook the impact of frequent topology changes, unstable transmission links, and intermittent satellite visibility, leading to task execution failures and increased latency. To address these issues, this paper proposes a dynamic integrated space-ground computing framework that optimizes task offloading under LEO satellite mobility constraints. We design an adaptive task migration strategy through inter-satellite links when target satellites become inaccessible. To enhance data transmission reliability, we introduce a communication stability constraint based on the transmission bit error rate (BER). Additionally, we develop a genetic algorithm (GA)-based task scheduling method that dynamically allocates computing resources while minimizing latency and energy consumption. Our approach jointly considers satellite computing capacity, link stability, and task execution reliability to achieve efficient task offloading. Experimental results demonstrate that the proposed method significantly improves task execution success rates, reduces system overhead, and enhances overall computational efficiency in LEO satellite networks.
Keywords: LEO satellites; task offloading; edge computing; communication reliability
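A bare-bones genetic algorithm for a task-to-satellite assignment, to show the general style of GA-based scheduler the abstract describes; the cost matrix, operators, and parameters are deliberately simplistic placeholders rather than the paper's design.

```python
# Toy GA: evolve an assignment of 8 tasks to 3 satellites that minimizes a
# hypothetical weighted latency-plus-energy cost.
import random

TASKS, SATS = 8, 3
random.seed(0)
COST = [[random.uniform(1, 10) for _ in range(SATS)] for _ in range(TASKS)]

def fitness(assignment):                      # lower total cost is better
    return -sum(COST[i][s] for i, s in enumerate(assignment))

def crossover(a, b):
    cut = random.randrange(1, TASKS)          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(a, rate=0.1):
    return [random.randrange(SATS) if random.random() < rate else g for g in a]

pop = [[random.randrange(SATS) for _ in range(TASKS)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)       # keep the fittest half as parents
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]

best = max(pop, key=fitness)
print(best, -fitness(best))                   # best assignment and its total cost
```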
14. Quantum-Enhanced Edge Offloading and Resource Scheduling with Privacy-Preserving Machine Learning
Authors: Junjie Cao, Zhiyong Yu, Xiaotao Xu, Baohong Zhu, Jian Yang. Computers, Materials & Continua, 2025, Issue 6, pp. 5235-5257 (23 pages).
This paper introduces a quantum-enhanced edge computing framework that synergizes quantum-inspired algorithms with advanced machine learning techniques to optimize real-time task offloading in edge computing environments. This innovative approach not only significantly improves the system's real-time responsiveness and resource utilization efficiency but also addresses, through practical solutions, critical challenges in Internet of Things (IoT) ecosystems such as high demand variability, resource allocation uncertainties, and data privacy concerns. Initially, the framework employs an adaptive adjustment mechanism to dynamically manage task and resource states, complemented by online learning models for precise predictive analytics. Secondly, it accelerates the search for optimal solutions using Grover's algorithm while efficiently evaluating complex constraints through multi-controlled Toffoli gates, thereby markedly enhancing the practicality and robustness of the proposed solution. Furthermore, to bolster the system's adaptability and response speed in dynamic environments, an efficient monitoring mechanism and event-driven architecture are incorporated, ensuring timely responses to environmental changes and maintaining synchronization between internal and external systems. Experimental evaluations confirm that the proposed algorithm demonstrates superior performance in complex application scenarios, characterized by faster convergence, enhanced stability, and superior data privacy protection, alongside notable reductions in latency and optimized resource utilization. This research paves the way for transformative advancements in edge computing and IoT technologies, driving smart edge computing towards unprecedented levels of intelligence and automation.
Keywords: edge offloading; resource scheduling; machine learning; privacy protection
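A classical state-vector simulation of Grover iterations over 16 hypothetical candidate plans with one marked solution, illustrating why roughly (pi/4)*sqrt(N) oracle calls suffice; the paper's multi-controlled-Toffoli constraint oracles are not modeled here.

```python
# Classical simulation of Grover's search over N = 2^4 candidates with one marked index.
import numpy as np

n_qubits, marked = 4, 11
N = 2 ** n_qubits
state = np.full(N, 1 / np.sqrt(N))                 # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))    # ~3 for N = 16
for _ in range(iterations):
    state[marked] *= -1                            # oracle: flip the marked amplitude
    state = 2 * state.mean() - state               # diffusion: inversion about the mean

print(iterations, round(float(state[marked] ** 2), 3))   # success probability ~0.96
```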
15. A Multi-Objective Joint Task Offloading Scheme for Vehicular Edge Computing
Authors: Yiwei Zhang, Xin Cui, Qinghui Zhao. Computers, Materials & Continua, 2025, Issue 8, pp. 2355-2373 (19 pages).
The rapid advance of Connected-Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention in providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption. We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address this problem. A novel dynamic clustering algorithm is designed based on vehicle mobility and task offloading efficiency requirements, where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
Keywords: vehicular edge computing; cooperative game theory; multi-objective optimization; computation offloading
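The core single-objective Whale Optimization Algorithm updates (shrinking encirclement plus the spiral move) that MOWOA/MWOA-style methods extend; the multi-objective archive handling and the paper's improvements are not shown, and the |A| test is simplified to a per-vector check.

```python
# Basic WOA position updates for a small swarm of candidate offloading plans.
import numpy as np

rng = np.random.default_rng(3)

def woa_step(X, best, t, T, b=1.0):
    a = 2.0 * (1 - t / T)                                  # linearly decreases 2 -> 0
    X_new = np.empty_like(X)
    for i, x in enumerate(X):
        r = rng.random(x.shape)
        A, C = 2 * a * r - a, 2 * rng.random(x.shape)
        if rng.random() < 0.5:                             # encircle the best / explore a random whale
            ref = best if np.all(np.abs(A) < 1) else X[rng.integers(len(X))]
            X_new[i] = ref - A * np.abs(C * ref - x)
        else:                                              # logarithmic spiral toward the best whale
            l = rng.uniform(-1, 1)
            X_new[i] = np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
    return X_new

X = rng.random((6, 3))                                     # 6 whales, 3 decision variables
print(woa_step(X, best=X[0], t=10, T=100).round(3))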
16. A pipelining task offloading strategy via delay-aware multi-agent reinforcement learning in Cybertwin-enabled 6G network
Authors: Haiwen Niu, Luhan Wang, Keliang Du, Zhaoming Lu, Xiangming Wen, Yu Liu. Digital Communications and Networks, 2025, Issue 1, pp. 92-105 (14 pages).
Cybertwin-enabled 6th Generation (6G) networks are envisioned to support artificial intelligence-native management to meet the changing demands of 6G applications. Multi-Agent Deep Reinforcement Learning (MADRL) technologies driven by Cybertwins have been proposed for adaptive task offloading strategies. However, the random transmission delay between Cybertwin-driven agents and the underlying networks is not considered in related works; it destroys the standard Markov property and increases the decision reaction time, thereby degrading the performance of the task offloading strategy. To address this problem, we propose a pipelining task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process (MDP). Then, we design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption. Firstly, the state space is augmented using the last-received state and historical actions to rebuild the Markov property. Secondly, Gate Transformer-XL is introduced to capture the importance of historical actions and maintain a consistent input dimension despite the dynamic changes caused by random transmission delays. Thirdly, a sampling method and a new loss function, built on the difference between the current and target state values and the difference between the real and augmented state-action values, are designed to obtain state transition trajectories close to the real ones. Numerical results demonstrate that the proposed methods are effective in reducing reaction time and improving task offloading performance in random-delay Cybertwin-enabled 6G networks.
Keywords: Cybertwin; Multi-Agent Deep Reinforcement Learning (MADRL); task offloading; pipelining; delay-aware
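A minimal illustration of the state-augmentation idea from the abstract: the agent acts on the last observation it actually received plus the actions it has issued since then. Clearing the buffer on every new observation is a simplification of how delayed observations would really be matched to actions; the class and field names are hypothetical.

```python
# Delay-aware augmented state: (last received observation, actions issued since then).
from collections import deque

class DelayAwareState:
    def __init__(self, history_len=4):
        self.last_obs = None
        self.pending = deque(maxlen=history_len)   # actions not yet reflected in any observation

    def on_observation(self, obs):
        self.last_obs = obs
        self.pending.clear()                       # simplification: obs reflects all pending actions

    def on_action(self, action):
        self.pending.append(action)

    def augmented(self):
        return (self.last_obs, tuple(self.pending))

s = DelayAwareState()
s.on_observation({"queue": 3, "channel": 0.7})
s.on_action("offload"); s.on_action("local")       # decisions made before the next state arrives
print(s.augmented())
```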
17. Task offloading delay minimization in vehicular edge computing based on vehicle trajectory prediction
Authors: Feng Zeng, Zheng Zhang, Jinsong Wu. Digital Communications and Networks, 2025, Issue 2, pp. 537-546 (10 pages).
In task offloading, the movement of vehicles causes the switching of connected RSUs and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movements on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. Then, a Bi-LSTM-based model is proposed to predict the trajectories of vehicles. The service area is divided into several equal-sized grids. If the actual position of the vehicle and the position predicted by the model belong to the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the vehicle trajectory prediction. Considering the inevitable prediction error, we take some edge servers around the predicted area as candidate execution servers, and the data required for task execution are backed up to these candidate servers, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into a delay reduction in task offloading. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
Keywords: vehicular edge computing; task offloading; vehicle trajectory prediction; delay minimization; Bi-LSTM model
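The grid trick from the abstract in a few lines: a trajectory prediction counts as correct when it falls in the same fixed-size cell as the ground truth. The 50 m cell size and coordinates below are illustrative, not values from the paper.

```python
# Grid-based correctness check for a predicted vehicle position.
GRID = 50.0   # cell side length in meters (illustrative)

def to_cell(x, y, size=GRID):
    return (int(x // size), int(y // size))

def prediction_correct(pred_xy, true_xy, size=GRID):
    return to_cell(*pred_xy, size) == to_cell(*true_xy, size)

print(prediction_correct((1012.0, 487.0), (1030.5, 462.3)))   # True: same 50 m cell
print(prediction_correct((1012.0, 487.0), (1051.0, 462.3)))   # False: crosses a cell boundary
```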
18. RS-DRL-based offloading policy and UAV trajectory design in F-MEC systems
Authors: Yulu Yang, Han Xu, Zhu Jin, Tiecheng Song, Jing Hu, Xiaoqin Song. Digital Communications and Networks, 2025, Issue 2, pp. 377-386 (10 pages).
For better flexibility and greater coverage areas, Unmanned Aerial Vehicles (UAVs) have been applied in Flying Mobile Edge Computing (F-MEC) systems to offer offloading services to User Equipment (UE). This paper considers a disaster-affected scenario where UAVs undertake the role of MEC servers to provide computing resources for Disaster Relief Devices (DRDs). Considering the fairness of DRDs, a max-min problem is formulated to optimize the saved time by jointly designing the trajectories of the UAVs, the offloading policy, and the serving time under the constraint of the UAVs' energy capacity. To solve the above non-convex problem, we first model the service process as a Markov Decision Process (MDP) with the Reward Shaping (RS) technique, and then propose a Deep Reinforcement Learning (DRL)-based algorithm to find the optimal solution for the MDP. Simulations show that the proposed RS-DRL algorithm is valid and effective, and has better performance than the baseline algorithms.
Keywords: flying mobile edge computing; task offloading; reward shaping; deep reinforcement learning
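The abstract names reward shaping without details, so this sketch assumes the classic potential-based form r' = r + gamma*Phi(s') - Phi(s), which preserves the optimal policy; the potential used here (negative distance to the next target) is hypothetical, not the paper's shaping term.

```python
# Potential-based reward shaping around a base step reward.
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    return r + gamma * potential(s_next) - potential(s)

# Hypothetical potential: negative distance of the UAV to the next unserved DRD cluster.
potential = lambda s: -s["dist_to_target"]
s, s_next = {"dist_to_target": 120.0}, {"dist_to_target": 95.0}
print(shaped_reward(r=-1.0, s=s, s_next=s_next, potential=potential))  # moving closer is rewarded
```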
19. Improved PPO-Based Task Offloading Strategies for Smart Grids
Authors: Qian Wang, Ya Zhou. Computers, Materials & Continua, 2025, Issue 8, pp. 3835-3856 (22 pages).
Edge computing has transformed smart grids by lowering latency, reducing network congestion, and enabling real-time decision-making. Nevertheless, devising an optimal task-offloading strategy remains challenging, as it must jointly minimise energy consumption and response time under fluctuating workloads and volatile network conditions. We cast the offloading problem as a Markov Decision Process (MDP) and solve it with Deep Reinforcement Learning (DRL). Specifically, we present a three-tier architecture of end devices, edge nodes, and a cloud server, and enhance Proximal Policy Optimization (PPO) to learn adaptive, energy-aware policies. A Convolutional Neural Network (CNN) extracts high-level features from system states, enabling the agent to respond continually to changing conditions. Extensive simulations show that the proposed method reduces task latency and energy consumption substantially more than several baseline algorithms, thereby improving overall system performance. These results demonstrate the effectiveness and robustness of the framework for real-time task offloading in dynamic smart-grid environments.
Keywords: smart grid; task offloading; deep reinforcement learning; improved PPO algorithm; edge computing
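The standard PPO clipped surrogate loss that the "improved PPO" builds on, written for one batch with numpy; the CNN feature extractor and the paper's specific enhancements are not included, and the log-probabilities and advantages are placeholder numbers.

```python
# PPO clipped surrogate objective for a single batch.
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    ratio = np.exp(logp_new - logp_old)                        # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))            # minimized by gradient descent

logp_old = np.array([-1.2, -0.7, -2.0])
logp_new = np.array([-1.0, -0.9, -1.5])
adv      = np.array([0.8, -0.3, 1.5])
print(ppo_clip_loss(logp_new, logp_old, adv))
```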
20. Joint Cooperative Task Offloading and Computing Resource Allocation for Low Earth Orbit Satellites
Authors: Zhang Yuexia, Zhang Siyu, Zheng Hui. China Communications, 2025, Issue 10, pp. 88-100 (13 pages).
Multispectral low earth orbit (LEO) satellites are characterized by a large volume of captured data and high spatial resolution, which can provide rich image information and data support for a variety of fields, but it is difficult for them to satisfy low-delay and low-energy task processing requirements due to their limited computing resources. To address the above problems, this paper presents the LEO satellites cooperative task offloading and computing resource allocation (LEOC-TC) algorithm. Firstly, a LEO satellites cooperative task offloading system was designed so that the multispectral LEO satellites in the system could process their tasks locally or offload them to other LEO satellites with servers for processing, thus providing high-quality information-processing services for multispectral LEO satellites. Secondly, an optimization problem is established with the objective of minimizing the weighted sum of the total task processing delay and the total energy consumed by the multispectral LEO satellites, and the optimization problem is split into an offloading-ratio subproblem and a computing-resource subproblem. Finally, a Bernoulli mapping tuna swarm optimization algorithm is used to solve the above two subproblems separately in order to satisfy the system's demand for low delay and low energy consumption. Simulation results show that the total task processing cost of the LEOC-TC algorithm can be reduced by 63.32%, 66.67%, and 80.72% compared to the random offloading ratio algorithm, the average resource offloading algorithm, and the local computing algorithm, respectively.
Keywords: computing resource allocation; inter-satellite collaboration; low earth orbit satellites; task offloading
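The weighted delay-plus-energy objective described in the abstract, evaluated for one hypothetical split of a task between the local satellite and a neighbor; the weight and all cost terms are illustrative, not the paper's model.

```python
# Weighted sum of task processing delay and energy for one candidate offloading ratio.
def weighted_cost(delay_s, energy_j, omega=0.6):
    return omega * delay_s + (1 - omega) * energy_j

local_share, remote_share = 0.3, 0.7                          # candidate offloading split
delay  = max(local_share * 4.0, remote_share * 1.5 + 0.4)     # parallel execution + transfer delay
energy = local_share * 8.0 + remote_share * 3.0               # local compute + transmission energy
print(weighted_cost(delay, energy))
```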