Funding: Funded by the Fundamental Research Funds for the Central Universities (J2023-024, J2023-027).
Abstract: As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the network entities. In actual application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of the proposed solution, which achieves a 21% reduction in time delay and a 19% decrease in energy consumption compared with alternative task offloading schemes. Moreover, the scheme adapts swiftly to changes across various network environments.
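As a concrete illustration of the NSGA-II component, the sketch below implements the standard fast non-dominated sorting step over hypothetical (delay, energy) pairs for candidate offloading plans; the function names and sample values are illustrative, not taken from the paper.

```python
# Minimal sketch of NSGA-II fast non-dominated sorting over (delay, energy)
# objective pairs; names and sample data are illustrative, not from the paper.

def dominates(a, b):
    """True if solution a is no worse than b in both objectives and
    strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objectives):
    """Partition candidate offloading plans into Pareto fronts."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Each tuple is (delay in ms, energy in mJ) for one candidate offloading plan.
plans = [(21.0, 18.5), (19.2, 20.1), (25.3, 15.0), (22.0, 22.0)]
print(fast_non_dominated_sort(plans))  # front 0 holds the Pareto-optimal plans
```

The full algorithm additionally applies crowding-distance selection within fronts; the sorting step above is what turns the joint delay-energy objective into a ranking the genetic search can evolve against.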
Funding: National Research Foundation (NRF) Singapore mid-sized centre grant (NRF-MSG-2023-0002); FrontierCRP grant (NRF-F-CRP-2024-0006); A*STAR Singapore MTC RIE2025 project (M24W1NS005); IAF-PP project (M23M5a0069); Ministry of Education (MOE) Singapore Tier 2 project (MOE-T2EP50220-0014).
Abstract: The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, Deep-Seek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). The system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
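Since the neural computation here is carried out by thermo-optic Mach-Zehnder interferometers, the following sketch shows the textbook 2x2 MZI transfer matrix that such photonic meshes use as their weighting element; this is standard interferometer physics, not the paper's calibrated device model, and all parameter values are illustrative.

```python
# Textbook 2x2 transfer matrix of a Mach-Zehnder interferometer built from
# two 50:50 couplers around a phase shifter; a sketch of the weighting
# element in thermo-optic MZI meshes, not the paper's device model.
import numpy as np

def mzi(theta, phi):
    """Unitary of one MZI: internal phase theta, external phase phi."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    inner = np.diag([np.exp(1j * theta), 1.0])        # thermo-optic shift
    outer = np.diag([np.exp(1j * phi), 1.0])
    return outer @ bs @ inner @ bs

x = np.array([1.0, 0.0])                 # light in the top input port
y = mzi(theta=np.pi / 3, phi=0.0) @ x
print(np.abs(y) ** 2)                    # output power split set by theta
```

Cascading such 2x2 blocks in a triangular or rectangular mesh lets the phases implement an arbitrary weight matrix, which is the basis of MZI-mesh neural computation.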
Abstract: Rapid advances in artificial intelligence and big data have transformed the dynamic demands on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task owing to the large, distributed, and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that needs to be handled by the cloud environment. CBBM-WARMS first adopts an adaptive density peak clustering algorithm to cluster cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses CBBM for Virtual Machine (VM) deployment, contributing to optimal resource provisioning. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation confirms that CBBM-WARMS reduces SLA cost by 19.21% and the SLA violation rate by 18.74%, outperforming the compared autonomic cloud resource management frameworks.
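The workload-clustering stage builds on density peak clustering; as a sketch of the underlying idea (the classic Rodriguez-Laio formulation, not the paper's adaptive variant), points that are locally dense and far from any denser point are taken as cluster centers. The cutoff distance and synthetic workload features below are illustrative.

```python
# Sketch of the classic density-peak idea behind workload clustering:
# points with high local density and large distance to any denser point
# are cluster centers. Cutoff distance and sample data are illustrative.
import numpy as np

def density_peaks(X, d_c=1.0):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = (d < d_c).sum(axis=1) - 1                  # local density
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]           # denser points than i
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    return rho, delta, rho * delta                   # high product => center

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
rho, delta, score = density_peaks(X)
print("candidate centers:", np.argsort(score)[-2:])  # the two density peaks
```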
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61473066; in part by the Natural Science Foundation of Hebei Province under Grant No. F2021501020; in part by the S&T Program of Qinhuangdao under Grant No. 202401A195; in part by the Science Research Project of Hebei Education Department under Grant No. QN2025008; and in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant No. 22567637H.
Abstract: Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle the intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and the renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
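Since the decoupled subproblem is a typical knapsack, a minimal dynamic-programming sketch is shown below; the mapping of tasks to items (edge CPU cycles as weight, device energy saved as value) and all numbers are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: treat each task as a knapsack item whose "weight" is the edge CPU
# budget it consumes and whose "value" is the device energy it saves when
# offloaded; the capacity and task numbers below are illustrative.

def knapsack(weights, values, capacity):
    """Classic 0/1 knapsack DP: max total value within capacity."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # reverse scan: each item once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

cycles = [3, 4, 2, 5]       # edge CPU budget each task would consume
saved = [40, 55, 20, 70]    # device energy (mJ) saved by offloading it
print(knapsack(cycles, saved, capacity=9))  # best achievable energy saving
```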
Abstract: Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It also evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and concludes by identifying the guidance that SDN-enabled computing environments need to address upcoming management challenges.
Funding: Funded by Researchers Supporting Project Number (RSPD2025R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to adjust resources dynamically in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out in a 360-minute environment with eight distinct scenarios. The proposed framework achieves up to 98% task offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62371012, and in part by the Beijing Natural Science Foundation under Grant 4252001.
Abstract: As an essential element of intelligent transport systems, the Internet of vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, this article presents an optimal framework for the UAV-assisted MEC system for IoV that minimizes the average system cost. Through joint consideration of computation offloading decisions and computational resource allocation, the optimization problem of the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and an alternating direction method of multipliers (ADMM)-based distributed optimal scheme is developed. The simulation results illustrate that the presented scheme enhances system performance dramatically with regard to other schemes and also exhibits good convergence.
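The distributed scheme is built on ADMM; the following toy consensus-ADMM sketch shows the primal/consensus/dual update pattern that such schemes iterate, applied to a simple quadratic problem with a known optimum. The local data and penalty parameter rho are illustrative.

```python
# Generic consensus-ADMM skeleton on a toy problem: n agents agree on a
# value minimizing sum_i (x - a_i)^2. Local data a_i and rho illustrative.
import numpy as np

a = np.array([1.0, 4.0, 7.0])          # each agent's private objective data
n, rho = len(a), 1.0
x = np.zeros(n)                        # local copies of the decision variable
z = 0.0                                # consensus variable
u = np.zeros(n)                        # scaled dual variables

for _ in range(100):
    x = (2 * a + rho * (z - u)) / (2 + rho)   # local primal updates
    z = np.mean(x + u)                        # consensus (averaging) step
    u = u + x - z                             # dual ascent step

print(z, a.mean())   # consensus converges to the analytic optimum, 4.0
```

The local x-update is the closed-form minimizer of each agent's augmented Lagrangian term, which is exactly the structure that lets ADMM distribute the computation across agents.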
Funding: Supported by the Future Network Scientific Research Fund Project (FNSRFP-2021-ZD-4); the National Natural Science Foundation of China (Nos. 61991404, 61902182); the National Key Research and Development Program of China under Grant 2020YFB1600104; and the Key Research and Development Plan of Jiangsu Province under Grant BE2020084-2.
Abstract: With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is used to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm offers significant improvements over baseline algorithms in terms of system delay.
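For the first stage, a potential-game formulation guarantees that simple best-response dynamics reach a pure Nash equilibrium; the sketch below illustrates this on a congestion-style node-selection game with hypothetical per-node costs, not the paper's actual utility functions.

```python
# First-stage idea as a sketch: task vehicles repeatedly best-respond when
# choosing an edge node; a congestion-style potential guarantees the loop
# terminates at a pure Nash equilibrium. Loads and costs are illustrative.

def best_response_offloading(n_vehicles=6, n_nodes=2, base=(1.0, 1.5)):
    choice = [0] * n_vehicles                 # everyone starts on node 0
    changed = True
    while changed:                            # finite by the potential-game property
        changed = False
        for v in range(n_vehicles):
            load = [choice.count(k) for k in range(n_nodes)]
            load[choice[v]] -= 1              # exclude v's own contribution
            cost = [base[k] * (load[k] + 1) for k in range(n_nodes)]
            best = min(range(n_nodes), key=lambda k: cost[k])
            if best != choice[v]:
                choice[v], changed = best, True
    return choice

print(best_response_offloading())   # equilibrium split across the two nodes
```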
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62231012; the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001; and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for ground users worldwide. In this paper, the computation offloading problem and the resource allocation problem are formulated as a mixed integer nonlinear programming (MINLP) problem. We propose a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, the expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility values, at a considerable time cost, compared with other algorithms.
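The Lagrange-multiplier step follows a standard pattern: with compute delay sum_i c_i / f_i and budget sum_i f_i <= F, stationarity gives f_i = sqrt(c_i / lambda), and the multiplier is found by bisection on the budget constraint. The sketch below uses illustrative workloads; the paper's exact objective may differ.

```python
# Sketch of the Lagrange-multiplier allocation pattern: split server CPU F
# among users to minimize total compute delay sum_i c_i / f_i subject to
# sum_i f_i <= F. Stationarity gives f_i = sqrt(c_i / lam); bisection finds
# the multiplier lam that exhausts the budget. Workloads are illustrative.
import math

def allocate(c, F):
    lo, hi = 1e-9, 1e9                        # bracket for the multiplier
    for _ in range(200):
        lam = (lo + hi) / 2
        f = [math.sqrt(ci / lam) for ci in c]
        if sum(f) > F:
            lo = lam                          # allocation too generous: raise lam
        else:
            hi = lam
    return f

c = [2.0, 8.0, 18.0]                          # task cycles per user
f = allocate(c, F=12.0)
print([round(x, 3) for x in f], round(sum(f), 3))
# matches the closed form F * sqrt(c_i) / sum_j sqrt(c_j): [2.0, 4.0, 6.0]
```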
Funding: Supported by the Basic Scientific Research Business Fund Project of Higher Education Institutions in Heilongjiang Province (145409601) and the First Batch of Experimental Teaching and Teaching Laboratory Construction Research Projects in Heilongjiang Province (SJGZ20240038).
Abstract: In the wake of major natural or human-made disasters, the communication infrastructure within disaster-stricken areas is frequently damaged. Unmanned aerial vehicles (UAVs), thanks to merits such as rapid deployment and high mobility, are commonly regarded as an ideal option for constructing temporary communication networks. Considering the limited computing capability and battery power of UAVs, this paper proposes a two-layer UAV cooperative computation offloading strategy for emergency disaster relief scenarios. The multi-agent twin delayed deep deterministic policy gradient (MATD3) algorithm, integrated with prioritized experience replay (PER), is utilized to jointly optimize the scheduling strategies of UAVs, the task offloading ratios, and their mobility, aiming to minimize the energy consumption and delay of the system. To address this non-convex optimization problem, a Markov decision process (MDP) is established. Simulation results demonstrate that, compared with four baseline algorithms, the proposed algorithm exhibits better convergence performance, verifying its feasibility and efficacy.
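As an illustration of the PER component, here is a minimal proportional prioritized replay buffer of the kind MATD3+PER samples from; the hyperparameters and toy transitions are illustrative, and a production buffer would use a sum-tree rather than linear scans.

```python
# Minimal proportional prioritized-experience-replay buffer; alpha/beta
# values and the toy transitions are illustrative.
import numpy as np

class PERBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def add(self, transition):
        p = max(self.prio, default=1.0)      # new samples get max priority
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prio.pop(0)
        self.data.append(transition); self.prio.append(p)

    def sample(self, batch, beta=0.4):
        p = np.array(self.prio) ** self.alpha
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch, p=probs)
        w = (len(self.data) * probs[idx]) ** (-beta)   # importance weights
        return idx, [self.data[i] for i in idx], w / w.max()

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prio[i] = abs(e) + 1e-6     # re-prioritize by TD error

buf = PERBuffer(capacity=100)
for t in range(10):
    buf.add((t, "action", 0.0))              # toy transitions
idx, batch, w = buf.sample(batch=4)
buf.update(idx, td_errors=np.random.randn(4))
```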
Abstract: Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. A novel approach is introduced in this study to decrease a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict and allocate incoming workloads flexibly. The proposed methodology integrates workload prediction utilising long short-term memory (LSTM) networks with efficient load-balancing techniques led by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can efficiently allocate resources, prioritising speed and energy preservation. The experimental results demonstrate that the DRL-based load balancing system significantly reduces average response times and energy usage compared to traditional methods. This approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance, consistently delivering reliable and durable performance across a range of dynamic workloads.
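A minimal sketch of the LSTM forecasting front end follows, assuming a sliding-window, one-step-ahead setup; layer sizes, window length, and the synthetic utilization trace are illustrative choices, not the paper's configuration.

```python
# Sketch of an LSTM workload predictor feeding a DRL balancer: a
# one-step-ahead forecaster over a sliding window of utilization readings.
import torch
import torch.nn as nn

class WorkloadLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # predict next utilization value

    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # use the last hidden state

# Toy sinusoidal "CPU utilization" trace, windowed for one-step forecasting.
t = torch.arange(0, 200, dtype=torch.float32)
series = 0.5 + 0.4 * torch.sin(t / 8)
X = torch.stack([series[i:i + 16] for i in range(180)]).unsqueeze(-1)
y = series[16:196].unsqueeze(-1)

model = WorkloadLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(50):                      # brief full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward(); opt.step()
print(float(loss))                           # falls well below the variance
```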
Abstract: This article explores the characteristics of data resources from the perspective of production factors, analyzes the demand for trustworthy circulation technology, and designs a fusion architecture and related solutions, including multi-party data intersection calculation and distributed machine learning. It also compares performance differences, conducts formal verification, points out the value and limitations of the architectural innovation, and looks forward to future opportunities.
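Multi-party data intersection is commonly realized as private set intersection (PSI); as a deliberately simplified sketch of the idea, the parties below exchange keyed hashes instead of raw identifiers. Real deployments use stronger OPRF- or ECDH-based protocols, and the shared key here is a hypothetical stand-in.

```python
# Simplest keyed-hash sketch of private set intersection: parties compare
# HMAC tags rather than raw records. Illustrative only; production PSI uses
# OPRF/ECDH-based protocols so no single shared key must be trusted.
import hmac, hashlib

def tags(records, key):
    return {hmac.new(key, r.encode(), hashlib.sha256).hexdigest(): r
            for r in records}

key = b"shared-secret-negotiated-out-of-band"   # hypothetical joint key
party_a = tags({"alice", "bob", "carol"}, key)
party_b = tags({"bob", "dave", "carol"}, key)

common = party_a.keys() & party_b.keys()        # compare tags, not raw data
print(sorted(party_a[t] for t in common))       # ['bob', 'carol']
```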
Funding: Supported by the National Natural Science Foundation of China (62271096, U20A20157); the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202000626); the Natural Science Foundation of Chongqing, China (cstc2020jcyjzdxmX0024); the University Innovation Research Group of Chongqing (CXQT20017); the Youth Innovation Group Support Program of ICE Discipline of CQUPT (SCIE-QN-2022-04); the Chongqing Postdoctoral Science Special Foundation (2021XM3058); and the Chongqing Postgraduate Research and Innovation Project under Grant CYB22250.
Abstract: Vehicular Edge Computing (VEC) enhances the quality of user services by deploying a wealth of resources near vehicles. However, due to the highly dynamic and complex nature of vehicular networks, centralized decision-making for resource allocation proves inadequate within VECs; conversely, allocating resources via distributed decision-making consumes vehicular resources. To improve the quality of user service, we formulate a latency minimization problem and subdivide it into two subproblems to be solved through distributed decision-making. To mitigate the resource consumption caused by distributed decision-making, we propose a Reinforcement Learning (RL) algorithm based on a sequential alternating multi-agent mechanism, which effectively reduces the dimensionality of the action space without losing the informational content of actions, achieving network lightweighting. We discuss the rationality, generalizability, and inherent advantages of the proposed mechanism. Simulation results indicate that the proposed mechanism outperforms traditional RL algorithms in terms of stability, generalizability, and adaptability to scenarios with invalid actions, all while achieving network lightweighting.
Abstract: Accurate prediction of cloud resource utilization is critical: it helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. Current cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear predictions, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs; none of these achieve high-precision resource prediction in cloud computing systems. Therefore, we propose a new time series prediction model, called SDVformer, which builds on the Informer model by integrating Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture-of-experts (T-MOE) framework. The SG filter reduces noise and enhances the feature representation of the input data. The DVSA mechanism optimizes the selection of critical features to reduce computational complexity. The T-MOE framework adjusts the model structure based on different resource characteristics, improving prediction accuracy and adaptability. Experimental results show that SDVformer significantly outperforms baseline models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in prediction precision on both the Alibaba public dataset and a dataset collected by Beijing Jiaotong University (BJTU). Compared with the Informer model in particular, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
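The SG front end is the standard Savitzky-Golay smoother; the sketch below applies SciPy's implementation to a synthetic utilization trace. Window length, polynomial order, and the trace are illustrative choices, not the tuned values from the paper.

```python
# Savitzky-Golay denoising of a synthetic utilization trace: fit a cubic
# polynomial in each 31-sample sliding window. Parameters illustrative.
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 10, 500)
clean = 0.5 + 0.3 * np.sin(t) + 0.1 * np.sin(5 * t)   # latent usage pattern
noisy = clean + np.random.default_rng(1).normal(0, 0.05, t.size)

smoothed = savgol_filter(noisy, window_length=31, polyorder=3)
print(np.abs(noisy - clean).mean(), np.abs(smoothed - clean).mean())
# the smoothed error is substantially lower, as the filter preserves the
# local polynomial shape while averaging out the noise
```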
Funding: Supported in part by the National Natural Science Foundation of China (No. 62071393) and the Fundamental Research Funds for the Central Universities (2682023ZTPY058).
Abstract: With miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources, whereas the high-frequency (HF) band, adopted as one of the operating bands in 5G, has plentiful spectrum resources. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, where tasks are processed locally, at the macro-cell base station, or at a roadside unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function comprising latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, showing that it can achieve efficient edge computing in vehicular networks.
Funding: Supported by the National Key Research and Development Program of China (2022YFB3305900); the National Natural Science Foundation of China (Key Program) (62136003); the National Natural Science Foundation of China (62394345); the Major Science and Technology Projects of Longmen Laboratory (LMZDXM202206); and the Fundamental Research Funds for the Central Universities.
Abstract: Sequential-modular-based process flowsheeting software remains an indispensable tool for process design, control, and optimization. Yet, as the process industry advances in intelligent operation and maintenance, conventional sequential-modular process-simulation techniques present challenges regarding computationally intensive calculations and significant central processing unit (CPU) time requirements, particularly in large-scale design and optimization tasks. To address these challenges, this paper proposes a novel process-simulation parallel computing framework (PSPCF). The framework achieves layered parallelism in recycle processes at the unit-operation level. Notably, PSPCF introduces the concept of formulating simulation problems as task graphs and utilizes Taskflow, an advanced task graph computing system, for hierarchical parallel scheduling and execution of unit operation tasks. PSPCF also integrates an advanced work-stealing scheme to automatically balance thread resources against the demanding workload of unit operation tasks. For evaluation, both a simpler parallel column process and a more complex cracked gas separation process were simulated on a flowsheeting platform using PSPCF. The framework demonstrates significant time savings, achieving over a 60% reduction in processing time for the simpler process and a 35%–40% speed-up for the more complex separation process.
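PSPCF's task-graph formulation relies on the C++ Taskflow system; as a language-neutral sketch of the same idea, the snippet below runs each unit operation as soon as its upstream units finish, using Python threads. The toy flowsheet (a feed splitting into two columns that merge in a mixer) is illustrative, not the paper's case study.

```python
# Dependency-graph execution of unit operations: a unit is scheduled once
# all of its upstream units have been solved, so independent branches of
# the flowsheet run in parallel. Flowsheet and timings are illustrative.
from concurrent.futures import ThreadPoolExecutor
import threading, time

deps = {"feed": [], "col_A": ["feed"], "col_B": ["feed"],
        "mixer": ["col_A", "col_B"]}
remaining = {u: len(d) for u, d in deps.items()}          # unmet inputs
children = {u: [v for v, d in deps.items() if u in d] for u in deps}
lock, done = threading.Lock(), threading.Event()
solved = []

def run_unit(name, pool):
    time.sleep(0.1)                          # stand-in for the unit solve
    ready = []
    with lock:                               # bookkeeping under the lock
        solved.append(name)
        if len(solved) == len(deps):
            done.set()                       # whole flowsheet finished
        for c in children[name]:
            remaining[c] -= 1
            if remaining[c] == 0:            # all inputs of c are available
                ready.append(c)
    for c in ready:
        pool.submit(run_unit, c, pool)

with ThreadPoolExecutor(max_workers=4) as pool:
    pool.submit(run_unit, "feed", pool)      # source unit with no inputs
    done.wait()
print("order:", solved)                      # col_A and col_B ran in parallel
```

Taskflow adds work-stealing on top of this pattern, so idle threads pull queued unit tasks from busy ones instead of waiting on a central queue.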
Funding: Co-supported by the National Natural Science Foundation of China (No. 62271093); the Natural Science Foundation of Chongqing, China (No. CSTB2023NSCQ-LZX0108); and the Chongqing Graduate Research Innovation Project, China (No. CYS23093).
Abstract: In this work, we consider an unmanned aerial vehicle (UAV) aided covert edge computing architecture, where multiple sensors are scattered across the ground at a certain distance from one another. Each sensor can execute several computation tasks. In emergency scenarios, the computational capabilities of sensors are often limited, as seen in vehicular networks or Internet of Things (IoT) networks, and the UAV can be utilized to undertake parts of the computation tasks, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV-aided edge computing system in which parts of the computation tasks of multiple ground sensors are offloaded to the UAV while a warden, Willie, attempts to detect the transmissions. Since the sensors' transmit power, their offloading proportions, and the UAV's hovering height all affect covert performance, we propose a deep reinforcement learning framework to jointly optimize them. The proposed algorithm minimizes the average task processing delay of the system while guaranteeing that the sensors' transmissions are not detected by Willie under the covertness constraint. Extensive simulations verify the effectiveness of the proposed algorithm in decreasing the average task processing delay compared with other algorithms.
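One plausible way such a DRL agent encodes the covertness constraint is reward shaping: negative task delay plus a penalty whenever the warden's detection probability exceeds a budget. The delay and detection models below are simple monotone stand-ins, not the paper's channel model, and every constant is an illustrative assumption.

```python
# Sketch of covertness-constrained reward shaping for the joint action
# (transmit power, offload ratio, UAV height). Models are toy stand-ins.
import math

def delay(power, offload_ratio, local_rate=2.0, cpu_local=1.0, cpu_uav=4.0):
    """Toy processing delay: the offloaded part pays upload + UAV compute,
    the rest is computed locally; the slower branch dominates."""
    uplink = offload_ratio / (local_rate * math.log2(1 + power))
    return max(uplink + offload_ratio / cpu_uav,
               (1 - offload_ratio) / cpu_local)

def detection_prob(power, height):
    """Stand-in monotone model: more power and lower altitude make the
    transmission easier for the warden to detect."""
    return 1 - math.exp(-power / height)

def reward(power, offload_ratio, height, eps=0.1, penalty=50.0):
    excess = max(0.0, detection_prob(power, height) - eps)
    return -delay(power, offload_ratio) - penalty * excess

print(reward(0.5, 0.6, height=80.0))   # covert and fairly fast
print(reward(5.0, 0.6, height=20.0))   # detectable: heavily penalized
```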