The rapid advent of artificial intelligence and big data has transformed the dynamic demands on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task due to the hugely distributed and heterogeneous nature of the cloud. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload to be policed by the cloud environment. CBBM-WARMS first adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for potential Virtual Machine (VM) deployment, which contributes to the provisioning of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
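As an illustration of the fuzzy-logic step, the sketch below scores host availability from free CPU and memory fractions using trapezoidal membership functions and two Mamdani-style rules. The membership breakpoints, rule base, and all names are assumptions for illustration; the abstract does not specify CBBM-WARMS's actual fuzzy design.

```python
import numpy as np

def trap(x, a, b, c, d):
    """Trapezoidal membership: rises on [a,b], flat on [b,c], falls on [c,d]."""
    return float(np.interp(x, [a, b, c, d], [0.0, 1.0, 1.0, 0.0]))

def availability_score(cpu_free, mem_free):
    """Fuzzy availability of one host from free CPU/memory fractions (assumed rules)."""
    low_c  = trap(cpu_free, -1.0, 0.0, 0.1, 0.4)
    high_c = trap(cpu_free, 0.4, 0.7, 1.0, 2.0)
    low_m  = trap(mem_free, -1.0, 0.0, 0.1, 0.4)
    high_m = trap(mem_free, 0.4, 0.7, 1.0, 2.0)
    # Rule 1: IF cpu high AND mem high THEN availability high (centroid 0.9)
    # Rule 2: IF cpu low  OR  mem low  THEN availability low  (centroid 0.1)
    w_high = min(high_c, high_m)
    w_low = max(low_c, low_m)
    return (0.9 * w_high + 0.1 * w_low) / max(w_high + w_low, 1e-9)

hosts = {"vm-pool-1": (0.8, 0.6), "vm-pool-2": (0.2, 0.9)}   # hypothetical hosts
ranked = sorted(hosts, key=lambda h: availability_score(*hosts[h]), reverse=True)
print(ranked)   # schedule onto the most-available host first
```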
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and offers essential guidance for addressing upcoming management challenges in SDN-enabled computing environments.
Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to dynamically adjust resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out in a 360-minute environment with eight distinct scenarios. The proposed framework achieves up to 98% task offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in MEC-assisted IoV systems, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV that minimizes the average system cost is presented. Through joint consideration of computation offloading decisions and computational resource allocation, an optimization problem for the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this problem, the original non-convex problem is converted into a convex one, and a distributed optimal scheme based on the alternating direction method of multipliers (ADMM) is developed. The simulation results illustrate that the presented scheme enhances system performance dramatically compared with other schemes, and its convergence is also significant.
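The abstract does not reproduce the paper's exact convexified problem, so as a minimal sketch of the ADMM update pattern (x-minimization, z-minimization, scaled dual ascent), here is the classic ADMM solver for a lasso problem; the problem instance and parameter values are assumptions, not the paper's formulation.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Generic ADMM for min 0.5||Ax-b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))   # cached x-update factor
    for _ in range(iters):
        x = inv @ (Atb + rho * (z - u))          # x-minimization step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u = u + x - z                            # scaled dual ascent
    return z

rng = np.random.default_rng(0)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
print(admm_lasso(A, b)[:3])
```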
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To resolve the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), together with a distributed algorithm, PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, optimizes the real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm significantly improves system delay over baseline algorithms.
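To illustrate the first stage, the sketch below runs best-response dynamics on a toy congestion-style offloading game, where each vehicle's cost grows with the load other vehicles place on the same edge node; in a finite potential game such updates reach a pure Nash equilibrium. The linear cost model, sizes, and names are assumptions, not the paper's formulation.

```python
import numpy as np

def best_response_offloading(costs, max_rounds=100):
    """Best-response dynamics for a congestion-style offloading game.

    costs[i][k] -> base cost of vehicle i on edge node k; load adds linearly.
    In a finite potential game these updates converge to a pure Nash equilibrium.
    """
    n, m = costs.shape
    choice = np.zeros(n, dtype=int)              # initial offloading decisions
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            load = np.bincount(np.delete(choice, i), minlength=m)  # others' load
            k = int(np.argmin(costs[i] + load))  # vehicle i's best response
            if k != choice[i]:
                choice[i], changed = k, True
        if not changed:                          # no one deviates: equilibrium
            break
    return choice

rng = np.random.default_rng(1)
print(best_response_offloading(rng.uniform(1, 5, size=(6, 3))))
```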
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). A computation offloading algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power, and a convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation. In addition, an expression for the suboptimal local CPU cycles of users is derived by relaxation. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value at a considerable time cost compared with other algorithms.
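As a worked instance of the Lagrange-multiplier step, suppose the MEC server minimizes total processing delay sum_i c_i/f_i subject to sum_i f_i = F; stationarity of the Lagrangian gives f_i proportional to sqrt(c_i). This objective and the numbers below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def allocate_cpu(cycles, F):
    """KKT solution of min sum(c_i/f_i) s.t. sum(f_i) = F, f_i > 0.

    Stationarity: -c_i/f_i^2 + lam = 0  =>  f_i = sqrt(c_i/lam);
    the budget constraint then yields f_i = F*sqrt(c_i)/sum_j sqrt(c_j).
    """
    s = np.sqrt(np.asarray(cycles, dtype=float))
    return F * s / s.sum()

cycles = [2e9, 8e9, 4e9]            # per-task CPU cycles (assumed values)
f = allocate_cpu(cycles, F=10e9)    # server capacity: 10 GHz (assumed)
print(f, np.asarray(cycles) / f)    # allocation and the resulting delays
```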
With the miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied by vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources, while the high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, in which tasks are processed locally, at a macro-cell base station, or at a roadside unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function comprising latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, showing that it achieves efficient edge computing in vehicular networks.
Multispectral low earth orbit (LEO) satellites are characterized by large volumes of captured data and high spatial resolution, which can provide rich image information and data support for a variety of fields, but their limited computing resources make it difficult to satisfy low-delay and low-energy task processing requirements. To address these problems, this paper presents the LEO satellites cooperative task offloading and computing resource allocation (LEOC-TC) algorithm. First, a cooperative task offloading system is designed so that the multispectral LEO satellites can process their tasks locally or offload them to other LEO satellites equipped with servers, thus providing high-quality information-processing services. Second, an optimization problem is established that minimizes the weighted sum of the total task processing delay and total energy consumption of the multispectral LEO satellites, and it is split into an offloading-ratio subproblem and a computing-resource subproblem. Finally, a Bernoulli mapping tuna swarm optimization algorithm solves the two subproblems separately to satisfy the system's low-delay and low-energy demands. Simulation results show that the total task processing cost of the LEOC-TC algorithm is reduced by 63.32%, 66.67%, and 80.72% compared to the random offloading ratio algorithm, the average resource offloading algorithm, and the local computing algorithm, respectively.
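The abstract names Bernoulli mapping only as the chaotic ingredient of the tuna swarm optimizer, so here is a minimal sketch of one common use: initializing the search population with a piecewise Bernoulli shift map instead of uniform random draws. The map form, lambda value, and function name are assumptions.

```python
import numpy as np

def bernoulli_population(pop_size, dim, lower, upper, lam=0.4):
    """Chaotic population initialization via a Bernoulli shift map.

    One common form: x <- x/(1-lam) if x < 1-lam else (x-1+lam)/lam,
    which spreads candidates more evenly than plain uniform sampling.
    """
    pop = np.empty((pop_size, dim))
    x = np.random.default_rng(2).random(dim)     # chaotic seed per dimension
    for i in range(pop_size):
        x = np.where(x < 1 - lam, x / (1 - lam), (x - 1 + lam) / lam)
        pop[i] = lower + x * (upper - lower)     # map chaos into search bounds
    return pop

print(bernoulli_population(4, 3, lower=0.0, upper=1.0))
```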
Fog computing has emerged as an important technology which can improve the performance of computation-intensive and latency-critical communication networks. Nevertheless, fog computing Internet-of-Things (IoT) systems are susceptible to malicious eavesdropping attacks during information transmission, and this issue has not been adequately addressed. In this paper, we propose a physical-layer secure fog computing IoT system model, which improves the physical-layer security of fog computing IoT networks against the malicious eavesdropping of multiple eavesdroppers. The secrecy rate of the proposed model is analyzed, and the quantum galaxy-based search algorithm (QGSA) is proposed to solve the network's hybrid task scheduling and resource management problem. The computational complexity and convergence of the proposed algorithm are analyzed. Simulation results validate the efficiency of the proposed model and reveal the influence of various environmental parameters on fog computing IoT networks. Moreover, they demonstrate that the proposed hybrid task scheduling and resource management scheme can effectively enhance secrecy performance across different communication scenarios.
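For reference, a standard worst-case secrecy-rate expression against multiple eavesdroppers is Rs = [log2(1 + snr_user) - log2(1 + max_e snr_e)]^+, and the sketch below evaluates it for assumed SNR values; the paper's exact expression may differ.

```python
import numpy as np

def secrecy_rate(snr_user, snr_eves):
    """Worst-case secrecy rate in bit/s/Hz against multiple eavesdroppers:
    the legitimate rate minus the best eavesdropper's rate, floored at 0."""
    return max(0.0, np.log2(1 + snr_user) - np.log2(1 + max(snr_eves)))

print(secrecy_rate(snr_user=15.0, snr_eves=[2.0, 4.5, 1.2]))  # assumed SNRs
```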
Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been deemed a promising solution for energy-constrained devices to run smart applications with computation-intensive and latency-sensitive requirements, especially in infrastructure-limited areas or emergency scenarios. However, the multi-UAV-assisted MEC network remains largely unexplored. In this paper, dynamic trajectory optimization and computation offloading are studied in a multi-UAV-assisted MEC system where multiple UAVs fly over a target area with different trajectories to serve ground users. By considering dynamic channel conditions and random task arrivals, and jointly optimizing the UAVs' trajectories, user association, and subchannel assignment, a problem minimizing the average long-term sum of user energy consumption is formulated. To address this problem, which involves both discrete and continuous variables, a hybrid-decision deep reinforcement learning (DRL)-based intelligent energy-efficient resource allocation and trajectory optimization algorithm, named HDRT, is proposed, in which a deep Q network (DQN) and deep deterministic policy gradient (DDPG) process the discrete and continuous variables, respectively. Simulation results show that the proposed HDRT algorithm converges fast and outperforms other benchmarks in terms of user energy consumption and latency.
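Structurally, a hybrid DQN/DDPG agent picks the discrete part of the action by an argmax over Q-values and the continuous part from an actor network. The sketch below shows only that split, with toy linear stand-ins for both networks; all sizes, weights, and names are assumptions, not HDRT's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
N_SUBCH, STATE_DIM = 4, 8                      # assumed action/state sizes

def q_values(state, W):                         # stand-in for the DQN head
    return W @ state                            # Q(s, a) for each discrete action

def actor(state, V):                            # stand-in for the DDPG actor
    return np.tanh(V @ state)                   # continuous action in [-1, 1]

W = rng.normal(size=(N_SUBCH, STATE_DIM))      # toy DQN "weights"
V = rng.normal(size=(1, STATE_DIM))            # toy actor "weights"

state = rng.normal(size=STATE_DIM)
subchannel = int(np.argmax(q_values(state, W)))    # discrete part: DQN argmax
power = float(0.5 * (actor(state, V)[0] + 1.0))    # continuous part, rescaled to [0, 1]
print(subchannel, power)
```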
Inter-datacenter elastic optical networks (EONs) need to serve cloud computing requests that require not only connectivity and computing resources but also network survivability. In this paper, to realize joint allocation of computing and connectivity resources in survivable inter-datacenter EONs, a survivable routing, modulation level, spectrum, and computing resource allocation (SRMLSCRA) algorithm and three datacenter selection strategies, i.e., Computing Resource First (CRF), Shortest Path First (SPF), and Random Destination (RD), are proposed for different scenarios. Unicast and manycast are applied to the communication of computing requests, and the routing strategies are calculated respectively. Simulation results show that SRMLSCRA-CRF serves the largest number of protected computing tasks, and its computing-request blocking probability is reduced by 29.2%, 28.3%, and 30.5% compared with SRMLSCRA-SPF, SRMLSCRA-RD, and the benchmark EPS-RMSA algorithm, respectively. Therefore, it is more applicable to networks with huge computing demands. Besides, SRMLSCRA-SPF consumes the least spectrum, making it suitable for scenarios where the amount of computation is small and communication resources are scarce. The results demonstrate that the proposed methods realize the joint allocation of computing and connectivity resources, provide efficient protection for services under single-link failure, and occupy less spectrum.
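Spectrum assignment in an EON must find contiguous frequency slots that are free on every link of the chosen path. The sketch below implements the common first-fit policy on a boolean link-by-slot occupancy table; the data layout and names are assumptions, and the paper's SRMLSCRA additionally handles modulation selection and survivability.

```python
import numpy as np

def first_fit(path_links, occupancy, demand_slots):
    """First-fit spectrum assignment: return the lowest start index of a block
    of `demand_slots` contiguous slots free on every link of the path.
    `occupancy` is a (num_links, num_slots) boolean array (True = busy)."""
    busy = occupancy[path_links].any(axis=0)     # slot busy on any path link
    run = 0
    for s, b in enumerate(busy):
        run = 0 if b else run + 1
        if run == demand_slots:
            return s - demand_slots + 1          # start index of the block
    return None                                  # blocked: no contiguous block

occ = np.zeros((5, 10), dtype=bool)
occ[1, 2:5] = True                               # some slots taken on link 1
print(first_fit([0, 1, 3], occ, demand_slots=3)) # -> 5
```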
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, and thus the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain-enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, together with a completion-time-based primary node selection to avoid monopolization by certain edge nodes and enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm; the total delay can be reduced by up to 40% compared with the non-cooperative case.
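A minimal sketch of the two selection rules named above: pick the consensus committee by reputation and the primary by the shortest recent completion time. The scoring data, committee size, and names are assumed for illustration.

```python
import numpy as np

def select_consensus_nodes(reputation, completion_time, n_consensus):
    """Committee = top-reputation nodes; primary = committee member with
    the shortest recent completion time (sketch of the abstract's rules)."""
    committee = np.argsort(reputation)[::-1][:n_consensus]   # top reputations
    primary = committee[np.argmin(completion_time[committee])]
    return committee, primary

rep = np.array([0.9, 0.4, 0.8, 0.7, 0.2])        # assumed reputation scores
ct = np.array([120.0, 80.0, 95.0, 60.0, 150.0])  # assumed completion times (ms)
print(select_consensus_nodes(rep, ct, n_consensus=3))
```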
Based on the monitoring and discovery service 4 (MDS4) model, a monitoring model for a data grid which supports reliable storage and intrusion tolerance is designed. The load characteristics and indicators of computing resources in the monitoring model are analyzed. Then, a time-series autoregressive prediction model is devised, and an autoregressive support vector regression (ARSVR) monitoring method is put forward to predict the node load of the data grid. Finally, a model for historical observation sequences is set up using the autoregressive (AR) model and the model order is determined. The support vector regression (SVR) model is trained using historical data and the regression function is obtained. Simulation results show that the ARSVR method can effectively predict the node load.
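A minimal sketch of the ARSVR idea: build AR-style features by regressing each load sample on its previous `order` values and fit an SVR to them. The window length, kernel, and synthetic trace are assumptions; the paper determines the order from the AR model itself.

```python
import numpy as np
from sklearn.svm import SVR

def ar_svr_forecast(load, order=5):
    """AR-style SVR: each sample is regressed on its previous `order` values,
    then the fitted model produces a one-step-ahead load forecast."""
    X = np.array([load[i:i + order] for i in range(len(load) - order)])
    y = load[order:]
    model = SVR(kernel="rbf", C=10.0).fit(X, y)
    return model.predict(load[-order:].reshape(1, -1))[0]

t = np.arange(200)
load = 0.5 + 0.3 * np.sin(t / 8) + 0.02 * np.random.default_rng(4).normal(size=200)
print(ar_svr_forecast(load))   # predicted next node-load sample
```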
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. Achieving this, however, necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. It first reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Focusing on these challenges, it then proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing communication costs over unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
The Internet of Things (IoT) has emerged as an important future technology. IoT-Fog is a new computing paradigm that processes IoT data on servers close to the source of the data. In IoT-Fog computing, resource allocation and independent task scheduling aim to deliver the short-response-time services demanded by IoT devices and performed by fog servers. The heterogeneity of IoT-Fog resources and the huge amount of data that must be processed make scheduling fog computing tasks a challenging problem. This study proposes an Adaptive Firefly Algorithm (AFA) for dependent task scheduling in IoT-Fog computing. The proposed AFA is a modified version of the standard Firefly Algorithm (FA) that considers the execution times of submitted tasks, the impact of synchronization requirements, and the communication time between dependent tasks. As IoT-Fog computing depends mainly on distributed fog node servers that receive tasks dynamically, handling the communication and synchronization between dependent tasks is difficult, and the proposed AFA is designed for this dynamic nature of IoT-Fog environments. The AFA mechanism uses a dynamic light absorption coefficient to control the decrease in attractiveness over iterations. Its performance was benchmarked against the standard Firefly Algorithm (FA), Puma Optimizer (PO), Genetic Algorithm (GA), and Ant Colony Optimization (ACO) through simulations under light, typical, and heavy workload scenarios. Under heavy workloads, the proposed AFA obtained the shortest average execution time, 968.98 ms, compared to 970.96, 1352.87, 1247.28, and 1773.62 ms for FA, PO, GA, and ACO, respectively. The simulation results demonstrate the proposed AFA's ability to rapidly converge to optimal solutions, emphasizing its adaptability and efficiency under typical and heavy workloads.
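To make the dynamic-coefficient idea concrete, the sketch below performs one firefly iteration with the standard attractiveness beta = beta0 * exp(-gamma * r^2) while annealing gamma linearly over iterations. The annealing schedule, parameter values, and toy objective are assumptions, since the abstract does not give AFA's exact schedule.

```python
import numpy as np

def firefly_step(X, fitness, t, T, beta0=1.0, alpha=0.2,
                 gamma_min=0.1, gamma_max=2.0):
    """One firefly iteration with a dynamic light absorption coefficient
    gamma(t), annealed linearly here. Lower fitness = brighter firefly."""
    gamma = gamma_min + (gamma_max - gamma_min) * t / T
    rng = np.random.default_rng(t)
    X = X.copy()
    n = len(X)
    for i in range(n):
        for j in range(n):
            if fitness[j] < fitness[i]:                   # j is brighter
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)        # attractiveness
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(X.shape[1]) - 0.5)
    return X

X = np.random.default_rng(5).uniform(-5, 5, size=(6, 2))
fit = (X ** 2).sum(axis=1)                                # toy sphere objective
print(firefly_step(X, fit, t=1, T=50)[:2])
```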
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
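As a sketch of the bandit component, the code below runs plain UCB1 over candidate servers with reward = negative observed delay, picking the arm that maximizes mean reward + sqrt(2 ln t / n). The delay model and sizes are assumptions, and the paper's algorithm further exploits device cooperation to share observations.

```python
import numpy as np

def ucb_offload(delays, horizon=500):
    """UCB1 over servers with reward = -observed delay (generic sketch)."""
    k = len(delays)
    n = np.zeros(k)                    # pulls per server
    mean = np.zeros(k)                 # running mean reward
    rng = np.random.default_rng(6)
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                  # play each arm once first
        else:
            a = int(np.argmax(mean + np.sqrt(2 * np.log(t) / n)))
        r = -rng.exponential(delays[a])           # noisy negative delay
        n[a] += 1
        mean[a] += (r - mean[a]) / n[a]           # incremental mean update
    return int(np.argmax(n))                      # most-selected server

print(ucb_offload([3.0, 1.0, 2.0]))               # assumed mean delays -> picks 1
```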
Users and edge servers are not fully mutually trusted in mobile edge computing (MEC), and hence blockchain can be introduced to provide trustable MEC. In blockchain-based MEC, each edge server functions as a node in both MEC and the blockchain, processing users' tasks and then uploading the task-related information to the blockchain. That is, each edge server runs both users' offloaded tasks and blockchain tasks simultaneously. Note that there is a trade-off between the resource allocation for MEC and blockchain tasks; therefore, the allocation of edge server resources between the blockchain and the MEC is crucial for the processing delay of blockchain-based MEC. Most existing research tackles resource allocation in either the blockchain or the MEC alone, which leads to unfavorable performance of the blockchain-based MEC system. In this paper, we study how to allocate the computing resources of edge servers to MEC and blockchain tasks with the aim to minimize the total system processing delay. For this problem, we propose a computing resource Allocation algorithm for Blockchain-based MEC (ABM), which utilizes Slater's condition, the Karush-Kuhn-Tucker (KKT) conditions, partial derivatives of the Lagrangian function, and the subgradient projection method to obtain the solution. Simulation results show that ABM converges and effectively reduces the processing delay of blockchain-based MEC.
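As a convex toy instance of this trade-off, the sketch below splits one server's capacity F between MEC and blockchain workloads by projected gradient descent on D(x) = c_mec/(x*F) + c_bc/((1-x)*F), whose optimum is x = r/(1+r) with r = sqrt(c_mec/c_bc). The single-server objective and the numbers are assumptions; the paper handles the general multi-server case via KKT conditions and subgradient projection.

```python
import numpy as np

def split_cpu(c_mec, c_bc, F, eta=0.05, iters=500):
    """Projected gradient descent on the total delay when fraction x of one
    edge server's capacity F serves MEC tasks and 1-x serves blockchain tasks."""
    x = 0.5
    for _ in range(iters):
        grad = (-c_mec / x**2 + c_bc / (1 - x) ** 2) / F
        x = float(np.clip(x - eta * grad, 1e-3, 1 - 1e-3))   # project to (0, 1)
    return x

x = split_cpu(c_mec=6e9, c_bc=2e9, F=10e9)   # assumed cycle demands/capacity
print(x)                                      # ~0.634 = sqrt(3)/(1+sqrt(3))
```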
Currently, applications access remote computing resources mainly through cloud data centers, but this mode of operation greatly increases communication latency and reduces overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to the task execution end, and can effectively mitigate the communication latency problem. However, the massive number and heterogeneity of servers in edge computing systems bring new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time-series forecasting model incorporating Conv1D, LSTM, and GRU layers for edge computing device resource scheduling, train and test the forecasting model on a small self-built dataset, and achieve competitive experimental results.
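A minimal sketch of a Conv1D + LSTM + GRU forecaster in Keras follows; the layer sizes, window length, and synthetic utilization trace are assumptions, since the abstract does not give the paper's hyperparameters or dataset.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 24                                   # past samples per prediction (assumed)
model = keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),          # univariate resource-usage series
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    layers.LSTM(32, return_sequences=True),   # longer-range dependencies
    layers.GRU(16),                           # cheaper recurrent summarization
    layers.Dense(1),                          # next-step usage forecast
])
model.compile(optimizer="adam", loss="mse")

# Toy training data: sliding windows over a synthetic utilization trace.
trace = 0.5 + 0.3 * np.sin(np.arange(500) / 10.0)
X = np.stack([trace[i:i + WINDOW] for i in range(len(trace) - WINDOW)])[..., None]
y = trace[WINDOW:]
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```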
With the rapid development of urban rail transit, existing track detection suffers from problems such as low efficiency and insufficient detection coverage, so an intelligent and automatic UAV-based track detection method is urgently needed to avoid major safety accidents. At the same time, the geographical distribution of IoT devices leads to inefficient use of the significant computing potential held by a large number of devices. The Dispersed Computing (DCOMP) architecture enables collaborative computing between devices in the Internet of Everything (IoE), promotes low-latency, efficient applications across wide areas, and meets users' growing needs for computing performance and service quality. This paper examines the resource allocation challenge within a dispersed computing environment that supports UAV track inspection. The system takes both resource constraints and computational constraints into account and transforms the optimization problem into an energy minimization problem with computational constraints. A Markov Decision Process (MDP) model captures the connection between the dispersed computing resource allocation strategy and the system environment, and a method based on a Double Deep Q-Network (DDQN) is introduced to derive the optimal policy, with an experience replay mechanism implemented to tackle the issue of increasing dimensionality. Experimental simulations validate the efficacy of the method across various scenarios.
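The core of DDQN is its target computation: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of plain DQN. The sketch below shows that target rule with toy linear stand-ins for both networks; all sizes and names are assumptions.

```python
import numpy as np

def ddqn_targets(batch, q_online, q_target, gamma=0.99):
    """Double DQN targets: select the next action with the online network,
    evaluate it with the target network (decoupling selection from evaluation).
    `q_online`/`q_target` map a batch of states to per-action Q-value arrays."""
    s, a, r, s_next, done = batch
    a_star = np.argmax(q_online(s_next), axis=1)            # select: online net
    q_next = q_target(s_next)[np.arange(len(r)), a_star]    # evaluate: target net
    return r + gamma * (1.0 - done) * q_next                # bootstrap unless terminal

# Toy stand-in networks over 4 actions and 3-dim states (assumed sizes).
rng = np.random.default_rng(7)
W_on, W_tg = rng.normal(size=(4, 3)), rng.normal(size=(4, 3))
q_on = lambda S: S @ W_on.T
q_tg = lambda S: S @ W_tg.T
batch = (rng.normal(size=(5, 3)), rng.integers(0, 4, 5), np.ones(5),
         rng.normal(size=(5, 3)), np.zeros(5))
print(ddqn_targets(batch, q_on, q_tg))
```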