Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks according to their priority. Subsequently, we apply a Dueling Double Deep Q-Network (Dueling-DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
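To make the offloading stage concrete: the paragraph above combines a dueling architecture, Double-DQN targets, and prioritized replay. The sketch below is a minimal PyTorch illustration of the first two ingredients only; the layer sizes, state encoding, and the d3-DDPG specifics are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# Double-DQN target: the online net picks the action, the target net evaluates it.
def double_dqn_target(online, target, reward, next_state, gamma=0.99):
    with torch.no_grad():
        best = online(next_state).argmax(dim=-1, keepdim=True)
        return reward + gamma * target(next_state).gather(-1, best).squeeze(-1)
```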
Container-based virtualization technology has recently seen wider use in edge computing environments due to its lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, multiple network functions are usually instantiated in the form of containers, and the generated containers are interconnected to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks poses great challenges to optimal CC placement. This paper regards the charges for the various resources occupied in providing services as revenue, and service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic-based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, by introducing GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve placement quality. The experimental results show that under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
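The RL-GCN internals are not reproduced above, but the GCN propagation rule it builds on is standard. A minimal NumPy sketch, applied to a hypothetical three-container chain (the features and weights are illustrative assumptions):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj:    (n, n) container-to-container adjacency (1 = interconnected)
    feats:  (n, d) per-container features, e.g. CPU/memory demands
    weight: (d, d_out) learnable projection
    """
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # symmetric normalization
    return np.maximum(norm @ feats @ weight, 0.0)  # ReLU

# Toy cluster: 3 containers in a chain, 2 features each.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
feats = np.array([[0.2, 0.5], [0.8, 0.1], [0.4, 0.4]])
emb = gcn_layer(adj, feats, np.random.randn(2, 4) * 0.1)
```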
Smart edge computing (SEC) is a novel computing paradigm that transfers cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC, which has better coverage. However, because access conditions change constantly and network devices have heterogeneous resources, it is difficult to establish consistent, dependable, and instantaneous communication between edge devices and their processors, especially in 5G Heterogeneous Network (HN) scenarios. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which integrates blockchain, edge computing, and Artificial Intelligence (AI) into 5G HNs, is proposed in this paper. Within this framework, a dual-schedule deep reinforcement learning (DS-DRL) technique is developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
With the increasing deployment of Unmanned Aerial Vehicle-Hangar (UAV-H) clusters in dynamic environments such as disaster response and precision agriculture, existing networking schemes often struggle to adapt to complex scenarios, while traditional Vertical Handoff (VHO) algorithms fail to fully address the unique challenges of UAV-H systems, including high-speed mobility and limited computational resources. To bridge this gap, this paper proposes a heterogeneous network architecture integrating 5th Generation Mobile Communication Technology (5G) cellular networks and self-organizing mesh networks for UAV-H clusters, accompanied by a novel VHO algorithm. The proposed algorithm leverages Multi-Attribute Decision-Making (MADM) theory combined with Genetic Algorithm (GA) optimization, incorporating edge computing to enable real-time decision-making and offload computational tasks efficiently. By constructing a utility function from attribute and weight matrices, the algorithm ensures UAV-H clusters dynamically select the optimal network access with the highest utility value. Simulation results demonstrate that the proposed method reduces the number of network handoffs by 26.13% compared to the Decision Tree VHO (DT-VHO), effectively mitigating the ping-pong effect, and enhances total system throughput by 19.99% under the same conditions. In terms of handoff delay, it outperforms the Artificial Neural Network VHO (ANN-VHO), significantly improving Quality of Service (QoS). Finally, real-world hardware platform experiments validate the algorithm's feasibility and superior performance in practical UAV-H cluster operations. This work provides a robust solution for seamless network connectivity in high-mobility UAV clusters, offering critical support for emerging applications requiring reliable and efficient wireless communication.
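The paper's utility function is built from attribute and weight matrices; one common MADM baseline of that form is simple additive weighting (SAW). The sketch below is an illustration under assumed attributes and weights, not the GA-optimized version described above.

```python
import numpy as np

def saw_utility(attributes, weights, benefit_mask):
    """Simple additive weighting over candidate networks.

    attributes:   (m, k) matrix, m candidate networks x k attributes
                  (e.g. bandwidth, delay, cost)
    weights:      (k,) attribute weights summing to 1 (GA would tune these)
    benefit_mask: (k,) True where larger is better, False where smaller is
    """
    a = attributes.astype(float)
    lo, hi = a.min(axis=0), a.max(axis=0)
    norm = np.where(benefit_mask, (a - lo) / (hi - lo + 1e-12),
                    (hi - a) / (hi - lo + 1e-12))    # min-max normalization
    return norm @ weights                             # utility per network

# Two hypothetical candidates: [bandwidth Mbps, delay ms, cost].
attrs = np.array([[80.0, 30.0, 5.0],    # 5G cellular link
                  [40.0, 12.0, 1.0]])   # mesh link
u = saw_utility(attrs, np.array([0.5, 0.3, 0.2]),
                np.array([True, False, False]))
best = int(np.argmax(u))  # index of the network the UAV-H would attach to
```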
With miscellaneous applications generated in vehicular networks, computing performance requirements cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, where tasks are processed locally, at a macro-cell base station, or at a roadside unit through the LF or HF band, achieving stable and high-speed task offloading. Moreover, a utility function comprising latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
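The exact utility function and iterative solver are not given above; as a hedged sketch, a weighted latency-plus-energy cost for offloading one task over one band, using the Shannon rate and illustrative parameter values:

```python
import math

def offload_cost(data_bits, cpu_cycles, bandwidth_hz, tx_power_w,
                 channel_gain, noise_w, server_hz, w_time=0.7, w_energy=0.3):
    """Weighted latency/energy cost of offloading one task over one band.

    Transmission uses the Shannon rate r = B log2(1 + p*h/noise); the
    weights and all parameter values here are illustrative assumptions.
    """
    rate = bandwidth_hz * math.log2(1.0 + tx_power_w * channel_gain / noise_w)
    t_tx = data_bits / rate                 # upload delay
    t_exec = cpu_cycles / server_hz         # edge execution delay
    e_tx = tx_power_w * t_tx                # radio energy spent by the vehicle
    return w_time * (t_tx + t_exec) + w_energy * e_tx

# Compare a narrow LF channel against a wide HF channel for the same task.
task = dict(data_bits=2e6, cpu_cycles=1e9, server_hz=5e9)
lf = offload_cost(bandwidth_hz=10e6, tx_power_w=0.5, channel_gain=1e-6,
                  noise_w=1e-10, **task)
hf = offload_cost(bandwidth_hz=400e6, tx_power_w=0.5, channel_gain=1e-7,
                  noise_w=1e-9, **task)
```

Comparing `lf` and `hf` shows why the wide HF channel wins on upload delay even with a weaker channel gain, which is the trade-off the dual-band scheduler exploits.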
Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications in satellite communication networks (SCNs). By deploying edge computing servers on satellites and at gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, considering two different satellite edge computing scenarios as well as local execution. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem. A game-theoretic and many-to-one matching theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed method achieves almost the same weighted-sum latency as the brute-force method with low complexity.
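Many-to-one matching problems of the kind JCCRA-GM solves are typically handled with deferred acceptance. The sketch below is a generic version with hypothetical tasks and servers; the paper's preference construction (presumably from latency estimates) and its game-theoretic stage are not shown.

```python
def deferred_acceptance(task_prefs, server_rank, quotas):
    """Many-to-one matching of tasks to edge servers (deferred acceptance).

    task_prefs:  {task: [servers in decreasing preference]}
    server_rank: {server: {task: rank, lower is better}}
    quotas:      {server: max tasks it can host}
    """
    matched = {s: [] for s in quotas}
    nxt = {t: 0 for t in task_prefs}                  # next server to propose to
    free = list(task_prefs)
    while free:
        t = free.pop()
        if nxt[t] >= len(task_prefs[t]):
            continue                                   # t stays unmatched
        s = task_prefs[t][nxt[t]]; nxt[t] += 1
        matched[s].append(t)
        if len(matched[s]) > quotas[s]:
            worst = max(matched[s], key=lambda x: server_rank[s][x])
            matched[s].remove(worst)                   # server rejects its worst
            free.append(worst)
    return matched

prefs = {"t1": ["sat", "gw"], "t2": ["sat", "gw"], "t3": ["sat"]}
ranks = {"sat": {"t1": 0, "t2": 1, "t3": 2}, "gw": {"t1": 0, "t2": 1}}
print(deferred_acceptance(prefs, ranks, {"sat": 1, "gw": 2}))
```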
Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential of acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper, we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. Resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, index computation, and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
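Which acoustic indices the devices compute is not listed above; spectral entropy is one index common in this literature, and the NumPy sketch below mirrors the on-edge pipeline (window, FFT, index, transmit a few bytes instead of raw audio) that the Teensy firmware would run with its hardware FFT support.

```python
import numpy as np

def spectral_entropy(frame, nfft=1024):
    """Normalized spectral entropy of one audio frame, in [0, 1].

    Tonal, structured sound gives low entropy; broadband noise gives
    values near 1. The index choice here is an illustrative assumption.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), nfft)) ** 2
    p = spectrum / (spectrum.sum() + 1e-12)       # spectral probability mass
    h = -(p * np.log2(p + 1e-12)).sum()           # Shannon entropy (bits)
    return h / np.log2(len(p))                    # normalize to [0, 1]

fs = 16_000
t = np.arange(fs) / fs
birdlike = np.sin(2 * np.pi * 4000 * t)           # tonal signal -> low entropy
noise = np.random.randn(fs)                       # broadband noise -> high entropy
print(spectral_entropy(birdlike[:1024]), spectral_entropy(noise[:1024]))
```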
The computation resources at a single node in Edge Computing (EC) are commonly limited and cannot execute large-scale computation tasks. To address this challenge, an Offloading scheme leveraging NEighboring node Resources (ONER) for EC over Fiber-Wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes over a wider area without increasing the number of wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Therefore, the ONER scheme and the CRS algorithm can schedule computation resources at neighboring edge computing nodes for offloading, meeting the challenge of large-scale computation tasks.
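The CRS algorithm itself is not reproduced above. As a loose, hedged stand-in, the greedy sketch below captures the decision it must make: send each task to the one-hop-reachable node with the earliest estimated completion time, with assumed rates and simple queue tracking.

```python
def schedule(tasks, nodes, fiber_rate_bps):
    """Greedy offloading sketch over FiWi-connected edge nodes.

    tasks: list of (data_bits, cpu_cycles); nodes: {name: cpu_hz}.
    Each node's queue is tracked as an earliest-free time; the uniform
    fiber transfer delay and all rates are illustrative assumptions.
    """
    free_at = {n: 0.0 for n in nodes}
    plan = []
    for data_bits, cycles in tasks:
        def finish(n):
            transfer = data_bits / fiber_rate_bps      # FiWi backhaul delay
            return free_at[n] + transfer + cycles / nodes[n]
        best = min(nodes, key=finish)                  # earliest completion wins
        free_at[best] = finish(best)
        plan.append((best, free_at[best]))
    return plan

nodes = {"local_edge": 2e9, "neighbor_a": 4e9, "neighbor_b": 8e9}
print(schedule([(1e6, 5e8), (2e6, 2e9), (1e6, 1e9)], nodes, 1e9))
```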
As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file-set parameters, we propose a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing the standard particle swarm optimization (PSO) method and a greedy-based multiple knapsack choice problem (MKCP) optimization method. Numerical results show that compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
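The greedy MKCP step can be sketched as a knapsack-style cache fill. The version below uses a popularity-to-size ratio as a stand-in for the PSO-optimized partition input, so treat all values and the node-selection rule as assumptions.

```python
def greedy_cache_placement(files, caches):
    """Greedy sketch of an MKCP-style step: fill node caches with the
    files of highest popularity-to-size ratio that still fit.

    files:  {name: (size, popularity)}; caches: {node: capacity}.
    """
    order = sorted(files, key=lambda f: files[f][1] / files[f][0], reverse=True)
    placement, left = {n: [] for n in caches}, dict(caches)
    for f in order:
        size, _ = files[f]
        node = max(left, key=left.get)       # node with the most remaining room
        if left[node] >= size:
            placement[node].append(f)
            left[node] -= size
    return placement

files = {"f1": (4, 0.5), "f2": (2, 0.3), "f3": (3, 0.15), "f4": (1, 0.05)}
print(greedy_cache_placement(files, {"sat": 5, "ground": 4}))
```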
In recent years, artificial intelligence and the automotive industry have developed rapidly, and autonomous driving has gradually become the focus of the industry. In road networks, the problem of proximity detection refers to detecting in real time whether two moving objects are close to each other. However, the battery life and computing capability of mobile devices are limited in real scenarios, which results in high latency and energy consumption. Therefore, determining the proximity relationships between mobile users with low latency and energy consumption is a tough problem. In this article, we aim to find a tradeoff between latency and energy consumption. We formalize the computation offloading problem based on mobile edge computing (MEC) into a constrained multiobjective optimization problem (CMOP) and utilize NSGA-II to solve it. The simulation results demonstrate that NSGA-II can find the Pareto set, which reduces latency and energy consumption effectively. In addition, the large number of solutions in the Pareto set gives us more choices for the offloading decision according to the actual situation.
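NSGA-II's output is a Pareto set over the two objectives above. A minimal sketch of the non-dominated filter that defines such a set, with hypothetical (latency, energy) candidates:

```python
def pareto_front(solutions):
    """Return the non-dominated set for (latency, energy) minimization.

    Each solution is (latency, energy, label); NSGA-II maintains exactly
    this kind of front while evolving offloading decisions.
    """
    def dominated(s):
        # s is dominated if some point is no worse in both objectives
        # and strictly better in at least one
        return any(o[0] <= s[0] and o[1] <= s[1] and
                   (o[0] < s[0] or o[1] < s[1]) for o in solutions)
    return [s for s in solutions if not dominated(s)]

candidates = [(120, 0.8, "local"), (60, 1.5, "offload"),
              (70, 1.2, "partial"), (130, 1.6, "never")]
print(pareto_front(candidates))  # (130, 1.6, "never") is dominated and dropped
```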
Ultra-Dense Network (UDN) has been envisioned as a promising technology to provide high-quality wireless connectivity in dense urban areas, in which the density of Access Points (APs) is increased up to the point where it is comparable with or surpasses the density of active mobile users. In order to mitigate inter-AP interference and improve spectrum efficiency, APs in UDNs are usually clustered into multiple groups to serve different mobile users. However, as the number of APs increases, the computational capability within an AP group has become the bottleneck of AP clustering. In this paper, we first propose a novel UDN architecture based on Mobile Edge Computing (MEC), in which each MEC server is associated with a user-centric AP cluster to act as a mobile agent. In addition, in the context of MEC-based UDN, we leverage mobility prediction techniques to achieve a dynamic AP clustering scheme, in which the cluster structure automatically adapts to the dynamic distribution of user traffic in a specific area. Simulation results show that the proposed scheme can substantially increase average user throughput compared with the baseline algorithm using max-SINR user association and equal bandwidth allocation, while at the same time guaranteeing low transmission delay.
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions, based on a detailed literature review. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency, so proper security measures need to be implemented to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and foster a secure and safe future for cloud computing. It is concluded that the security implications of edge computing can be addressed effectively with the help of new technologies and techniques.
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
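The device-cooperation details are the paper's contribution; the underlying UCB idea is standard. A minimal UCB1 sketch adapted to delay minimization (the exploration constant and all values are assumptions):

```python
import math

def ucb1_select(counts, mean_delays, t, c=2.0):
    """UCB1 pick over offloading targets when delays are initially unknown.

    counts:      times each target (arm) was chosen so far
    mean_delays: empirical mean completion delay per target
    Delay is a cost, so we minimize: score = mean - sqrt(c ln t / n).
    """
    for i, n in enumerate(counts):
        if n == 0:
            return i                        # try every target once first
    scores = [mean_delays[i] - math.sqrt(c * math.log(t) / counts[i])
              for i in range(len(counts))]
    return min(range(len(scores)), key=scores.__getitem__)

# Three targets: local, satellite MEC, ground MEC; t = total rounds so far.
print(ucb1_select(counts=[5, 3, 4], mean_delays=[0.9, 0.4, 0.6], t=12))
```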
Unmanned aerial vehicle (UAV)-enabled edge computing is emerging as a potential enabler for the Artificial Intelligence of Things (AIoT) in the forthcoming sixth-generation (6G) communication networks. With the use of flexible UAVs, massive sensing data is gathered and processed promptly regardless of geographical location. Deep neural networks (DNNs) are becoming a driving force for extracting valuable information from sensing data. However, the lightweight servers installed on UAVs are not able to meet the extremely high requirements of inference tasks due to the limited battery capacities of UAVs. In this work, we investigate a DNN model placement problem for AIoT applications, where trained DNN models are selected and placed on UAVs to execute inference tasks locally. Because it is impractical to obtain future DNN model request profiles and system operation states in UAV-enabled edge computing, the Lyapunov optimization technique is leveraged for the proposed DNN model placement problem. Based on the observed system state, an advanced online placement (AOP) algorithm is developed to solve the transformed problem in each time slot, which reduces DNN model transmission delay and disk I/O energy cost simultaneously while keeping the input data queues stable. Finally, extensive simulations are provided to demonstrate the effectiveness of the AOP algorithm. The numerical results show that the AOP algorithm reduces the model placement cost by 18.14% and the input data queue backlog by 29.89% on average compared with benchmark algorithms.
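The AOP algorithm is Lyapunov-based, so each slot reduces to a drift-plus-penalty choice. A minimal sketch of that per-slot decision and queue update, with an assumed action set and trade-off weight V:

```python
def drift_plus_penalty(queue, arrivals, actions, v=50.0):
    """One Lyapunov slot: pick the action minimizing V*cost + Q*(a - b).

    actions: list of (cost, service) pairs, e.g. which DNN models to keep
    on the UAV this slot; queue is the input-data backlog Q(t).
    The weight V trades placement cost against queue stability; its value
    here is an assumption, not the paper's.
    """
    def objective(act):
        cost, service = act
        return v * cost + queue * (arrivals - service)
    cost, service = min(actions, key=objective)
    new_queue = max(queue - service, 0.0) + arrivals   # Q(t+1) update
    return (cost, service), new_queue

# Toy: a cheap action serves little; an expensive one drains the queue.
choice, q = drift_plus_penalty(queue=120.0, arrivals=10.0,
                               actions=[(1.0, 5.0), (4.0, 30.0)])
```

With a large backlog the expensive, high-service action wins; with an empty queue the cheap one does, which is exactly the stability/cost trade-off V encodes.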
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). The system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software-Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures, and evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and offers guidance for SDN-enabled computing environments as new management opportunities emerge.
Non-orthogonal multiple access (NOMA) technology has recently been widely integrated into multi-access edge computing (MEC) to support task offloading in industrial wireless networks (IWNs) with limited radio resources. This paper minimizes the system overhead, in terms of task processing delay and energy consumption, for an IWN with hybrid NOMA and orthogonal multiple access (OMA) schemes. Specifically, we formulate the system overhead minimization (SOM) problem by considering the limited computation and communication resources and NOMA efficiency. To solve this complex mixed-integer nonconvex problem, we combine the multi-agent twin delayed deep deterministic policy gradient (MATD3) with convex optimization, namely MATD3-CO, for iterative optimization. We first decouple SOM into two sub-problems: a joint sub-channel allocation and task offloading sub-problem, and a computation resource allocation sub-problem. Then, we propose MATD3 to optimize the sub-channel allocation and task offloading ratio, and employ convex optimization to allocate the computation resource using a closed-form expression derived from the Karush-Kuhn-Tucker (KKT) conditions. The solution is obtained by iteratively solving these two sub-problems. The experimental results indicate that the MATD3-CO scheme, compared to the benchmark schemes, significantly decreases system overhead with respect to both delay and energy consumption.
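For the computation resource allocation sub-problem, a representative KKT closed form arises when minimizing total execution delay sum_i c_i/f_i under a CPU budget sum_i f_i = F: stationarity gives -c_i/f_i^2 + lambda = 0, so f_i is proportional to sqrt(c_i). The sketch below implements that form; the paper's actual objective may include additional terms, so this is an illustration of the derivation style rather than its exact expression.

```python
import math

def kkt_cpu_allocation(cycles, total_hz):
    """Closed-form allocation minimizing sum_i c_i / f_i s.t. sum_i f_i = F.

    From the Lagrangian L = sum_i c_i/f_i + lam*(sum_i f_i - F),
    stationarity yields f_i = sqrt(c_i/lam), i.e. f_i ∝ sqrt(c_i);
    the budget constraint fixes the proportionality constant.
    """
    roots = [math.sqrt(c) for c in cycles]
    return [total_hz * r / sum(roots) for r in roots]

# Three offloaded sub-tasks sharing a 10 GHz edge CPU.
f = kkt_cpu_allocation(cycles=[1e9, 4e9, 9e9], total_hz=10e9)
# sqrt ratios 1:2:3 -> allocations of roughly 1.67, 3.33, and 5.0 GHz
```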
In this paper, the security problem for the multi-access edge computing (MEC) network is studied, and an intelligent immunity-based security defense system is proposed to identify unauthorized mobile users and protect the security of the whole system. In the proposed security defense system, security is protected by intelligent immunity through three functions: an identification function, a learning function, and a regulation function. Meanwhile, a three-process-based intelligent algorithm is proposed for the intelligent immunity system. Numerical simulations are given to prove the effectiveness of the proposed approach.
5G is a new generation of mobile networking that aims to achieve unparalleled speed and performance. To accomplish this, three technologies have been a significant part of 5G and are the main focus of this paper: Device-to-Device communication (D2D), multi-access edge computing (MEC), and network function virtualization (NFV) with ClickOS. D2D enables direct communication between devices without relaying through a base station; in 5G, a two-tier cellular network composed of the traditional cellular system and D2D is an efficient method for realizing high-speed communication. MEC offloads work from end devices and cloud platforms to widespread nodes, and connects those nodes with outside devices and third-party providers, in order to diminish the overload on any single device caused by numerous applications and improve users' quality of experience (QoE). NFV is also applied to fulfill the 5G requirements: an optimized virtual machine for middleboxes named ClickOS is introduced and evaluated in several respects, and several middleboxes implemented on ClickOS have been shown to deliver outstanding performance.
Unmanned Aerial Vehicle (UAV) technology has emerged as a promising means of supporting human activities such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load of image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work, we propose an intelligent Task Offloading Algorithm (iTOA) for UAV edge computing networks. Compared with existing methods, iTOA is able to perceive the network environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates offloading decision trajectories to acquire the best decision by maximizing a reward such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probabilities for MCTS. The sDNN is trained by a self-supervised learning manager, with the training data set obtained from iTOA itself acting as its own teacher. Compared with game-theory and greedy-search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
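AlphaGo-style MCTS injects the network prior into child selection via the PUCT rule, which is presumably where the sDNN's prior probabilities enter iTOA. A minimal sketch with hypothetical statistics (the exploration constant is an assumption):

```python
import math

def puct_select(children, c_puct=1.5):
    """PUCT child selection as used in AlphaGo-style MCTS.

    children: list of dicts with visit count N, total value W, and prior P
    (the prior is what the sDNN would supply in iTOA).
    """
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0            # mean reward
        u = c_puct * ch["P"] * math.sqrt(total_n + 1) / (1 + ch["N"])
        return q + u                                          # exploit + explore
    return max(range(len(children)), key=lambda i: score(children[i]))

# Offloading actions: run locally, offload to edge, offload to cloud.
actions = [dict(N=10, W=6.0, P=0.2),   # local
           dict(N=4,  W=3.2, P=0.5),   # edge (high sDNN prior)
           dict(N=2,  W=0.8, P=0.3)]   # cloud
print(puct_select(actions))            # picks the edge action here
```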