Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Networking (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks according to their priority. Subsequently, we apply a Dueling Double Deep Q-Network (Dueling-DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
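The dueling architecture named in this abstract splits the Q-function into state-value and advantage streams. A minimal PyTorch sketch of that idea (layer sizes and names are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling Q-network: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.backbone(s)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable
        return v + a - a.mean(dim=-1, keepdim=True)
```

In the standard recipe this abstract builds on, the double-DQN update evaluates the online network's greedy action with a separate target network, and prioritized experience replay samples transitions in proportion to their TD error rather than uniformly.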
Container-based virtualization technology has recently been widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, the introduction of GCN allows the features of the association relationships between the containers in a CC to be effectively extracted, improving the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
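The GCN component aggregates features over the container association graph. A minimal NumPy sketch of the standard propagation rule (the shapes are illustrative; the paper's exact network is not specified in the abstract):

```python
import numpy as np

def gcn_layer(A: np.ndarray, H: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One GCN layer over a container association graph.

    A: adjacency matrix of the CC (n x n), H: node features (n x d),
    W: trainable weights (d x d_out). Kipf-Welling normalized propagation."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
```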
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC, which has better coverage. However, because access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) situations. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) in 5G HNs, is proposed in this paper. Within it, a unique dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
With the miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources, whereas the high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, in which tasks are processed locally, at the macro-cell base station, or at a road side unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
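The iterative algorithm is described only at a high level; alternating (block-wise) optimization is the common pattern for such non-convex utilities. A generic sketch, where the split into two blocks and the use of scipy are illustrative assumptions rather than the paper's algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def alternating_minimize(utility, x0, y0, n_rounds=20, tol=1e-6):
    """Fix one variable block, optimize the other, and repeat until the
    utility stops improving -- a sketch of the common iterative pattern."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    prev = np.inf
    for _ in range(n_rounds):
        x = minimize(lambda v: utility(v, y), x).x   # e.g., power/spectrum block
        y = minimize(lambda v: utility(x, v), y).x   # e.g., computing/scheduling block
        val = utility(x, y)
        if abs(prev - val) < tol:
            break
        prev = val
    return x, y, val
```

Each subproblem is easier than the joint one, and the utility is non-increasing across rounds, though only a local optimum is guaranteed for non-convex objectives.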
Commercial ultra-dense low-Earth-orbit (LEO) satellite constellations have recently been deployed to provide seamless global Internet services. To improve satellite network transmission efficiency and provide robust wide-coverage computing services for future sixth-generation (6G) users, growing attention has been focused on LEO-satellite-based computing networks, to which ground users can offload computation tasks. However, how to design a LEO satellite constellation for computing networks, while considering discrepancies in the computing requirements of different regions, remains an open question. In this paper, we investigate an ultra-dense LEO-satellite-based computing network in which ground user terminals (UTs) offload part of their computing tasks to satellites. We formulate the ultra-dense constellation design problem as a multi-objective optimization problem (MOOP) to maximize the average coverage rate, transmission capacity, and computational capability while minimizing the number of satellites. In order to depict the connectivity characteristics of satellite-based computing networks, we propose a terrestrial-satellite connectivity model to determine the coverage rate in different regions. We design a priority-adaptive algorithm to obtain the optimal inclined-orbit constellation by solving this MOOP. Simulation results verify the accuracy of our theoretical connectivity model and show the optimal constellation deployment given quality-of-service (QoS) requirements. For the same number of deployed LEO satellites, the proposed constellation outperforms its existing counterparts; in particular, it achieves 25%-45% performance improvements in the average coverage rate.
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and adapt to the integration of computing power and networks in various scenarios, such as user-oriented services, government and enterprise services, and open computing power offerings.
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues concerning computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented as well.
Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential of acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. The resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, index computation, and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
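Acoustic indices of the kind computed on-device are typically simple functions of the short-time spectrum. A minimal sketch of one common index, spectral entropy, shown in NumPy rather than microcontroller C purely for illustration (the paper does not specify which indices are used):

```python
import numpy as np

def spectral_entropy(frame: np.ndarray) -> float:
    """Spectral entropy of one audio frame -- a common acoustic index.

    High values indicate a flat, noise-like spectrum; low values a tonal
    one. Computed from the FFT magnitude, as an edge device would."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    p = spectrum / spectrum.sum()          # normalize to a distribution
    p = p[p > 0]                           # avoid log(0)
    return float(-(p * np.log2(p)).sum() / np.log2(len(spectrum)))
```

Transmitting one such scalar per frame instead of the raw samples is what yields the bandwidth and power savings the abstract reports.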
The computation resources at a single node in Edge Computing (EC) are commonly limited and cannot execute large-scale computation tasks. To meet this challenge, an Offloading scheme leveraging NEighboring node Resources (ONER) for EC over Fiber-Wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes in a wider area without increasing the number of wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Therefore, the ONER scheme and the CRS algorithm can schedule computation resources at neighboring edge computing nodes for offloading to meet the challenge of large-scale computation tasks.
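Offloading decisions in schemes of this kind typically weigh transmission time against remote computing time. A toy sketch of picking the neighboring node that minimizes estimated completion time (the cost model and all parameters are hypothetical, not the paper's CRS algorithm):

```python
from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    rate_mbps: float   # link rate to this node over the FiWi network
    cpu_ghz: float     # available CPU
    queue_s: float     # current queueing delay, seconds

def best_node(task_mbits: float, task_gcycles: float,
              nodes: list[EdgeNode]) -> EdgeNode:
    """Pick the neighboring node with the smallest estimated completion
    time = transmission + queueing + computing (a generic model)."""
    def completion(n: EdgeNode) -> float:
        return task_mbits / n.rate_mbps + n.queue_s + task_gcycles / n.cpu_ghz
    return min(nodes, key=completion)
```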
Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications in satellite communication networks (SCNs). By deploying edge computing servers in satellites and gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, while two different satellite edge computing scenarios and local execution are considered. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem. A game-theoretic and many-to-one matching-theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed method achieves almost the same weighted-sum latency as the brute-force method, with low complexity.
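Many-to-one matching of tasks to servers is commonly solved with a deferred-acceptance style procedure. A generic sketch under the assumption of complete preference lists (the paper's JCCRA-GM scheme adds game-theoretic refinements not shown here):

```python
def many_to_one_match(task_prefs, server_prefs, capacity):
    """Deferred acceptance: tasks propose to servers in preference order;
    each server keeps its top proposals up to capacity.

    task_prefs:   {task: [servers, best first]}
    server_prefs: {server: [tasks, best first]}  (complete lists assumed)
    capacity:     {server: int}
    """
    free = list(task_prefs)                 # tasks still unmatched
    next_idx = {t: 0 for t in task_prefs}   # next server each task proposes to
    matched = {s: [] for s in server_prefs}
    while free:
        t = free.pop()
        if next_idx[t] >= len(task_prefs[t]):
            continue                        # task exhausted its list
        s = task_prefs[t][next_idx[t]]
        next_idx[t] += 1
        matched[s].append(t)
        if len(matched[s]) > capacity[s]:
            rank = server_prefs[s].index
            worst = max(matched[s], key=rank)  # reject least-preferred task
            matched[s].remove(worst)
            free.append(worst)
    return matched
```

The result is a stable matching: no task-server pair would both prefer to deviate from it, which is why matching theory suits distributed offloading.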
In an MEC-enabled vehicular network with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of the SINR into the calculation of the maximum row sum of a matrix, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
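The row-sum conversion rests on a standard fact: the spectral radius of a matrix is bounded above by its maximum absolute row sum (the induced infinity norm, per Gershgorin's theorem), so a cheap row-sum check can stand in for an eigenvalue computation. A small illustration with a toy interference matrix:

```python
import numpy as np

def row_sum_bound(G: np.ndarray) -> float:
    """Upper bound on the spectral radius of a matrix G (e.g., a
    normalized interference/gain matrix): the maximum absolute row sum.
    Far cheaper than an eigendecomposition, hence useful per cluster."""
    return float(np.abs(G).sum(axis=1).max())

rng = np.random.default_rng(1)
G = rng.random((6, 6)) * 0.1                # toy nonnegative interference matrix
rho = max(abs(np.linalg.eigvals(G)))        # true spectral radius
assert rho <= row_sum_bound(G) + 1e-12      # the bound always holds
```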
With the rapid development of wireless networks, Ad Hoc networks are widely used in many fields, but current network security solutions for Ad Hoc networks are not competitive enough. The critical technology for Ad Hoc network applications is therefore how to implement the security scheme. Here the discussion focuses on a specific solution to the security threats that Ad Hoc networks face, the methodology of a management model that uses trusted computing technology to solve Ad Hoc network security problems, and the analysis and verification of the security of this model.
The development of communication technologies that support traffic-intensive applications presents new challenges in designing a real-time traffic analysis architecture and an accurate method suitable for a wide variety of traffic types. Current traffic analysis methods are executed on the cloud, which requires uploading the traffic data. Fog computing is a more promising way to save bandwidth resources by offloading these tasks to the fog nodes. However, traffic analysis models based on traditional machine learning need to retrain on all traffic data when updating the trained model, which is not suitable for fog computing due to its limited computing power. In this study, we design a novel fog computing based traffic analysis system using broad learning. For one thing, fog computing can provide a distributed architecture that saves bandwidth resources. For another, we use broad learning to incrementally train on the traffic data, which is more suitable for fog computing because it supports incremental model updates without retraining on all data. We implement our system on the Raspberry Pi, and experimental results show that it identifies traffic data accurately with 98% probability. Moreover, our method has a faster training speed than a Convolutional Neural Network (CNN).
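Broad learning replaces deep stacking with a wide, flat layer of random feature and enhancement nodes whose output weights have a closed form. A minimal sketch of the basic training step (dimensions and regularization are illustrative; the pseudo-inverse machinery for incremental node addition in a full Broad Learning System is omitted):

```python
import numpy as np

def bls_fit(X, Y, n_feature=40, n_enhance=100, reg=1e-3, seed=0):
    """Train a basic broad-learning model: random feature nodes Z,
    nonlinear enhancement nodes H, output weights by ridge regression.
    Because added nodes only extend A column-wise, updates can reuse the
    existing pseudo-inverse, which keeps retraining cheap on fog nodes."""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = X @ Wf                                   # feature nodes
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                          # enhancement nodes
    A = np.hstack([Z, H])
    # Closed-form ridge solution: W = (A^T A + reg I)^-1 A^T Y
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W
```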
The rapid development of information technology has fueled an ever-increasing demand for ultrafast and ultralow-energy-consumption computing. Existing computing instruments are predominantly electronic processors, which use electrons as information carriers and possess a von Neumann architecture featuring the physical separation of storage and processing. The scaling of computing speed is limited not only by data transfer between memory and processing units, but also by the RC delay associated with integrated circuits. Moreover, excessive heating due to Ohmic losses is becoming a severe bottleneck for both speed and power consumption scaling. Using photons as information carriers is a promising alternative. Owing to the weak third-order optical nonlinearity of conventional materials, building integrated photonic computing chips under the traditional von Neumann architecture has been a challenge. Here, we report a new all-optical computing framework that realizes ultrafast and ultralow-energy-consumption all-optical computing based on convolutional neural networks. The device is constructed from cascaded silicon Y-shaped waveguides with side-coupled silicon waveguide segments, which we term "weight modulators", to enable complete phase and amplitude control in each waveguide branch. The generic device concept can be used for equation solving and multifunctional logic operations, as well as many other mathematical operations. Multiple computing functions, including transcendental equation solvers, multifarious logic gate operators, and half-adders, were experimentally demonstrated to validate the all-optical computing performance. The time-of-flight of light through the network structure corresponds to an ultrafast computing time of the order of several picoseconds, with an ultralow energy consumption of dozens of femtojoules per bit. Our approach can be further expanded to fulfill other complex computing tasks based on non-von Neumann architectures and thus paves a new way for on-chip all-optical computing.
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Since virtualization technology enables the abstraction and sharing of resources in a flexible management way, the overall expenses of network deployment can be significantly reduced. Therefore, the technology has been widely applied in the core network. With the tremendous growth in mobile traffic and services, it is natural to extend virtualization technology to cloud computing based radio access networks (CC-RANs) for achieving high spectral efficiency at low cost. In this paper, the virtualization technologies in CC-RANs are surveyed, including the system architecture, key enabling techniques, challenges, and open issues. The key enabling technologies for virtualization in CC-RANs, mainly including virtual resource allocation, radio access network (RAN) slicing, mobility management, and social awareness, are comprehensively surveyed with respect to the isolation, customization, and high-efficiency utilization of radio resources. The challenges and open issues mainly concern virtualization levels for CC-RANs, signaling design for CC-RAN virtualization, performance analysis for CC-RAN virtualization, and network security for virtualized CC-RANs.
The problem of joint radio and cloud resource allocation is studied for heterogeneous mobile cloud computing networks. The objective of the proposed joint resource allocation schemes is to maximize the total utility of users as well as satisfy the required quality of service (QoS), such as the end-to-end response latency experienced by each user. We formulate the problem of joint resource allocation as a combinatorial optimization problem. Three evolutionary approaches are considered to solve the problem: genetic algorithm (GA), ant colony optimization with genetic algorithm (ACO-GA), and quantum genetic algorithm (QGA). To decrease the time complexity, we propose a mapping process between the resource allocation matrix and the chromosomes of GA, ACO-GA, and QGA, search the available radio and cloud resource pairs based on the resource availability matrixes for ACO-GA, and encode the difference between the allocated resources and the minimum resource requirement for QGA. Extensive simulation results show that our proposed methods greatly outperform the existing algorithms in terms of running time, accuracy of the final results, total utility, resource utilization, and end-to-end response latency guarantees.
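A mapping between an allocation matrix and a flat chromosome is the usual way to make matrix-valued decisions evolvable. A toy sketch where shapes and the one-resource-per-user encoding are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

N_USERS, N_RESOURCES = 4, 6

def encode(allocation: np.ndarray) -> np.ndarray:
    """Flatten a user x resource assignment matrix into a chromosome:
    gene i holds the index of the resource assigned to user i."""
    return allocation.argmax(axis=1)

def decode(chromosome: np.ndarray) -> np.ndarray:
    """Rebuild the one-hot allocation matrix from a chromosome, so that
    fitness is evaluated on the matrix while crossover and mutation
    operate on the short gene string -- the source of the speedup."""
    allocation = np.zeros((N_USERS, N_RESOURCES), dtype=int)
    allocation[np.arange(N_USERS), chromosome] = 1
    return allocation

chrom = np.array([2, 0, 5, 1])          # each user gets one resource
assert np.array_equal(encode(decode(chrom)), chrom)
```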
As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the enhanced file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from varied nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing a standard particle swarm optimization (PSO) method and a greedy multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
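The greedy stage of a multiple-knapsack cache placement can be sketched simply: rank files by the value they add per unit of cache space and fill each node until its space runs out. A generic sketch with hypothetical per-file sizes and values (not the paper's exact MKCP formulation):

```python
def greedy_cache(files, nodes):
    """files: list of (name, size, value), value being, e.g., popularity gain;
    nodes: {node: free_capacity}. Place the densest files first, as the
    greedy stage of a multiple-knapsack heuristic would."""
    placement = {n: [] for n in nodes}
    for name, size, value in sorted(files, key=lambda f: f[2] / f[1], reverse=True):
        # put the file on the first node that still has room for it
        for n in nodes:
            if nodes[n] >= size:
                placement[n].append(name)
                nodes[n] -= size
                break
    return placement

files = [("f1", 2, 9.0), ("f2", 3, 8.0), ("f3", 1, 2.0)]
print(greedy_cache(files, {"sat": 3, "bs": 3}))
# {'sat': ['f1', 'f3'], 'bs': ['f2']}
```

In the paper's pipeline, PSO would search the continuous partitioning parameters while a greedy step like this resolves the discrete placement inside each candidate.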
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. First, it reviews current investigations into computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed-numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming the barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
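One such nontrivial decision, choosing the space of input functions for neural operator learning, is commonly resolved by sampling from a Gaussian random field. A minimal sketch (the RBF kernel and length scale are illustrative choices, not the paper's exact setup):

```python
import numpy as np

def sample_input_functions(n_points=128, n_funcs=8, length_scale=0.2, seed=0):
    """Draw random input functions u(x) on [0, 1] from a zero-mean
    Gaussian random field with an RBF covariance -- one common choice
    of input space when training neural operators such as DeepONet."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(n_points))  # jitter for stability
    return x, (L @ rng.standard_normal((n_points, n_funcs))).T

x, U = sample_input_functions()
print(U.shape)   # (8, 128): eight sampled functions on the grid
```

The length scale controls how oscillatory the sampled functions are, which is exactly the kind of judgment call the experiments were designed to force.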