Journal Articles
948 articles found
1. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited by: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 863-879 (17 pages).
Abstract: Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
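The entry above enhances Dueling-DDQN with prioritized experience replay. The sketch below shows only a minimal proportional prioritized replay buffer to illustrate that mechanism; the class name, capacity, and the alpha/beta hyperparameters are assumptions, not the authors' d3-DDPG implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""
    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are sampled at least once.
        max_p = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=32, beta=0.4):
        p = self.priorities[:len(self.data)] ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps
```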
2. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, Issue 1, pp. 60-70 (11 pages).
Abstract: Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue, and the service efficiency and energy consumption as cost, thus formulating a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between multiple containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
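RL-GCN above extracts features of the association relationships between containers with a GCN. The following is a minimal single-layer GCN propagation step using the standard symmetric normalization rule; the toy adjacency matrix and feature/weight dimensions are illustrative assumptions, not the paper's model.

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))   # symmetric normalization
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights, 0.0)

# Toy container cluster: 4 containers in a chain, 3-dimensional resource features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)      # e.g. CPU, memory, bandwidth demands per container
W = np.random.rand(3, 8)      # layer weights (random here, learned in practice)
H = gcn_layer(A, X, W)        # per-container embeddings fed to the actor-critic
print(H.shape)                # (4, 8)
```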
3. Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa, Mohammad Shahid Husain, Abakar Ibraheem Abdalla Aadam, Mohammed Waseequ Sheraz, Mohammad Zunnun Khan. Computers, Materials & Continua, 2025, Issue 7, pp. 1169-1187 (19 pages).
Abstract: Smart edge computing (SEC) is a novel paradigm for computing that could transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and optimize computation offloading decisions dynamically. IoT applications that require significant resources can benefit from SEC, which has better coverage. Although access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) situations. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) into 5G HNs, has been proposed in this paper. As a result, a unique dual schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall unloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
Keywords: smart edge computing; heterogeneous networks; blockchain; 5G network; Internet of Things; artificial intelligence
4. VHO Algorithm for Heterogeneous Networks of UAV-Hangar Cluster Based on GA Optimization and Edge Computing
Authors: Siliang Chen, Dongri Shan, Yansheng Niu. Computers, Materials & Continua, 2025, Issue 12, pp. 5263-5286 (24 pages).
Abstract: With the increasing deployment of Unmanned Aerial Vehicle-Hangar (UAV-H) clusters in dynamic environments such as disaster response and precision agriculture, existing networking schemes often struggle with adaptability to complex scenarios, while traditional Vertical Handoff (VHO) algorithms fail to fully address the unique challenges of UAV-H systems, including high-speed mobility and limited computational resources. To bridge this gap, this paper proposes a heterogeneous network architecture integrating 5th Generation Mobile Communication Technology (5G) cellular networks and self-organizing mesh networks for UAV-H clusters, accompanied by a novel VHO algorithm. The proposed algorithm leverages Multi-Attribute Decision-Making (MADM) theory combined with Genetic Algorithm (GA) optimization, incorporating edge computing to enable real-time decision-making and offload computational tasks efficiently. By constructing a utility function through attribute and weight matrices, the algorithm ensures UAV-H clusters dynamically select the optimal network access with the highest utility value. Simulation results demonstrate that the proposed method reduces network handoff times by 26.13% compared to the Decision Tree VHO (DT-VHO), effectively mitigating the ping-pong effect, and enhancing total system throughput by 19.99% under the same conditions. In terms of handoff delay, it outperforms the Artificial Neural Network VHO (ANN-VHO), significantly improving the Quality of Service (QoS). Finally, real-world hardware platform experiments validate the algorithm's feasibility and superior performance in practical UAV-H cluster operations. This work provides a robust solution for seamless network connectivity in high-mobility UAV clusters, offering critical support for emerging applications requiring reliable and efficient wireless communication.
Keywords: vertical handoff; heterogeneous networks; genetic algorithm; multiple-attribute decision-making; unmanned aerial vehicle; edge computing
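The VHO algorithm above scores candidate networks with a utility function built from attribute and weight matrices (MADM). A simple-additive-weighting sketch of that idea follows; the attribute values, weights, and benefit/cost split are invented for illustration, and in the paper the weights would come from the GA optimization rather than being fixed here.

```python
import numpy as np

def madm_utility(attributes, weights, benefit_mask):
    """Simple additive weighting: normalize each attribute column, weight it, and sum."""
    a = attributes.astype(float).copy()
    for j in range(a.shape[1]):
        col = a[:, j]
        span = col.max() - col.min() or 1.0
        if benefit_mask[j]:               # larger is better (e.g. bandwidth, signal strength)
            a[:, j] = (col - col.min()) / span
        else:                             # smaller is better (e.g. delay, monetary cost)
            a[:, j] = (col.max() - col) / span
    return a @ weights

# Rows: candidate networks (5G cell, mesh link); columns: bandwidth, delay, cost (made-up values).
attr = np.array([[80.0, 25.0, 0.8],
                 [30.0, 10.0, 0.1]])
w = np.array([0.5, 0.3, 0.2])             # attribute weights, e.g. tuned by a GA
util = madm_utility(attr, w, benefit_mask=[True, False, False])
print("select network", int(util.argmax()))   # access network with the highest utility
```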
5. Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
Authors: Cheng Kaijun, Fang Xuming. China Communications, 2025, Issue 9, pp. 352-367 (16 pages).
Abstract: With miscellaneous applications generated in vehicular networks, the computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network in which tasks are processed at the local side, at the macro-cell base station, or at the road side unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
Keywords: dual-band cooperation; edge computing; resource allocation; task processing; vehicular network
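The scheme above weighs latency against energy over local, base-station, and road-side-unit processing through the LF or HF band. The toy enumeration below shows only the structure of such a weighted utility choice for one task; every number and the linear rate/power model are placeholder assumptions, not the paper's system model or its iterative optimizer.

```python
# Candidate processing options for a single task of 2.0 Mbit / 1.0 Gigacycles.
# rate in Mbit/s, transmit power in W, remote CPU in Gigacycles/s (all illustrative).
options = {
    "local":  {"rate": None,  "p_tx": 0.0, "cpu": 0.5},
    "MBS_LF": {"rate": 10.0,  "p_tx": 0.2, "cpu": 4.0},
    "MBS_HF": {"rate": 80.0,  "p_tx": 0.4, "cpu": 4.0},
    "RSU_HF": {"rate": 120.0, "p_tx": 0.4, "cpu": 2.0},
}
DATA_MBIT, CYCLES_G, KAPPA = 2.0, 1.0, 0.3   # task size, workload, local energy coefficient
W_T, W_E = 0.7, 0.3                          # latency / energy weights in the utility

def utility(opt):
    t_tx = 0.0 if opt["rate"] is None else DATA_MBIT / opt["rate"]   # upload time
    t_cmp = CYCLES_G / opt["cpu"]                                    # computing time
    energy = KAPPA * CYCLES_G if opt["rate"] is None else opt["p_tx"] * t_tx
    return W_T * (t_tx + t_cmp) + W_E * energy

best = min(options, key=lambda k: utility(options[k]))
print(best, round(utility(options[best]), 3))
```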
6. Joint Computing and Communication Resource Allocation for Satellite Communication Networks with Edge Computing (Cited by: 14)
Authors: Shanghong Zhang, Gaofeng Cui, Yating Long, Weidong Wang. China Communications (SCIE, CSCD), 2021, Issue 7, pp. 236-252 (17 pages).
Abstract: Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications for satellite communication networks (SCNs). By deploying edge computing servers in satellites and gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, while two different satellite edge computing scenarios and local execution are considered. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem. A game-theoretic and many-to-one matching theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed method, with low complexity, can achieve almost the same weighted-sum latency as the brute-force method.
Keywords: satellite communication networks; edge computing; resource allocation; matching theory
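JCCRA-GM above relies on many-to-one matching between tasks and edge computing servers. A generic task-proposing deferred-acceptance sketch is given below; the preference lists and server quotas are invented, and the paper's actual utilities and game-theoretic step are not reproduced.

```python
def many_to_one_matching(task_prefs, server_prefs, quotas):
    """Task-proposing deferred acceptance (Gale-Shapley, many-to-one variant)."""
    matched = {s: [] for s in server_prefs}       # server -> tasks currently accepted
    next_choice = {t: 0 for t in task_prefs}      # index of the next server each task proposes to
    free = list(task_prefs)
    while free:
        t = free.pop(0)
        if next_choice[t] >= len(task_prefs[t]):
            continue                              # task exhausted its list, stays unmatched
        s = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        matched[s].append(t)
        if len(matched[s]) > quotas[s]:
            # Reject the least-preferred task currently held by server s.
            worst = max(matched[s], key=lambda x: server_prefs[s].index(x))
            matched[s].remove(worst)
            free.append(worst)
    return matched

tasks = {"t1": ["sat", "gw"], "t2": ["sat", "gw"], "t3": ["gw", "sat"]}
servers = {"sat": ["t2", "t1", "t3"], "gw": ["t1", "t3", "t2"]}
print(many_to_one_matching(tasks, servers, {"sat": 1, "gw": 2}))
```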
7. Wireless Acoustic Sensor Networks and Edge Computing for Rapid Acoustic Monitoring (Cited by: 7)
Authors: Zhengguo Sheng, Saskia Pfersich, Alice Eldridge, Jianshan Zhou, Daxin Tian, Victor C. M. Leung. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, Issue 1, pp. 64-74 (11 pages).
Abstract: Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential of acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. Resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, indices computation and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
Keywords: acoustic sensor networks; edge computing; energy efficiency
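The sensor nodes above compute acoustic indices from FFT frames instead of transmitting raw audio. As one concrete example of such an index (not necessarily the one used in the paper), the sketch below computes a spectral-entropy-style index from short-time FFT magnitudes; the frame length and sample rate are assumptions.

```python
import numpy as np

def spectral_entropy_index(signal, frame_len=1024):
    """Average normalized spectral entropy over FFT frames (one cheap acoustic index)."""
    n_frames = len(signal) // frame_len
    entropies = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        mag = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        p = mag / (mag.sum() + 1e-12)            # treat the spectrum as a probability distribution
        h = -np.sum(p * np.log2(p + 1e-12))      # Shannon entropy of the spectrum
        entropies.append(h / np.log2(len(p)))    # normalize to [0, 1]
    return float(np.mean(entropies))

fs = 16000                                        # assumed sample rate
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 2000 * t)               # pure tone -> low entropy
noise = np.random.randn(fs)                       # broadband noise -> high entropy
print(spectral_entropy_index(tone), spectral_entropy_index(noise))
```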
8. An Offloading Scheme Leveraging on Neighboring Node Resources for Edge Computing over Fiber-Wireless (FiWi) Access Networks (Cited by: 3)
Authors: Wei Chang, Yihong Hu, Guochu Shou, Yaqiong Liu, Zhigang Guo. China Communications (SCIE, CSCD), 2019, Issue 11, pp. 107-119 (13 pages).
Abstract: The computation resources at a single node in Edge Computing (EC) are commonly limited and cannot execute large-scale computation tasks. To face this challenge, an Offloading scheme leveraging on NEighboring node Resources (ONER) for EC over Fiber-Wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes over a wider area without increasing wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Therefore, the ONER scheme and the CRS algorithm can schedule computation resources at neighboring edge computing nodes for offloading to meet the challenge of large-scale computation tasks.
Keywords: edge computing; offloading; fiber-wireless access networks; delay
9. Analysis and Optimization on Partition-Based Caching and Delivery in Satellite-Terrestrial Edge Computing Networks (Cited by: 4)
Authors: Peng Wang, Xing Zhang, Jiaxin Zhang, Shuang Zheng, Wenhao Liu. China Communications (SCIE, CSCD), 2023, Issue 3, pp. 252-285 (34 pages).
Abstract: As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the enhanced, powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based cache and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference effect from varied nodes at different geographical distances, we derive the file successful transmission probability of the typical user by utilizing tools from stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based cache and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing a standard particle swarm optimization (PSO) method and a greedy-based multiple knapsack choice problem (MKCP) optimization method. Numerical results show that compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5% and 13.7%, respectively.
Keywords: edge computing; satellite-terrestrial networks; caching deployment; stochastic geometry; 6G networks
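The cache placement above is solved with PSO plus a greedy multiple-knapsack step. The fragment below illustrates only the greedy knapsack idea: files ranked by popularity per unit size are packed into each node's cache budget. The popularities, sizes, and capacities are made-up numbers, not the paper's parameters.

```python
def greedy_cache_placement(files, capacities):
    """files: {name: (popularity, size)}; capacities: {node: cache_budget}."""
    # Rank files by popularity density (popularity per unit of cache space).
    ranked = sorted(files, key=lambda f: files[f][0] / files[f][1], reverse=True)
    placement, free = {n: [] for n in capacities}, dict(capacities)
    for f in ranked:
        pop, size = files[f]
        # Put the file on the node with the most remaining space that can hold it.
        candidates = [n for n in free if free[n] >= size]
        if candidates:
            node = max(candidates, key=lambda n: free[n])
            placement[node].append(f)
            free[node] -= size
    return placement

catalog = {"f1": (0.40, 3), "f2": (0.25, 2), "f3": (0.20, 2), "f4": (0.15, 4)}
print(greedy_cache_placement(catalog, {"satellite": 4, "terrestrial_bs": 5}))
```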
10. Joint Optimization of Latency and Energy Consumption for Mobile Edge Computing Based Proximity Detection in Road Networks (Cited by: 1)
Authors: Tongyu Zhao, Yaqiong Liu, Guochu Shou, Xinwei Yao. China Communications (SCIE, CSCD), 2022, Issue 4, pp. 274-290 (17 pages).
Abstract: In recent years, artificial intelligence and the automotive industry have developed rapidly, and autonomous driving has gradually become the focus of the industry. In road networks, the problem of proximity detection refers to detecting whether two moving objects are close to each other or not in real time. However, the battery life and computing capability of mobile devices are limited in actual scenes, which results in high latency and energy consumption. Therefore, it is a tough problem to determine the proximity relationship between mobile users with low latency and energy consumption. In this article, we aim at finding a tradeoff between latency and energy consumption. We formalize the computation offloading problem based on mobile edge computing (MEC) as a constrained multiobjective optimization problem (CMOP) and utilize NSGA-II to solve it. The simulation results demonstrate that NSGA-II can find the Pareto set, which reduces the latency and energy consumption effectively. In addition, the large number of solutions provided by the Pareto set gives us more choices of the offloading decision according to the actual situation.
Keywords: proximity detection; mobile edge computing; road networks; constrained multiobjective optimization
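The work above casts offloading as a constrained multi-objective problem and uses NSGA-II to obtain a Pareto set of (latency, energy) trade-offs. The snippet below illustrates only the Pareto-dominance filter at the heart of such methods, applied to randomly generated candidate decisions; it is not the paper's NSGA-II implementation.

```python
import numpy as np

def pareto_front(points):
    """Return indices of non-dominated points (minimize every objective)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Each candidate offloading decision evaluated as (latency in ms, energy in mJ), invented values.
rng = np.random.default_rng(0)
objs = rng.uniform([10, 5], [100, 50], size=(30, 2))
idx = pareto_front(objs)
print("Pareto-optimal decisions:", idx)
print(objs[idx])
```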
11. Mobility-driven user-centric AP clustering in mobile edge computing-based ultra-dense networks (Cited by: 1)
Authors: Shuxin He, Tianyu Wang, Shaowei Wang. Digital Communications and Networks (SCIE), 2020, Issue 2, pp. 210-216 (7 pages).
Abstract: The Ultra-Dense Network (UDN) has been envisioned as a promising technology to provide high-quality wireless connectivity in dense urban areas, in which the density of Access Points (APs) is increased up to the point where it is comparable with or surpasses the density of active mobile users. In order to mitigate inter-AP interference and improve spectrum efficiency, APs in UDNs are usually clustered into multiple groups to serve different mobile users, respectively. However, as the number of APs increases, the computational capability within an AP group has become the bottleneck of AP clustering. In this paper, we first propose a novel UDN architecture based on Mobile Edge Computing (MEC), in which each MEC server is associated with a user-centric AP cluster to act as a mobile agent. In addition, in the context of the MEC-based UDN, we leverage mobility prediction techniques to achieve a dynamic AP clustering scheme, in which the cluster structure can automatically adapt to the dynamic distribution of user traffic in a specific area. Simulation results show that the proposed scheme can greatly increase the average user throughput compared with the baseline algorithm using max-SINR user association and equal bandwidth allocation, while guaranteeing low transmission delay at the same time.
Keywords: AP clustering; dynamic user traffic; mobile edge computing; mobility-driven ultra-dense networks
12. Security Implications of Edge Computing in Cloud Networks (Cited by: 2)
Authors: Sina Ahmadi. Journal of Computer and Communications, 2024, Issue 2, pp. 26-46 (21 pages).
Abstract: Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings have shown that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, low efficiency, etc. Therefore, there is a need to implement proper security measures to overcome these issues. Using emerging trends, like machine learning, encryption, artificial intelligence, real-time monitoring, etc., can help mitigate security issues. They can also develop a secure and safe future in cloud computing. It was concluded that the security implications of edge computing can easily be covered with the help of new technologies and techniques.
Keywords: edge computing; cloud networks; artificial intelligence; machine learning; cloud security
13. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. China Communications (SCIE, CSCD), 2024, Issue 3, pp. 230-246 (17 pages).
Abstract: Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
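The offloading decision above is driven by an upper confidence bound (UCB) bandit. A plain UCB1 selection loop over candidate offloading targets is sketched below; the reward model (negative noisy delay) and the target names are assumptions for illustration, and the paper's device-cooperation aid and Lagrangian allocation are not reproduced.

```python
import math, random

targets = ["local", "LEO_sat_1", "LEO_sat_2"]            # candidate offloading targets (assumed)
counts = {a: 0 for a in targets}
mean_reward = {a: 0.0 for a in targets}

def observe_delay(action):
    # Stand-in environment: unknown mean delay per target plus noise, in seconds.
    base = {"local": 0.9, "LEO_sat_1": 0.5, "LEO_sat_2": 0.7}[action]
    return max(0.05, random.gauss(base, 0.1))

for t in range(1, 501):
    def score(a):
        # UCB1: exploit the empirical mean, explore under-sampled arms.
        if counts[a] == 0:
            return float("inf")
        return mean_reward[a] + math.sqrt(2 * math.log(t) / counts[a])
    action = max(targets, key=score)
    reward = -observe_delay(action)                       # smaller delay = larger reward
    counts[action] += 1
    mean_reward[action] += (reward - mean_reward[action]) / counts[action]

print(max(targets, key=lambda a: counts[a]))              # most frequently selected target
```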
14. Energy-optimal DNN model placement in UAV-enabled edge computing networks
Authors: Jianhang Tang, Guoquan Wu, Mohammad Mussadiq Jalalzai, Lin Wang, Bing Zhang, Yi Zhou. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 4, pp. 827-836 (10 pages).
Abstract: Unmanned aerial vehicle (UAV)-enabled edge computing is emerging as a potential enabler for the Artificial Intelligence of Things (AIoT) in the forthcoming sixth-generation (6G) communication networks. With the use of flexible UAVs, massive sensing data is gathered and processed promptly regardless of geographical location. Deep neural networks (DNNs) are becoming a driving force to extract valuable information from sensing data. However, the lightweight servers installed on UAVs are not able to meet the extremely high requirements of inference tasks due to the limited battery capacities of UAVs. In this work, we investigate a DNN model placement problem for AIoT applications, where trained DNN models are selected and placed on UAVs to execute inference tasks locally. It is impractical to obtain future DNN model request profiles and system operation states in UAV-enabled edge computing, so the Lyapunov optimization technique is leveraged for the proposed DNN model placement problem. Based on the observed system overview, an advanced online placement (AOP) algorithm is developed to solve the transformed problem in each time slot, which can reduce DNN model transmission delay and disk I/O energy cost simultaneously while keeping the input data queues stable. Finally, extensive simulations are provided to depict the effectiveness of the AOP algorithm. The numerical results demonstrate that the AOP algorithm can reduce the model placement cost by 18.14% and the input data queue backlog by 29.89% on average compared with benchmark algorithms.
Keywords: UAV-enabled edge computing; DNN model placement; 6G networks; inference tasks
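The AOP algorithm above is built on Lyapunov optimization, trading per-slot placement cost against queue stability. The toy slot update below shows only the generic drift-plus-penalty decision for a single model and queue; the cost numbers, arrival process, and trade-off parameter V are invented and do not reflect the paper's model.

```python
import random

V = 5.0                  # Lyapunov trade-off: larger V favors low cost over short queues
queue = 0.0              # input data queue backlog (e.g. pending inference inputs)

for slot in range(20):
    arrivals = random.uniform(0.0, 2.0)          # data arriving this slot (assumed)
    # Two candidate actions: keep the model placed (serve fast, pay transfer/energy cost)
    # or skip placement (cheap, but the backlog keeps growing).
    actions = {"place": {"service": 3.0, "cost": 1.5},
               "skip":  {"service": 0.0, "cost": 0.2}}
    # Drift-plus-penalty: minimize V*cost - queue*service in every slot.
    best = min(actions, key=lambda a: V * actions[a]["cost"] - queue * actions[a]["service"])
    served = actions[best]["service"]
    queue = max(queue + arrivals - served, 0.0)
    print(f"slot {slot:2d} action={best:5s} backlog={queue:5.2f}")
```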
15. Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited by: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. Nano-Micro Letters, 2025, Issue 11, pp. 1-20 (20 pages).
Abstract: The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
16. A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
Authors: Sreebha Bhaskaran, Supriya Muthuraman. Computers, Materials & Continua, 2025, Issue 6, pp. 5037-5070 (34 pages).
Abstract: Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It also evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes, and proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and concludes that SDN-enabled computing environments offer essential guidance for addressing upcoming management challenges.
Keywords: cloud computing; edge computing; fog computing; resource provisioning; resource allocation; computation offloading; optimization techniques; software defined network
17. MATD3-Based End-Edge Collaborative Resource Optimization for NOMA-Assisted Industrial Wireless Networks
Authors: Ru Hao, Chi Xu, Jing Liu. Computers, Materials & Continua, 2025, Issue 2, pp. 3203-3222 (20 pages).
Abstract: Non-orthogonal multiple access (NOMA) technology has recently been widely integrated into multi-access edge computing (MEC) to support task offloading in industrial wireless networks (IWNs) with limited radio resources. This paper minimizes the system overhead, in terms of task processing delay and energy consumption, for an IWN with hybrid NOMA and orthogonal multiple access (OMA) schemes. Specifically, we formulate the system overhead minimization (SOM) problem by considering the limited computation and communication resources and NOMA efficiency. To solve the complex mixed-integer non-convex problem, we combine the multi-agent twin delayed deep deterministic policy gradient (MATD3) algorithm with convex optimization, namely MATD3-CO, for iterative optimization. We first decouple SOM into two sub-problems, i.e., a joint sub-channel allocation and task offloading sub-problem and a computation resource allocation sub-problem. Then, we propose MATD3 to optimize the sub-channel allocation and task offloading ratio, and employ convex optimization to allocate the computation resource with a closed-form expression derived from the Karush-Kuhn-Tucker (KKT) conditions. The solution is obtained by iteratively solving these two sub-problems. The experimental results indicate that the MATD3-CO scheme, when compared to the benchmark schemes, significantly decreases the system overhead with respect to both delay and energy consumption.
Keywords: industrial wireless networks (IWNs); multi-access edge computing (MEC); non-orthogonal multiple access (NOMA); task offloading; resource allocation
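MATD3-CO above allocates edge computation resources with a closed-form expression derived from the KKT conditions. For the commonly used sub-problem "minimize the weighted sum of computation delays, sum_i w_i c_i / f_i, subject to sum_i f_i = F", stationarity gives f_i proportional to sqrt(w_i c_i). The sketch below implements that textbook case; the exact objective, weights, and capacity are assumptions rather than the paper's derivation.

```python
import math

def kkt_cpu_allocation(weights, cycles, total_f):
    """Closed-form allocation minimizing sum_i w_i*c_i/f_i s.t. sum_i f_i = total_f.
    Stationarity: w_i*c_i/f_i**2 = mu for all i, hence f_i is proportional to sqrt(w_i*c_i)."""
    roots = [math.sqrt(w * c) for w, c in zip(weights, cycles)]
    return [total_f * r / sum(roots) for r in roots]

w = [1.0, 2.0, 0.5]            # task weights (e.g. priorities), illustrative
c = [4e8, 1e8, 2e8]            # required CPU cycles per task, illustrative
F = 3e9                        # edge server capacity in cycles/s, illustrative
f = kkt_cpu_allocation(w, c, F)
print([round(x / 1e9, 3) for x in f])   # per-task CPU shares in GHz, summing to 3.0
```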
18. Intelligent Immunity Based Security Defense System for Multi-Access Edge Computing Network (Cited by: 3)
Authors: Chengcheng Zhou, Yanping Yu, Shengsong Yang, Haitao Xu. China Communications (SCIE, CSCD), 2021, Issue 1, pp. 100-107 (8 pages).
Abstract: In this paper, the security problem for the multi-access edge computing (MEC) network is researched, and an intelligent immunity-based security defense system is proposed to identify unauthorized mobile users and to protect the security of the whole system. In the proposed security defense system, security is protected by intelligent immunity through three functions: an identification function, a learning function, and a regulation function. Meanwhile, a three-process-based intelligent algorithm is proposed for the intelligent immunity system. Numerical simulations are given to prove the effectiveness of the proposed approach.
Keywords: intelligent immunity; security defense; multi-access edge computing; network security
19. A Review in the Core Technologies of 5G: Device-to-Device Communication, Multi-Access Edge Computing and Network Function Virtualization (Cited by: 2)
Authors: Ruixuan Tu, Ruxun Xiang, Yang Xu, Yihan Mei. International Journal of Communications, Network and System Sciences, 2019, Issue 9, pp. 125-150 (26 pages).
Abstract: 5G is a new generation of mobile networking that aims to achieve unparalleled speed and performance. To accomplish this, three technologies, Device-to-Device communication (D2D), multi-access edge computing (MEC), and network function virtualization (NFV) with ClickOS, have been a significant part of 5G, and this paper mainly discusses them. D2D enables direct communication between devices without relaying through the base station. In 5G, a two-tier cellular network composed of a traditional cellular network system and D2D is an efficient method for realizing high-speed communication. MEC offloads work from end devices and cloud platforms to widespread nodes, and connects the nodes together with outside devices and third-party providers, in order to diminish the overloading effect on any device caused by enormous applications and to improve users' quality of experience (QoE). There is also an NFV method intended to fulfill the 5G requirements. In this part, an optimized virtual machine for middleboxes named ClickOS is introduced and evaluated in several aspects. Some middleboxes have been implemented in ClickOS and proved to have outstanding performance.
Keywords: 5th generation network; virtualization; device-to-device communication; base station; direct communication; interference; multi-access edge computing; mobile edge computing
20. An intelligent task offloading algorithm (iTOA) for UAV edge computing network (Cited by: 8)
Authors: Jienan Chen, Siyu Chen, Siyu Luo, Qi Wang, Bin Cao, Xiaoqian Li. Digital Communications and Networks (SCIE), 2020, Issue 4, pp. 433-443 (11 pages).
Abstract: Unmanned Aerial Vehicles (UAVs) have emerged as a promising technology for the support of human activities, such as target tracking, disaster rescue, and surveillance. However, these tasks require a large computation load of image or video processing, which imposes enormous pressure on the UAV computation platform. To solve this issue, in this work we propose an intelligent Task Offloading Algorithm (iTOA) for UAV edge computing networks. Compared with existing methods, iTOA is able to perceive the network's environment intelligently and decide the offloading action based on deep Monte Carlo Tree Search (MCTS), the core algorithm of AlphaGo. MCTS simulates the offloading decision trajectories to acquire the best decision by maximizing the reward, such as lowest latency or power consumption. To accelerate the search convergence of MCTS, we also propose a splitting Deep Neural Network (sDNN) to supply the prior probability for MCTS. The sDNN is trained by a self-supervised learning manager; here, the training data set is obtained from iTOA itself as its own teacher. Compared with game-theory and greedy-search-based methods, the proposed iTOA improves service latency performance by 33% and 60%, respectively.
Keywords: unmanned aerial vehicles (UAVs); mobile edge computing (MEC); intelligent task offloading algorithm (iTOA); Monte Carlo tree search (MCTS); deep reinforcement learning; splitting deep neural network (sDNN)
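iTOA above selects offloading actions with Monte Carlo Tree Search guided by prior probabilities from an sDNN. The fragment below shows only the PUCT-style child-selection rule that such a search uses (as popularized by AlphaGo); the node fields, the uniform priors standing in for the sDNN output, and the constant c_puct are illustrative assumptions.

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior        # P(s, a): prior probability, e.g. from the sDNN
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a)

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(children, c_puct=1.5):
    """PUCT: argmax_a Q(s,a) + c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))."""
    total_visits = sum(ch.visits for ch in children.values())
    def puct(item):
        _, ch = item
        u = c_puct * ch.prior * math.sqrt(total_visits + 1) / (1 + ch.visits)
        return ch.q() + u
    return max(children.items(), key=puct)[0]

# Offloading actions: run locally, offload to the MEC server, offload to a neighbor UAV.
children = {a: Node(prior=1.0 / 3) for a in ("local", "mec", "neighbor_uav")}
children["mec"].visits, children["mec"].value_sum = 4, 2.4   # pretend earlier rollouts
print(select_child(children))   # under-explored actions win despite the lower Q value
```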