Journal Articles
484 articles found
1. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited by: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
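The Dueling-DDQN named in the abstract above splits the Q-function into a state-value stream and an advantage stream before recombining them. Below is a minimal PyTorch sketch of that network head, not the authors' code; the state dimension, action count, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # advantage stream A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

# Q-values for a batch of 4 offloading states with 6 candidate offloading actions.
q = DuelingQNet(state_dim=10, n_actions=6)(torch.randn(4, 10))
print(q.shape)  # torch.Size([4, 6])
```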
2. Providing Robust and Low-Cost Edge Computing in Smart Grid: An Energy Harvesting Based Task Scheduling and Resource Management Framework (Cited by: 1)
Authors: Xie Zhigang, Song Xin, Xu Siyang, Cao Jing. China Communications, 2025, Issue 2, pp. 226-240 (15 pages)
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve the problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
Keywords: edge computing; energy harvesting; energy storage unit; renewable energy; sampling average approximation; task scheduling
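The abstract above notes that the energy-minimization problem decouples into a typical knapsack problem. As a rough illustration of that structure (not the paper's algorithm), the sketch below solves a 0/1 knapsack by dynamic programming; the task weights, values, and capacity are arbitrary placeholders.

```python
def knapsack(weights, values, capacity):
    """Return the best achievable value and the chosen item (task) indices."""
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                       # skip task i-1
            if weights[i - 1] <= c:
                take = dp[i - 1][c - weights[i - 1]] + values[i - 1]
                dp[i][c] = max(dp[i][c], take)            # or offload it
    # Backtrack to recover the selected tasks.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

print(knapsack(weights=[3, 4, 2, 5], values=[40, 50, 25, 60], capacity=8))
# (100, [0, 3])
```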
3. Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning (Cited by: 1)
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu. Computers, Materials & Continua, 2025, Issue 2, pp. 2609-2635 (27 pages)
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
Keywords: edge computing; adaptive; META; task offloading; joint optimization
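NSGA-II, cited in the abstract above for the joint delay/energy optimization, is built on non-dominated sorting. The sketch below shows only the core Pareto-front test over candidate offloading plans scored by (delay, energy), both minimized; the plan values are made-up illustrations, not results from the paper.

```python
def pareto_front(points):
    """Return indices of points not dominated by any other point."""
    front = []
    for i, (d_i, e_i) in enumerate(points):
        dominated = any(
            (d_j <= d_i and e_j <= e_i) and (d_j < d_i or e_j < e_i)
            for j, (d_j, e_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Candidate offloading plans as (delay, energy) pairs -- illustrative numbers only.
plans = [(12.0, 3.1), (10.5, 4.0), (9.8, 5.2), (11.0, 3.0), (13.0, 2.9)]
print(pareto_front(plans))  # [1, 2, 3, 4] -- plan 0 is dominated by plan 3
```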
4. Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited by: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. Nano-Micro Letters, 2025, Issue 11, pp. 1-20 (20 pages)
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, Deep-Seek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: Photonic integrated circuits; edge computing; Aluminum nitride; Neural networks; Wearable sensors
5. A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
Authors: Sreebha Bhaskaran, Supriya Muthuraman. Computers, Materials & Continua, 2025, Issue 6, pp. 5037-5070 (34 pages)
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. The review also evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems. The survey concludes that SDN-enabled computing environments offer essential guidance in addressing upcoming management opportunities.
Keywords: Cloud computing; edge computing; fog computing; resource provisioning; resource allocation; computation offloading; optimization techniques; software defined network
6. An Impact-Aware and Taxonomy-Driven Explainable Machine Learning Framework with Edge Computing for Security in Industrial IoT-Cyber Physical Systems
Authors: Tamara Zhukabayeva, Zulfiqar Ahmad, Nurbolat Tasbolatuly, Makpal Zhartybayeva, Yerik Mardenov, Nurdaulet Karabayev, Dilaram Baumuratova. Computer Modeling in Engineering & Sciences, 2025, Issue 11, pp. 2573-2599 (27 pages)
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In the proposed study, we suggest an Impact-Aware Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also supervised, taxonomy-based classification that groups 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was performed on every model to provide transparency and trust in the model and to identify the important features that drive the classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. The framework is highly accurate, interpretable, and applicable in real time, hence offering a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
Keywords: Industrial IoT; CPS; edge computing; machine learning; XAI; attack taxonomy
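The workflow described above, tree-ensemble classifiers explained with SHAP, can be prototyped in a few lines. The sketch below trains a Random Forest on synthetic data and queries shap.TreeExplainer for per-feature contributions; the data, labels, and feature meanings are placeholders, not the paper's dataset or features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                   # stand-ins for inter-arrival time, TCP flags, ...
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # synthetic "attack vs. benign" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])     # per-feature contribution for 10 records
print(model.score(X, y), np.shape(shap_values)) # training accuracy and SHAP output shape
```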
7. GENOME: Genetic Encoding for Novel Optimization of Malware Detection and Classification in Edge Computing
Authors: Sang-Hoon Choi, Ki-Woong Park. Computers, Materials & Continua, 2025, Issue 3, pp. 4021-4039 (19 pages)
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources. This paper suggests the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance using models such as Random Forest and Logistic Regression, while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. The study emphasizes the potential of GENOME to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations. GENOME offers comprehensive protection for edge computing environments by providing a security solution that is both efficient and scalable.
Keywords: edge computing; IoT security; malware; machine learning; malware classification; malware detection
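The abstract above describes converting malware artifacts into DNA-like sequences. The sketch below shows one plausible 2-bits-per-base encoding and its inverse; this mapping is an assumption for illustration only and is not claimed to be GENOME's actual standard or compressed encoding.

```python
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Map each byte to four bases, two bits per base."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):                  # four 2-bit chunks per byte
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Inverse mapping, so the artifact can be recovered exactly."""
    vals = [BASES.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

sample = b"\x4d\x5a\x90"                            # e.g. first bytes of a PE header
encoded = bytes_to_dna(sample)
print(encoded, dna_to_bytes(encoded) == sample)     # CATCCCGGGCAA True
```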
8. Edge computing aileron mechatronics using antiphase hysteresis Schmitt trigger for fast flutter suppression
Authors: Tangwen Yin, Dan Huang, Xiaochun Zhang. Control Theory and Technology, 2025, Issue 1, pp. 153-160 (8 pages)
An aileron is a crucial control surface for rolling. Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft's stability, maneuverability, safety, and lifespan. This paper presents a robust solution in the form of a fast flutter suppression digital control logic for edge computing aileron mechatronics (ECAM). We have effectively eliminated passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger. Our findings demonstrate that self-tuning nonlinear parameters can optimize stability, robustness, and accuracy. At the same time, the antiphase hysteresis Schmitt trigger effectively rejects flutters without the need for collaborative navigation and guidance. Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring the expected stability and maneuverability. In conclusion, this nonlinear aileron mechatronics with a Schmitt positive feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
Keywords: Aileron; edge computing; Flutter suppression; Mechatronics; Nonlinear hysteresis control; Positive feedback
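The flutter-rejection idea above rests on hysteresis: a Schmitt trigger only switches when the signal crosses an upper or lower threshold, so small oscillations inside the band cannot toggle the output. The Python sketch below illustrates that behaviour; the thresholds and the antiphase convention are assumptions, not the ECAM design values.

```python
class SchmittTrigger:
    def __init__(self, low: float, high: float, antiphase: bool = True):
        self.low, self.high = low, high
        self.antiphase = antiphase
        self.state = False                       # latched logic state

    def step(self, x: float) -> int:
        if x >= self.high:
            self.state = True
        elif x <= self.low:
            self.state = False
        # Otherwise keep the previous state (the hysteresis band).
        out = self.state
        return int(not out) if self.antiphase else int(out)

trig = SchmittTrigger(low=-0.2, high=0.2)
signal = [0.0, 0.15, 0.25, 0.18, 0.05, -0.1, -0.25, -0.05]
print([trig.step(x) for x in signal])            # [1, 1, 0, 0, 0, 0, 1, 1]
```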
9. Privacy Preserving Federated Anomaly Detection in IoT Edge Computing Using Bayesian Game Reinforcement Learning
Authors: Fatima Asiri, Wajdan Al Malwi, Fahad Masood, Mohammed S. Alshehri, Tamara Zhukabayeva, Syed Aziz Shah, Jawad Ahmad. Computers, Materials & Continua, 2025, Issue 8, pp. 3943-3960 (18 pages)
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges. EC has become a transformative paradigm for addressing these challenges, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker and defender interactions for dynamic threat-level adaptation and resource availability, and it models the strategic interplay between attackers and defenders under uncertainty. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves a high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
Keywords: IoT; edge computing; smart homes; anomaly detection; Bayesian game theory; reinforcement learning
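Federated learning, as used in the framework above, keeps raw sensor data on the devices and aggregates only model parameters. The sketch below shows a plain federated-averaging step weighted by local sample counts; the model shapes and client sizes are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (one ndarray per layer)."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Three smart-home devices, each holding a tiny 2-layer model trained locally.
rng = np.random.default_rng(1)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[120, 300, 80])
print([p.shape for p in global_model])           # [(4, 3), (3,)]
```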
10. VHO Algorithm for Heterogeneous Networks of UAV-Hangar Cluster Based on GA Optimization and Edge Computing
Authors: Siliang Chen, Dongri Shan, Yansheng Niu. Computers, Materials & Continua, 2025, Issue 12, pp. 5263-5286 (24 pages)
With the increasing deployment of Unmanned Aerial Vehicle-Hangar (UAV-H) clusters in dynamic environments such as disaster response and precision agriculture, existing networking schemes often struggle with adaptability to complex scenarios, while traditional Vertical Handoff (VHO) algorithms fail to fully address the unique challenges of UAV-H systems, including high-speed mobility and limited computational resources. To bridge this gap, this paper proposes a heterogeneous network architecture integrating 5th Generation Mobile Communication Technology (5G) cellular networks and self-organizing mesh networks for UAV-H clusters, accompanied by a novel VHO algorithm. The proposed algorithm leverages Multi-Attribute Decision-Making (MADM) theory combined with Genetic Algorithm (GA) optimization, incorporating edge computing to enable real-time decision-making and offload computational tasks efficiently. By constructing a utility function through attribute and weight matrices, the algorithm ensures that UAV-H clusters dynamically select the network access with the highest utility value. Simulation results demonstrate that the proposed method reduces network handoff times by 26.13% compared with the Decision Tree VHO (DT-VHO), effectively mitigating the ping-pong effect, and enhances total system throughput by 19.99% under the same conditions. In terms of handoff delay, it outperforms the Artificial Neural Network VHO (ANN-VHO), significantly improving the Quality of Service (QoS). Finally, real-world hardware platform experiments validate the algorithm's feasibility and superior performance in practical UAV-H cluster operations. This work provides a robust solution for seamless network connectivity in high-mobility UAV clusters, offering critical support for emerging applications requiring reliable and efficient wireless communication.
Keywords: Vertical handoff; heterogeneous networks; genetic algorithm; multiple-attribute decision-making; unmanned aerial vehicle; edge computing
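The utility function built from attribute and weight matrices, as described above, follows the usual multi-attribute decision-making pattern. The sketch below scores candidate networks by simple additive weighting; the attributes, weights, and the two candidate networks are invented for illustration and do not come from the paper.

```python
import numpy as np

def saw_utility(attributes, weights, is_benefit):
    """attributes: (networks x criteria); returns a utility per candidate network."""
    a = np.asarray(attributes, dtype=float)
    norm = a / a.max(axis=0)                      # scale each criterion to [0, 1]
    cost = ~np.asarray(is_benefit)
    norm[:, cost] = 1.0 - norm[:, cost]           # flip cost criteria (lower is better)
    return norm @ np.asarray(weights, dtype=float)

#              bandwidth  delay  signal  cost   (all values are made up)
candidates = [[ 100.0,    30.0,  0.6,    0.8 ],  # 5G cellular link
              [  40.0,    10.0,  0.9,    0.2 ]]  # self-organizing mesh link
weights    =  [ 0.4,      0.3,   0.2,    0.1 ]
is_benefit =  [ True,     False, True,   False ] # delay and cost are "costs"

scores = saw_utility(candidates, weights, is_benefit)
print(scores, "-> choose network", int(np.argmax(scores)))  # mesh link wins here
```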
11. Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa, Mohammad Shahid Husain, Abakar Ibraheem Abdalla Aadam, Mohammed Waseequ Sheraz, Mohammad Zunnun Khan. Computers, Materials & Continua, 2025, Issue 7, pp. 1169-1187 (19 pages)
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. As a core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and optimize computation offloading decisions dynamically. IoT applications that require significant resources can benefit from SEC, which offers better coverage. However, because access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) situations. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) in 5G HNs, is proposed in this paper. Within this framework, a unique dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
Keywords: Smart edge computing; heterogeneous networks; blockchain; 5G network; internet of things; artificial intelligence
12. Integrating AI, Blockchain, and Edge Computing for Zero-Trust IoT Security: A Comprehensive Review of Advanced Cybersecurity Framework
Authors: Inam Ullah Khan, Fida Muhammad Khan, Zeeshan Ali Haider, Fahad Alturise. Computers, Materials & Continua, 2025, Issue 12, pp. 4307-4344 (38 pages)
The rapid expansion of the Internet of Things (IoT) has introduced significant security challenges due to the scale, complexity, and heterogeneity of interconnected devices. Traditional centralized security models are inadequate for dealing with these threats, especially in decentralized applications where IoT devices may operate on minimal resources. The emergence of new technologies, including Artificial Intelligence (AI), blockchain, edge computing, and Zero-Trust Architecture (ZTA), offers potential solutions by supporting threat detection, data integrity, and system resilience in real time. AI offers sophisticated anomaly detection and predictive analytics, and blockchain delivers decentralized and tamper-proof assurance over device communication and information exchange. Edge computing enables low-latency processing by distributing and moving the computational workload near the devices. ZTA enhances security by continuously verifying each device and user on the network, adhering to the "never trust, always verify" ideology. This paper reviews these technologies, examining how they are used to secure IoT ecosystems, the issues of such integration, and the possibility of developing a multi-layered, adaptive security structure. Major concerns, such as scalability, resource limitations, and interoperability, are identified, and ways to optimize the application of AI, blockchain, and edge computing in zero-trust IoT systems in the future are discussed.
Keywords: Internet of Things (IoT); artificial intelligence (AI); blockchain; edge computing; zero-trust architecture (ZTA); IoT security; real-time threat detection
13. Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
Authors: Cheng Kaijun, Fang Xuming. China Communications, 2025, Issue 9, pp. 352-367 (16 pages)
With the miscellaneous applications generated in vehicular networks, computing performance requirements cannot be satisfied owing to vehicles' limited processing capabilities. In addition, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources, whereas the high-frequency (HF) band has plentiful spectrum resources and has been adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network where tasks are processed at the local side, at the macro-cell base station, or at a road side unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, showing that it can achieve efficient edge computing in vehicular networks.
Keywords: dual-band cooperation; edge computing; resource allocation; task processing; vehicular network
14. Joint optimization of UAV aided covert edge computing via a deep reinforcement learning framework
Authors: Wei WEI, Shu FU, Yujie TANG, Yuan WU, Haijun ZHANG. Chinese Journal of Aeronautics, 2025, Issue 10, pp. 96-106 (11 pages)
In this work, we consider an Unmanned Aerial Vehicle (UAV) aided covert edge computing architecture, where multiple sensors are scattered at certain distances on the ground. Each sensor can implement several computation tasks. In an emergency scenario, the computational capabilities of sensors are often limited, as seen in vehicular networks or Internet of Things (IoT) networks. The UAV can be utilized to undertake parts of the computation tasks, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV-aided edge computing system. Parts of the computation tasks of multiple ground sensors are offloaded to the UAV, and Willie nearby detects the transmissions. Since the transmit power of the sensors, their offloading proportions, and the hovering height of the UAV all affect the system's covert performance, we propose a deep reinforcement learning framework to jointly optimize them. The proposed algorithm minimizes the system's average task processing delay while guaranteeing that the transmissions of the sensors are not detected by Willie under the covertness constraint. Extensive simulations are conducted to verify the effectiveness of the proposed algorithm in decreasing the average task processing delay in comparison with other algorithms.
Keywords: Covert communication; Unmanned aerial vehicle; edge computing; Joint optimization; Deep reinforcement
15. Standardised interworking and deployment of IoT and edge computing platforms
Authors: Jieun Lee, JooSung Kim, Seong Ki Yoo, Tarik Taleb, JaeSeung Song. Digital Communications and Networks, 2025, Issue 5, pp. 1578-1587 (10 pages)
Edge computing is swiftly gaining traction and is being standardised by the European Telecommunications Standards Institute (ETSI) as Multi-access Edge Computing (MEC). Simultaneously, oneM2M has been actively developing standards for dynamic data management and IoT services at the edge, particularly for applications that require real-time support and security. Integrating MEC and oneM2M offers a unique opportunity to maximize their individual strengths. Therefore, this article proposes a framework that integrates the MEC and oneM2M standard platforms for IoT applications, demonstrating how the synergy of these architectures can leverage the geographically distributed computing resources at base stations, enabling efficient deployment and added value for time-sensitive IoT applications. In addition, this study offers a concept of potential interworking models between the oneM2M and MEC architectures. The adoption of these standard architectures can enable various IoT edge services, such as smart city mobility and real-time analytics functions, by leveraging the oneM2M common service layer instantiated on the MEC host.
Keywords: Internet of things; Multi-access edge computing; oneM2M; Interworking; Standards
16. Intelligent Energy-Efficient Resource Allocation for Multi-UAV-Assisted Mobile Edge Computing Networks
Authors: Hu Han, Shen Le, Zhou Fuhui, Wang Qun, Zhu Hongbo. China Communications, 2025, Issue 4, pp. 339-355 (17 pages)
Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been deemed a promising solution for energy-constrained devices to run smart applications with computation-intensive and latency-sensitive requirements, especially in infrastructure-limited areas or emergency scenarios. However, the multi-UAV-assisted MEC network remains largely unexplored. In this paper, dynamic trajectory optimization and computation offloading are studied in a multi-UAV-assisted MEC system where multiple UAVs fly over a target area with different trajectories to serve ground users. By considering the dynamic channel conditions and random task arrivals and jointly optimizing the UAVs' trajectories, user association, and subchannel assignment, the problem of minimizing the average long-term sum of user energy consumption is formulated. To address the problem involving both discrete and continuous variables, a hybrid-decision deep reinforcement learning (DRL)-based intelligent energy-efficient resource allocation and trajectory optimization algorithm, named the HDRT algorithm, is proposed, in which a deep Q network (DQN) and deep deterministic policy gradient (DDPG) are invoked to handle the discrete and continuous variables, respectively. Simulation results show that the proposed HDRT algorithm converges quickly and outperforms other benchmarks in terms of user energy consumption and latency.
Keywords: dynamic trajectory optimization; intelligent resource allocation; unmanned aerial vehicle; UAV-assisted; UAV-assisted MEC; energy efficiency; smart applications; mobile edge computing; MEC; deep reinforcement learning
17. An Online Optimization of Prediction-Enhanced Digital Twin Migration over Edge Computing with Adaptive Information Updating
Authors: Xinyu Yu, Lucheng Chen, Xingzhi Feng, Xiaoping Lu, Yuye Yang, You Shi. Computers, Materials & Continua, 2025, Issue 11, pp. 3231-3252 (22 pages)
This paper investigates mobility-aware online optimization for digital twin (DT)-assisted task execution in edge computing environments. In such systems, DTs, hosted on edge servers (ESs), require proactive migration to maintain proximity to their mobile physical twin (PT) counterparts. To minimize task response latency under a stringent energy consumption constraint, we jointly optimize three key components: the status data uploading frequency from the PT, the DT migration decisions, and the allocation of computational and communication resources. To address the asynchronous nature of these decisions, we propose a novel two-timescale mobility-aware online optimization (TMO) framework. The TMO scheme leverages an extended two-timescale Lyapunov optimization framework to decompose the long-term problem into sequential subproblems. At the larger timescale, a multi-armed bandit (MAB) algorithm is employed to dynamically learn the optimal status data uploading frequency. Within each shorter timescale, we first employ a gated recurrent unit (GRU)-based predictor to forecast the PT's trajectory. Based on this prediction, an alternate minimization (AM) algorithm is then utilized to solve for the DT migration and resource allocation variables. Theoretical analysis confirms that the proposed TMO scheme is asymptotically optimal. Furthermore, simulation results demonstrate its significant performance gains over existing benchmark methods.
Keywords: Digital twin; edge computing; proactive migration; mobility prediction; two-timescale online optimization
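At the larger timescale, the framework above learns the status-uploading frequency with a multi-armed bandit. The sketch below uses the classic UCB1 rule over a handful of candidate frequencies with a stand-in reward function; the arm values and reward model are assumptions, not the TMO scheme itself.

```python
import math
import random

arms = [1, 2, 5, 10]                 # candidate uploads per slot (assumed values)
counts = [0] * len(arms)
values = [0.0] * len(arms)           # running mean reward per arm

def simulated_reward(freq):          # placeholder for the observed system reward
    return -abs(freq - 5) + random.gauss(0, 0.5)

for t in range(1, 501):
    if 0 in counts:                  # play every arm once before using UCB
        a = counts.index(0)
    else:
        ucb = [values[i] + math.sqrt(2 * math.log(t) / counts[i])
               for i in range(len(arms))]
        a = max(range(len(arms)), key=lambda i: ucb[i])
    r = simulated_reward(arms[a])
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # incremental mean update

# Pulls should concentrate on the frequency with the best simulated reward (5).
print("pull counts per frequency:", dict(zip(arms, counts)))
```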
18. Indoor Localization Using Multi-Bluetooth Beacon Deployment in a Sparse Edge Computing Environment
Authors: Soheil Saghafi, Yashar Kiarashi, Amy D. Rodriguez, Allan I. Levey, Hyeokhyen Kwon, Gari D. Clifford. Digital Twins and Applications, 2025, Issue 1, pp. 49-56 (8 pages)
Bluetooth low energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimised method to enhance indoor localization accuracy by utilising multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the localization error from a worst-case distance of 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
Keywords: ambient health monitoring; Bluetooth low energy; cloud computing; edge computing; indoor localization
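A common way to turn RSSI readings from three beacons into a position, in the spirit of the study above, is a log-distance path-loss model followed by linearized trilateration. The sketch below illustrates that pipeline; the path-loss parameters and beacon layout are assumptions, not the authors' calibration.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance model: rssi = tx_power - 10*n*log10(d); returns d in metres."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Least-squares solution of the linearized circle-intersection system."""
    (x1, y1), d1 = beacons[0], distances[0]
    a, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        a.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    sol, *_ = np.linalg.lstsq(np.array(a), np.array(b), rcond=None)
    return sol

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]       # assumed beacon layout (metres)
true_pos = np.array([4.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(bc)) for bc in beacons]
print(trilaterate(beacons, dists))                    # ~[4. 3.]
```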
19. Task offloading delay minimization in vehicular edge computing based on vehicle trajectory prediction
Authors: Feng Zeng, Zheng Zhang, Jinsong Wu. Digital Communications and Networks, 2025, Issue 2, pp. 537-546 (10 pages)
In task offloading, the movement of vehicles causes the switching of connected RSUs and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movements on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. Then, a Bi-LSTM-based model is proposed to predict the trajectories of vehicles. The service area is divided into several equal-sized grids. If the actual position of the vehicle and the position predicted by the model belong to the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the vehicle trajectory prediction. Considering the inevitable prediction error, we take some edge servers around the predicted area as candidate execution servers, and the data required for task execution are backed up to these candidate servers, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into a delay reduction in task offloading. Simulation results show that, compared with other classical schemes, the proposed strategy has lower average task offloading delays.
Keywords: Vehicular edge computing; Task offloading; Vehicle trajectory prediction; Delay minimization; Bi-LSTM model
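The grid-based correctness rule described above is easy to state in code: a prediction counts as correct when the predicted and actual positions land in the same equal-sized cell. The sketch below illustrates it with an assumed 100 m cell size and arbitrary coordinates.

```python
def grid_cell(x, y, origin=(0.0, 0.0), cell_size=100.0):
    """Map a coordinate (metres) to its (row, col) grid cell index."""
    return (int((y - origin[1]) // cell_size), int((x - origin[0]) // cell_size))

def prediction_correct(actual_xy, predicted_xy, cell_size=100.0):
    """A prediction is correct if both positions fall in the same grid cell."""
    return grid_cell(*actual_xy, cell_size=cell_size) == \
           grid_cell(*predicted_xy, cell_size=cell_size)

print(prediction_correct((430.0, 255.0), (470.0, 290.0)))   # True  (same 100 m cell)
print(prediction_correct((430.0, 255.0), (510.0, 290.0)))   # False (different column)
```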
20. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, Issue 1, pp. 60-70 (11 pages)
Container-based virtualization technology has recently been more widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and to interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied in providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve placement quality. The experimental results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; Network virtualization; Container cluster; Deep reinforcement learning; Graph convolutional network
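The GCN component highlighted above embeds each container using its neighbours in the cluster graph. The sketch below implements one symmetrically normalized propagation layer in NumPy for a small chain-topology cluster; the feature sizes and weights are random placeholders, not the RL-GCN model.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight
    return np.maximum(propagated, 0.0)            # ReLU

# A 4-container cluster in a chain topology, 3 raw features, 2 hidden features.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = gcn_layer(adj, rng.normal(size=(4, 3)), rng.normal(size=(3, 2)))
print(h.shape)                                    # (4, 2)
```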