Journal Articles
4,969 articles found
1. Intelligent Energy-Efficient Resource Allocation for Multi-UAV-Assisted Mobile Edge Computing Networks
Authors: Hu Han, Shen Le, Zhou Fuhui, Wang Qun, Zhu Hongbo. China Communications, 2025, Issue 4, pp. 339-355 (17 pages)
The unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) has been deemed a promising solution for energy-constrained devices to run smart applications with computation-intensive and latency-sensitive requirements, especially in infrastructure-limited areas or emergency scenarios. However, the multi-UAV-assisted MEC network remains largely unexplored. In this paper, dynamic trajectory optimization and computation offloading are studied in a multi-UAV-assisted MEC system where multiple UAVs fly over a target area with different trajectories to serve ground users. By considering dynamic channel conditions and random task arrivals, and jointly optimizing the UAVs' trajectories, user association, and subchannel assignment, the problem of minimizing the average long-term sum of user energy consumption is formulated. To address this problem, which involves both discrete and continuous variables, a hybrid-decision deep reinforcement learning (DRL)-based intelligent energy-efficient resource allocation and trajectory optimization algorithm, named HDRT, is proposed, in which a deep Q-network (DQN) and deep deterministic policy gradient (DDPG) handle the discrete and continuous variables, respectively. Simulation results show that the proposed HDRT algorithm converges fast and outperforms other benchmarks in terms of user energy consumption and latency.
Keywords: dynamic trajectory optimization; intelligent resource allocation; unmanned aerial vehicle; UAV-assisted MEC; energy efficiency; smart applications; mobile edge computing; deep reinforcement learning
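HDRT's idea of splitting a hybrid action space between a DQN head (discrete choices such as subchannel assignment) and a DDPG-style actor (continuous trajectory adjustments) can be sketched minimally. Everything below — the dimensions, the random linear stand-ins for trained networks, and the epsilon value — is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "networks" standing in for the trained DQN and DDPG actor.
STATE_DIM, N_SUBCHANNELS, TRAJ_DIM = 6, 4, 2
W_dqn = rng.normal(size=(N_SUBCHANNELS, STATE_DIM))   # Q(s, a_discrete)
W_actor = rng.normal(size=(TRAJ_DIM, STATE_DIM))      # mu(s) -> continuous action

def hybrid_action(state, epsilon=0.1):
    """Pick a discrete subchannel with an epsilon-greedy DQN head and a
    continuous trajectory adjustment with a DDPG-style deterministic actor."""
    q_values = W_dqn @ state
    if rng.random() < epsilon:                  # exploration on the discrete head
        subchannel = int(rng.integers(N_SUBCHANNELS))
    else:
        subchannel = int(np.argmax(q_values))
    trajectory = np.tanh(W_actor @ state)       # bounded continuous action in [-1, 1]
    return subchannel, trajectory

state = rng.normal(size=STATE_DIM)
a_disc, a_cont = hybrid_action(state, epsilon=0.0)
print(a_disc, a_cont)
```

Both heads read the same state; only the discrete head explores here, which is one common (but not the only) way to combine the two update rules.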
2. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (cited by 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance the user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Finally, we use the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computation offloading
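The prioritized experience replay used inside the Dueling-DDQN stage can be illustrated with the standard proportional-sampling rule; the exponent names `alpha` and `beta` and all numbers below are conventional illustrations, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prioritized(priorities, batch_size, alpha=0.6, beta=0.4):
    """Sample replay indices with probability proportional to priority**alpha
    and return the importance-sampling weights that correct the sampling bias."""
    p = np.asarray(priorities, dtype=float) ** alpha
    probs = p / p.sum()
    idx = rng.choice(len(probs), size=batch_size, p=probs)
    weights = (len(probs) * probs[idx]) ** (-beta)
    weights /= weights.max()                    # normalise for stable updates
    return idx, weights

# Transitions with large TD error (large priority) should dominate the batch.
priorities = [0.01] * 95 + [5.0] * 5
idx, w = sample_prioritized(priorities, batch_size=32)
frac_high = float(np.mean(idx >= 95))
print(round(frac_high, 2))
```

With these priorities the five high-priority transitions receive roughly two-thirds of the total sampling probability, so they appear far more often than their 5% share of the buffer.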
3. Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning (cited by 1)
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu. Computers, Materials & Continua, 2025, Issue 2, pp. 2609-2635 (27 pages)
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities involved. In real application scenarios, however, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption, and models the task offloading strategy as a directed acyclic graph (DAG). We further propose a distributed edge computing adaptive task offloading algorithm rooted in MRL, which integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To jointly optimize delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of the proposed solution, achieving a 21% reduction in delay and a 19% decrease in energy consumption compared to alternative task offloading schemes, while adapting swiftly to changes in various network environments.
Keywords: edge computing; adaptive; meta-reinforcement learning; task offloading; joint optimization
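NSGA-II's joint delay/energy optimization rests on non-dominated sorting. A minimal Pareto-front filter over (delay, energy) pairs, with made-up numbers, shows the core dominance test:

```python
def pareto_front(points):
    """Return the non-dominated subset of (delay, energy) pairs: a point is
    dominated if another point is no worse in both objectives and strictly
    better in at least one -- the test inside NSGA-II's sorting step."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

candidates = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 5)]
# (8, 5) dominates (10, 5), (8, 7), and (9, 9), so only two points survive.
print(pareto_front(candidates))  # → [(12, 4), (8, 5)]
```

NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection; this sketch shows only the first front.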
4. Near-Sensor Edge Computing System Enabled by a CMOS-Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (cited by 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. Nano-Micro Letters, 2025, Issue 11, pp. 1-20 (20 pages)
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a new approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). The system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
5. Providing Robust and Low-Cost Edge Computing in Smart Grid: An Energy Harvesting Based Task Scheduling and Resource Management Framework (cited by 1)
Authors: Xie Zhigang, Song Xin, Xu Siyang, Cao Jing. China Communications, 2025, Issue 2, pp. 226-240 (15 pages)
One of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components, such as monitoring systems for renewable energy power stations. To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework that provides robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem over task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem; solutions are then derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at the edge servers to tackle intermittency and instability. Finally, we design an energy management algorithm based on sample average approximation for the edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
Keywords: edge computing; energy harvesting; energy storage unit; renewable energy; sample average approximation; task scheduling
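The abstract notes that the decoupled energy-minimization subproblem reduces to a typical knapsack problem. A standard 0/1 knapsack dynamic program, with illustrative numbers rather than the paper's data, looks like this:

```python
def knapsack(values, weights, capacity):
    """Classic 0/1 knapsack DP: maximise total value under a capacity budget.
    dp[c] holds the best value achievable with capacity c after processing
    the items seen so far."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # iterate backwards: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Illustrative mapping: energy saved per offloaded task vs. the
# channel time each task would occupy (hypothetical numbers).
energy_saved = [60, 100, 120]
time_cost = [10, 20, 30]
print(knapsack(energy_saved, time_cost, capacity=50))  # → 220
```

Here the best choice offloads the second and third tasks (value 100 + 120) at exactly the capacity limit of 50.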
6. A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
Authors: Sreebha Bhaskaran, Supriya Muthuraman. Computers, Materials & Continua, 2025, Issue 6, pp. 5037-5070 (34 pages)
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software-Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures, and evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and concludes that SDN is an essential enabler for meeting upcoming resource management demands in these environments.
Keywords: cloud computing; edge computing; fog computing; resource provisioning; resource allocation; computation offloading; optimization techniques; software-defined network
7. A Comprehensive Survey of Orbital Edge Computing: Systems, Applications, and Algorithms
Authors: Zengshan Yin, Changhao Wu, Chongbin Guo, Yuanchun Li, Mengwei Xu, Weiwei Gao, Chuanxiu Chi. Chinese Journal of Aeronautics, 2025, Issue 7, pp. 310-339 (30 pages)
The number of satellites, especially those operating in Low Earth Orbit (LEO), has been exploding in recent years. Additionally, the burgeoning development of artificial intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared to terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous surveys have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges faced, potential directions for future OEC research are proposed.
Keywords: orbital edge computing; ubiquitous computing; large-scale satellite constellations; computation offloading
8. Indoor Localization Using Multi-Bluetooth Beacon Deployment in a Sparse Edge Computing Environment
Authors: Soheil Saghafi, Yashar Kiarashi, Amy D. Rodriguez, Allan I. Levey, Hyeokhyen Kwon, Gari D. Clifford. Digital Twins and Applications, 2025, Issue 1, pp. 49-56 (8 pages)
Bluetooth Low Energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimized method to enhance indoor localization accuracy by utilizing multiple BLE beacons in a radio-frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
Keywords: ambient health monitoring; Bluetooth Low Energy; cloud computing; edge computing; indoor localization
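A common way to turn multi-beacon RSSI readings into a position — consistent with, but not necessarily identical to, the paper's method — is the log-distance path-loss model followed by linearised least-squares trilateration. The `tx_power` (calibrated RSSI at 1 m) and path-loss exponent `n` are deployment-specific calibration assumptions:

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Invert the log-distance path-loss model: rssi = tx_power - 10*n*log10(d)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Linearised least-squares position fix from >= 3 beacon ranges.
    Subtracting the first beacon's circle equation from the others removes
    the quadratic terms, leaving a linear system A @ [x, y] = b."""
    (x1, y1) = beacons[0]
    d1 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    return pos

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
dists = [float(np.linalg.norm(true_pos - np.array(bc))) for bc in beacons]
print(trilaterate(beacons, dists))  # ≈ [3. 4.]
```

With noise-free ranges the fix is exact; in practice RSSI jitter is what motivates the paper's multi-beacon averaging.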
9. A Multi-Objective Joint Task Offloading Scheme for Vehicular Edge Computing
Authors: Yiwei Zhang, Xin Cui, Qinghui Zhao. Computers, Materials & Continua, 2025, Issue 8, pp. 2355-2373 (19 pages)
The rapid advance of Connected and Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption, and propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address it. A novel dynamic clustering algorithm is designed around vehicle mobility and task offloading efficiency requirements, in which each UAV independently serves as the cluster head of a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To further enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA) into the MWOA, which determines the optimal assignment of pending tasks to edge computing devices and each device's resource utilization ratio, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
Keywords: vehicular edge computing; cooperative game theory; multi-objective optimization; computation offloading
10. Priority-Based Scheduling and Orchestration in Edge-Cloud Computing: A Deep Reinforcement Learning-Enhanced Concurrency Control Approach
Authors: Mohammad A. Al Khaldy, Ahmad Nabot, Ahmad Al-Qerem, Mohammad Alauthman, Amina Salhi, Suhaila Abuowaida, Naceur Chihaoui. Computer Modeling in Engineering & Sciences, 2025, Issue 10, pp. 673-697 (25 pages)
The exponential growth of Internet of Things (IoT) devices has created unprecedented challenges in data processing and resource management for time-critical applications. Traditional cloud computing paradigms cannot meet the stringent latency requirements of modern IoT systems, while pure edge computing faces resource constraints that limit processing capabilities. This paper addresses these challenges by proposing a novel Deep Reinforcement Learning (DRL)-enhanced priority-based scheduling framework for hybrid edge-cloud computing environments. Our approach integrates adaptive priority assignment with a two-level concurrency control protocol that ensures both optimal performance and data consistency. The framework introduces three key innovations: (1) a DRL-based dynamic priority assignment mechanism that learns from system behavior, (2) a hybrid concurrency control protocol combining local edge validation with global cloud coordination, and (3) an integrated mathematical model that formalizes sensor-driven transactions across edge-cloud architectures. Extensive simulations across diverse workload scenarios demonstrate significant quantitative improvements over state-of-the-art heuristic and meta-heuristic approaches: a 40% latency reduction, a 25% throughput increase, 85% resource utilization (versus 60% for heuristic methods), a 40% reduction in energy consumption (300 vs. 500 J per task), and a 50% improvement in scalability factor (1.8 vs. 1.2 for EDF). These results establish the framework as a robust solution for large-scale IoT and autonomous applications requiring real-time processing with consistency guarantees.
Keywords: edge computing; cloud computing; scheduling algorithms; orchestration strategies; deep reinforcement learning; concurrency control; real-time systems; IoT
11. Computational Offloading and Resource Allocation for Internet of Vehicles Based on UAV-Assisted Mobile Edge Computing System
Authors: Fang Yujie, Li Meng, Si Pengbo, Yang Ruizhe, Sun Enchang, Zhang Yanhua. China Communications, 2025, Issue 9, pp. 333-351 (19 pages)
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, effectively reducing task processing latency and power consumption and meeting the quality-of-service requirements of vehicle users. However, problems such as poor connectivity and high cost remain in MEC-assisted IoV systems. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, this article presents an optimization framework for a UAV-assisted MEC system for IoV that minimizes the average system cost. By jointly considering computational offloading decisions and computational resource allocation, an optimization problem is formulated for the proposed architecture to reduce system energy consumption and delay. To tackle this problem, the original non-convex problem is converted into a convex one, and a distributed optimization scheme based on the alternating direction method of multipliers (ADMM) is developed. Simulation results illustrate that the presented scheme dramatically enhances system performance compared with other schemes, and its convergence is also significant.
Keywords: computational offloading; Internet of Vehicles; mobile edge computing; resource optimization; unmanned aerial vehicle
12. Joint Offloading Decision and Resource Allocation in Vehicular Edge Computing Networks
Authors: Shumo Wang, Xiaoqin Song, Han Xu, Tiecheng Song, Guowei Zhang, Yang Yang. Digital Communications and Networks, 2025, Issue 1, pp. 71-82 (12 pages)
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To resolve the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), together with a distributed algorithm, PG-MRL, that jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, optimizes the real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm significantly reduces system delay compared with baseline algorithms.
Keywords: computation offloading; resource allocation; vehicular edge computing; potential game; multi-agent deep deterministic policy gradient
13. An Impact-Aware and Taxonomy-Driven Explainable Machine Learning Framework with Edge Computing for Security in Industrial IoT-Cyber Physical Systems
Authors: Tamara Zhukabayeva, Zulfiqar Ahmad, Nurbolat Tasbolatuly, Makpal Zhartybayeva, Yerik Mardenov, Nurdaulet Karabayev, Dilaram Baumuratova. Computer Modeling in Engineering & Sciences, 2025, Issue 11, pp. 2573-2599 (27 pages)
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses serious cybersecurity threats because of the complexity and connectivity of these systems. Prior work lacks explainability, struggles with imbalanced attack classes, and gives limited consideration to practical edge-cloud deployment strategies. In this study, we propose an impact-aware, taxonomy-driven machine learning framework with edge deployment and SHapley Additive exPlanations (SHAP)-based explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It combines unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns with taxonomy-based supervised classification that groups 33 kinds of attacks into seven high-level categories: flood attacks, botnet/Mirai, reconnaissance, spoofing/man-in-the-middle (MITM), injection attacks, backdoors/exploits, and benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, reaching overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare attack types, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to provide transparency and trust and to identify the important features driving classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge computing implementation strategy is proposed, in which lightweight computation is performed on the edge devices and heavy, computation-intensive analytics in the cloud. The framework is highly accurate and interpretable and supports real-time application, offering a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
Keywords: Industrial IoT; CPS; edge computing; machine learning; XAI; attack taxonomy
14. DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, Issue 3, pp. 1-15 (15 pages)
Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities, forming a LEO satellite edge computing system that provides computing services for ground users worldwide. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). A computation offloading algorithm based on deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power, and a convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by relaxation. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low Earth orbit satellite; mobile edge computing; resource allocation
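The Lagrange-multiplier step for server resource allocation can be illustrated on a simplified delay model: minimise the total delay Σᵢ wᵢ/fᵢ subject to Σᵢ fᵢ = F, whose stationarity condition wᵢ/fᵢ² = λ gives the closed form fᵢ ∝ √wᵢ. This is a textbook allocation in the spirit of, but not identical to, the paper's scheme; the workloads below are invented:

```python
import math

def allocate_cycles(workloads, total_capacity):
    """Closed-form Lagrange allocation: minimise sum_i(w_i / f_i) subject to
    sum_i(f_i) = F. Each user's share of CPU frequency scales with sqrt(w_i)."""
    roots = [math.sqrt(w) for w in workloads]
    scale = total_capacity / sum(roots)
    return [scale * r for r in roots]

workloads = [1.0, 4.0, 9.0]          # CPU cycles demanded per user (illustrative)
alloc = allocate_cycles(workloads, total_capacity=12.0)
print(alloc)  # → [2.0, 4.0, 6.0]
```

The resulting total delay is 1/2 + 4/4 + 9/6 = 3.0; any other split of the 12 units of capacity gives a strictly larger sum.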
15. Lightweight Deep Reinforcement Learning for Dynamic Resource Allocation in Vehicular Edge Computing
Authors: Dapeng Wu, Sijun Wu, Yaping Cui, Ailing Zhong, Tong Tang, Ruyan Wang, Xinqi Lin. Digital Communications and Networks, 2025, Issue 5, pp. 1530-1542 (13 pages)
Vehicular Edge Computing (VEC) enhances the quality of user services by deploying a wealth of resources near vehicles. However, owing to the highly dynamic and complex nature of vehicular networks, centralized decision-making for resource allocation proves inadequate in VEC; conversely, allocating resources via distributed decision-making consumes vehicular resources. To improve the quality of user service, we formulate a latency minimization problem and subdivide it into two subproblems to be solved through distributed decision-making. To mitigate the resource consumption caused by distributed decision-making, we propose a reinforcement learning (RL) algorithm based on a sequential alternating multi-agent mechanism, which effectively reduces the dimensionality of the action space without losing the informational content of actions, achieving network lightweighting. We discuss the rationality, generalizability, and inherent advantages of the proposed mechanism. Simulation results indicate that it outperforms traditional RL algorithms in terms of stability, generalizability, and adaptability to scenarios with invalid actions, all while achieving network lightweighting.
Keywords: vehicular edge computing; resource allocation; reinforcement learning; lightweight
16. Optimizing System Latency for Blockchain-Encrypted Edge Computing in Internet of Vehicles
Authors: Cui Zhang, Maoxin Ji, Qiong Wu, Pingyi Fan, Qiang Fan. Computers, Materials & Continua, 2025, Issue 5, pp. 3519-3536 (18 pages)
As Internet of Vehicles (IoV) technology continues to advance, edge computing has become an important tool for assisting vehicles in handling complex tasks. However, offloading tasks to edge servers may expose vehicles to malicious external attacks, resulting in information loss or even tampering and thereby creating serious security vulnerabilities. Blockchain technology can maintain a shared ledger among servers: in the Raft consensus mechanism, as long as more than half of the nodes remain operational, the system will not collapse, effectively maintaining robustness and security. To protect vehicle information, we propose a security framework that integrates the Raft consensus mechanism with edge computing. To address the additional latency introduced by the blockchain, we derive a theoretical formula for the system delay and propose a convex optimization solution that minimizes it, ensuring the system meets low-latency and high-reliability requirements. Simulation results demonstrate that the optimized data extraction rate significantly reduces system delay, with relatively stable latency variations. Moreover, the proposed optimization solution can inform security and efficiency enhancements in future network environments, such as 5G and next-generation smart city systems.
Keywords: blockchain; edge computing; Internet of Vehicles; latency optimization
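The Raft liveness property the paper relies on — the shared ledger survives as long as a strict majority of nodes remains operational — reduces to a simple quorum check:

```python
def raft_available(total_nodes, failed_nodes):
    """Raft stays live while a strict majority of nodes can still form a
    quorum; a cluster of 2f + 1 nodes therefore tolerates up to f failures."""
    alive = total_nodes - failed_nodes
    return alive > total_nodes // 2

# A 5-node consensus group among edge servers tolerates 2 failures.
print([raft_available(5, f) for f in range(5)])  # → [True, True, True, False, False]
```

This is why Raft deployments use odd cluster sizes: going from 5 to 6 nodes raises the quorum requirement without tolerating any additional failures.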
Edge computing aileron mechatronics using antiphase hysteresis Schmitt trigger for fast flutter suppression
17
作者 Tangwen Yin Dan Huang Xiaochun Zhang 《Control Theory and Technology》 2025年第1期153-160,共8页
An aileron is a crucial control surface for rolling.Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft’s stability,maneuverability,safety,and lifespan.This ... An aileron is a crucial control surface for rolling.Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft’s stability,maneuverability,safety,and lifespan.This paper presents a robust solution in the form of a fast flutter suppression digital control logic of edge computing aileron mechatronics(ECAM).We have effectively eliminated passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger.Our findings demonstrate that self-tuning nonlinear parameters can optimize stability,robustness,and accuracy.At the same time,the antiphase hysteresis Schmitt trigger effectively rejects flutters without the need for collaborative navigation and guidance.Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring expected stability and maneuverability.In conclusion,this nonlinear aileron mechatronics with a Schmitt positive feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection. 展开更多
Keywords: aileron, edge computing, flutter suppression, mechatronics, nonlinear hysteresis control, positive feedback
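The core idea of the antiphase hysteresis Schmitt trigger can be sketched as follows: the command output flips only when the input crosses an outer threshold, so small flutter oscillations inside the hysteresis band cannot toggle the actuator, and large excursions are countered in antiphase. The thresholds and output levels below are illustrative assumptions, not the paper's tuned parameters:

```python
def make_antiphase_schmitt(high=0.5, low=-0.5):
    """Inverting (antiphase) Schmitt trigger. The output changes state
    only when the input leaves the hysteresis band [low, high], so
    small flutter inside the band is rejected; crossings are opposed
    with an output of the opposite sign."""
    state = {"out": -1.0}

    def step(x: float) -> float:
        if x >= high:
            state["out"] = -1.0   # oppose a large positive deflection
        elif x <= low:
            state["out"] = 1.0    # oppose a large negative deflection
        # inside the band: hold the previous output (hysteresis)
        return state["out"]

    return step
```

Feeding the trigger a small oscillation such as ±0.1 leaves the command constant, which is exactly the flutter-rejection behavior the abstract describes.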
GENOME: Genetic Encoding for Novel Optimization of Malware Detection and Classification in Edge Computing
18
Authors: Sang-Hoon Choi, Ki-Woong Park. Computers, Materials & Continua, 2025, No. 3, pp. 4021-4039 (19 pages)
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance with models such as Random Forest and Logistic Regression while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. This study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations. GENOME offers comprehensive protection for edge computing environments through a security solution that is both efficient and scalable.
Keywords: edge computing, IoT security, malware, machine learning, malware classification, malware detection
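The abstract's "standard" and "compressed" DNA encoding algorithms are not specified here, but the general idea — packing binary malware artifacts into four-letter nucleotide sequences at two bits per symbol — can be sketched with a plain byte-to-nucleotide mapping. The mapping below is an illustrative assumption, not the paper's algorithm:

```python
NUCLEOTIDES = "ACGT"  # 2 bits per symbol: A=00, C=01, G=10, T=11

def dna_encode(data: bytes) -> str:
    """Encode each byte as four nucleotides (most significant bits first)."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(NUCLEOTIDES[(b >> shift) & 0b11])
    return "".join(out)

def dna_decode(seq: str) -> bytes:
    """Invert dna_encode: four nucleotides back into one byte."""
    vals = [NUCLEOTIDES.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )
```

This naive mapping is lossless but size-neutral in bits; the paper's reported 42% size reduction presumably comes from its compressed variant, whose details are not given in the abstract.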
Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
19
Authors: Cheng Kaijun, Fang Xuming. China Communications, 2025, No. 9, pp. 352-367 (16 pages)
With miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Moreover, the low-frequency (LF) band cannot further improve network performance because of its limited spectrum resources. The high-frequency (HF) band offers plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, in which tasks are processed locally, at a macro-cell base station, or at a roadside unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function comprising latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
Keywords: dual-band cooperation, edge computing, resource allocation, task processing, vehicular network
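The latency-plus-energy utility this scheme minimizes can be sketched by scoring each processing option (e.g., macro-cell base station over LF, roadside unit over HF) with a weighted sum of transmission-plus-compute delay and vehicle-side transmit energy. All link rates, CPU frequencies, powers, and weights below are hypothetical, and the exhaustive comparison stands in for the paper's iterative algorithm:

```python
def task_cost(task_bits, cpu_cycles, rate_bps, f_cps, tx_power_w):
    """Offloading cost: transmit over the chosen band, then compute remotely.
    Vehicle-side energy is modeled as transmission energy only."""
    t_tx = task_bits / rate_bps
    t_cmp = cpu_cycles / f_cps
    return t_tx + t_cmp, tx_power_w * t_tx  # (latency_s, energy_j)

def best_choice(task_bits, cpu_cycles, links, w_lat=0.7, w_en=0.3):
    """Pick the offloading option minimizing the weighted utility
    u = w_lat * latency + w_en * energy (illustrative weights)."""
    best = None
    for name, (rate, f, p) in links.items():
        lat, en = task_cost(task_bits, cpu_cycles, rate, f, p)
        u = w_lat * lat + w_en * en
        if best is None or u < best[1]:
            best = (name, u)
    return best[0]
```

With a slow LF link to the base station and a fast HF link to the roadside unit, the HF option wins for a bandwidth-heavy task, matching the abstract's motivation for dual-band cooperation.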
Multi-Objective Optimization for Non-Panoramic VR in Mobile Edge Computing
20
Authors: Ren Ruimin, Li Fukang, Dang Yaping, Yang Shouyi. China Communications, 2025, No. 11, pp. 273-290 (18 pages)
Non-panoramic virtual reality (VR) provides users with immersive experiences involving strong interactivity, thus attracting growing research and development attention. However, the demand for high bandwidth and low latency in VR services presents great challenges to existing networks. Inspired by mobile edge computing (MEC), VR users can offload rendering tasks to other devices. The main challenge of task offloading is to minimize latency and energy consumption. Yet in non-panoramic VR scenarios, it is essential to consider users' Quality of Perceptual Experience (QOPE), while simultaneously taking into account the diverse requirements of users in real-world scenarios. Therefore, this paper proposes a QOPE model to measure the visual quality experienced by non-panoramic VR users and formulates MEC-based non-panoramic VR task offloading as a constrained multi-objective optimization problem (CMOP) that minimizes latency and energy consumption while providing satisfactory QOPE. We propose an evolutionary algorithm (EA), GNSGA-II, to solve the CMOP. Simulation results show that the algorithm can effectively find various trade-off solutions among the objectives, satisfying the requirements of different users.
Keywords: mobile edge computing, multi-objective optimization, non-panoramic VR, task offloading
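The "various trade-off solutions among the objectives" that an NSGA-II-style algorithm returns are the non-dominated (Pareto-optimal) offloading decisions. The dominance test at the heart of any such method can be sketched as follows; the two-objective tuples (latency, energy) are illustrative, and GNSGA-II's specific genetic operators are not reproduced here:

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective
    (here: minimize each coordinate) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated candidates: the trade-off set the
    evolutionary algorithm presents to users with different requirements."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

A user who prioritizes battery life and one who prioritizes responsiveness would then pick different points from the same returned front.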