Edge computation offloading allows mobile end devices to execute compute-intensive tasks on edge servers. End devices can decide whether tasks are offloaded to edge servers, cloud servers, or executed locally according to current network conditions and device profiles in an online manner. In this paper, we propose an edge computation offloading framework based on deep imitation learning (DIL) and knowledge distillation (KD), which helps end devices quickly make fine-grained decisions that optimize the delay of computation tasks online. We formalize the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated offline. After the model is trained, we leverage KD to obtain a lightweight DIL model, which further reduces the model's inference delay. Numerical experiments show that the offloading decisions made by our model not only outperform those made by other related policies in the latency metric, but also have the shortest inference delay among all policies.
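The knowledge-distillation step described above can be illustrated with a generic sketch (not the paper's exact objective; the temperature `T`, mixing weight `alpha`, and the softened-KL-plus-cross-entropy form are standard KD assumptions):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with hard-label cross-entropy."""
    p_t = softmax(teacher_logits, T)          # softened teacher distribution
    p_s = softmax(student_logits, T)          # softened student distribution
    soft = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T   # KL divergence, scaled by T^2
    hard = -np.log(softmax(student_logits)[true_label])        # standard cross-entropy
    return alpha * soft + (1 - alpha) * hard
```

When the student matches the teacher exactly, the soft term vanishes, so the loss degenerates to weighted cross-entropy; the `T*T` factor keeps gradient magnitudes comparable across temperatures.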
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are jointly formulated as a mixed integer nonlinear program (MINLP). We propose a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility value, at an acceptable time cost, compared with other algorithms.
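The Lagrange-multiplier step can be illustrated under a simplifying assumption: if the server splits its total capacity F (cycles/s) to minimize the sum of execution delays sum_i c_i/f_i (a stripped-down stand-in for the paper's utility), the KKT stationarity condition c_i/f_i^2 = lambda yields the closed form f_i = F*sqrt(c_i)/sum_j sqrt(c_j). A minimal sketch:

```python
import math

def allocate_cycles(workloads, F):
    """Closed-form split of total MEC capacity F across users' task workloads c_i
    (CPU cycles), minimizing sum_i c_i / f_i via the Lagrange multiplier method."""
    roots = [math.sqrt(c) for c in workloads]
    s = sum(roots)
    return [F * r / s for r in roots]
```

For workloads in ratio 1:4 the square-root rule gives capacity in ratio 1:2, which one can check beats an equal split on the total-delay objective.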
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in MEC-assisted IoV systems, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for a UAV-assisted MEC system for IoV is presented to minimize the average system cost. Through joint consideration of computation offloading decisions and computational resource allocation, an optimization problem for the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and a distributed optimization scheme based on the alternating direction method of multipliers is developed. Simulation results illustrate that the presented scheme dramatically enhances system performance relative to other schemes, and the proposed scheme also converges well.
Multimodal spatiotemporal data from smart city consumer electronics present critical challenges, including cross-modal temporal misalignment, unreliable data quality, limited joint modeling of spatial and temporal dependencies, and weak resilience to adversarial updates. To address these limitations, EdgeST-Fusion is introduced as a cross-modal federated graph transformer framework for context-aware smart city analytics. The architecture integrates cross-modal embedding networks for modality alignment, graph transformer encoders for spatial dependency modeling, temporal self-attention for dynamic pattern learning, and adaptive anomaly detection to ensure data quality and security during aggregation. A privacy-preserving federated learning protocol with differential privacy guarantees enables collaborative model training without centralizing sensitive data. The framework employs data-quality-aware weighted aggregation to enhance robustness against noisy and malicious client updates. Experimental evaluation on the GeoLife, PeMS-Bay, and SmartHome+ datasets demonstrates that EdgeST-Fusion achieves a 21.8% improvement in prediction accuracy, a 35.7% reduction in communication overhead, and a 29.4% enhancement in security resilience compared to recent baselines. Real-world deployment across three smart city testbeds validates practical viability with 90.0% average accuracy and sub-250 ms inference latency. The proposed framework remains feasible for deployment on heterogeneous and resource-constrained consumer electronics devices while maintaining strong privacy guarantees and scalability for large-scale urban environments.
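Data-quality-aware weighted aggregation can be sketched generically (the quality scores and the simple proportional weighting below are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def quality_weighted_aggregate(client_updates, quality_scores):
    """Aggregate client model updates with weights proportional to each client's
    data-quality score, down-weighting noisy or suspicious contributors."""
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()                                   # normalize to a convex combination
    updates = np.stack([np.asarray(u, dtype=float) for u in client_updates])
    return (w[:, None] * updates).sum(axis=0)         # weighted average per parameter
```

A client flagged by anomaly detection would simply receive a near-zero quality score, shrinking its influence on the global model without excluding it outright.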
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose an efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a latency minimization optimization problem. To solve it, we propose an algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
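The PER mechanism mentioned above follows a standard recipe: sample transitions with probability proportional to (|TD error| + eps)^alpha and correct the induced bias with importance-sampling weights. A minimal sketch (the hyperparameter values are conventional defaults, not the paper's settings):

```python
import numpy as np

def per_probabilities(td_errors, alpha=0.6, eps=1e-2):
    """PER sampling distribution: p_i = (|delta_i| + eps)^alpha / sum_j (|delta_j| + eps)^alpha."""
    pr = (np.abs(np.asarray(td_errors, dtype=float)) + eps) ** alpha
    return pr / pr.sum()

def importance_weights(probs, beta=0.4):
    """Importance-sampling correction w_i = (N * P(i))^(-beta), normalized by the max."""
    n = len(probs)
    w = (n * np.asarray(probs, dtype=float)) ** (-beta)
    return w / w.max()
```

Transitions with large TD errors are replayed more often, while the weights shrink their gradient contribution so the update remains (approximately) unbiased.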
With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use Domain Generation Algorithms (DGAs) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGAs, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and integrates the ATT for key feature weighting, forming multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to a simplified DistilBERT student model. Experimental results show the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while compressing its parameters to approximately 38.4% of the teacher model's scale, greatly reducing computational overhead for IoT deployment.
Due to the growth of smart cities, many real-time systems have been developed to support smart cities using the Internet of Things (IoT) and emerging technologies. They are designed to collect data for environment monitoring and automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise network security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a multi-agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it improves the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. In simulations, the proposed framework outperforms state-of-the-art approaches in energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and denial-of-service attack resilience by 42%.
Intelligent environmental sensing systems are quickly transforming sparse, retrospective monitoring into dense, decision-oriented environmental intelligence. This review brings together the ways in which the integration of Internet of Things (IoT) sensing, edge computing, and real-time analytics facilitates timely detection, interpretation, and prediction of environmental conditions across applications such as urban air quality, watershed and coastal surveillance, industrial safety, agriculture, and disaster response. We define end-to-end architectural patterns that organize devices, edge nodes, and cloud services to satisfy latency, reliability, bandwidth, and governance constraints, with emphasis on event-time processing, adaptive offloading, and hierarchical aggregation. We then examine sensing and infrastructure foundations, emphasizing the effects of sensor modality, power autonomy, connectivity, and calibration practices on what analytics are practicable. On this basis, we examine real-time analytics pipelines and artificial intelligence (AI) techniques for preprocessing, sensor fusion, anomaly detection, and short-horizon forecasting, with a focus on edge-deployable models, uncertainty quantification, and robustness to drift and domain shift. Lastly, we address the deployment realities that condition operational success, including lifecycle engineering, provenance-aware data management, security and privacy risks, ethical governance, and evaluation methodologies that prioritize end-to-end latency and field generalization. This review connects algorithmic capabilities with systems engineering and governance to define an overall framework, identify open research directions, and provide practical recommendations on how to design trustworthy, scalable, and sustainable environmental monitoring systems.
The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great number of computation-intensive tasks and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation offloading strategy to satisfy the various quality of service (QoS) requirements of edge devices. To dynamically allocate limited resources, the problem is formulated as multi-agent distributed deep reinforcement learning (DRL), which yields a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is added to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm performs better than other strategies in the literature.
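The joint delay-energy objective that such agents typically optimize can be written down concretely. The sketch below uses the common CMOS energy model (energy = kappa * f^2 * cycles) and a fixed uplink rate; all parameter values and the weighting scheme are illustrative assumptions, not taken from the paper:

```python
def task_cost(cycles, data_bits, f_local, f_edge, rate, p_tx,
              kappa=1e-27, w_t=0.5, w_e=0.5):
    """Weighted delay+energy cost of local execution vs. MEC offloading for one task.
    Returns (local_cost, offload_cost); the agent would pick the smaller."""
    # Local execution: delay = C / f_local; energy = kappa * f_local^2 * C (CMOS model).
    t_loc = cycles / f_local
    e_loc = kappa * f_local ** 2 * cycles
    # Offloading: uplink transmission then edge execution; device spends energy only on TX.
    t_off = data_bits / rate + cycles / f_edge
    e_off = p_tx * (data_bits / rate)
    return (w_t * t_loc + w_e * e_loc, w_t * t_off + w_e * e_off)
```

With a 1-Gcycle task, a 1 GHz device, and a 10 GHz edge server behind a 10 Mbps link, offloading wins because the device trades 1 J of local compute energy for a short transmission burst.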
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected of three major rice diseases (Rice Blast, Bacterial Blight, and Brown Spot) frequently found in actual rice cultivation fields. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB but showed lower performance, with an mAP@0.5 of 70.3%, mAP@0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services requiring low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuously time-varying nature of edge server load fluctuations, leading to insufficient adaptability of the model in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with awareness of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in a central trajectory pool and adjusting the sample weight distribution, directional exploration and strategy optimization over high-value trajectories are realized. Experimental results indicate that the proposed method has distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in dynamic environments.
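The POMDP formulation implies that the agent maintains a belief over hidden states (e.g., true server load levels) rather than observing them directly. A minimal Bayes-filter belief update, which is generic POMDP machinery rather than the paper's specific model:

```python
def belief_update(belief, action, obs, T, O):
    """Bayes filter: b'(s') proportional to O[s'][obs] * sum_s T[s][action][s'] * b(s).
    T[s][a][s'] is the transition model, O[s'][o] the observation model."""
    n = len(belief)
    # Predict: push the belief through the transition model for the chosen action.
    pred = [sum(T[s][action][s2] * belief[s] for s in range(n)) for s2 in range(n)]
    # Correct: weight by the likelihood of the received observation, then normalize.
    post = [O[s2][obs] * pred[s2] for s2 in range(n)]
    z = sum(post)
    return [p / z for p in post]
```

With a perfectly informative observation model, a single observation collapses a uniform belief onto the true state, which is the sanity check below.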
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Deploying Large Language Model (LLM)-based agents in the Industrial Internet of Things (IIoT) presents significant challenges, including high latency from cloud-based APIs, data privacy concerns, and the infeasibility of deploying monolithic models on resource-constrained edge devices. While small language models (SLMs) are suitable for edge deployment, they often lack the reasoning power for complex, multi-step tasks. To address these issues, this paper introduces LEAF, a Lightweight Edge Agent Framework designed for efficiently executing complex tasks at the edge. LEAF employs a novel architecture in which multiple expert SLMs, specialized for planning, execution, and interaction, work in concert, decomposing complex problems into manageable sub-tasks. To mitigate the resource overhead of this multi-model approach, LEAF implements an efficient parameter-sharing scheme based on Scalable Low-Rank Adaptation (S-LoRA). We introduce a two-stage training strategy combining Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) to significantly enhance each expert's capabilities. Furthermore, a Finite State Machine (FSM)-based decision engine orchestrates the workflow, balancing deterministic control with intelligent flexibility, making it well suited to industrial environments that demand both reliability and adaptability. Experiments across diverse IIoT scenarios demonstrate that LEAF significantly outperforms baseline methods in both task success rate and user satisfaction. Notably, our fine-tuned 4-billion-parameter model achieves a task success rate of over 90% in complex IIoT scenarios, demonstrating LEAF's ability to deliver powerful and efficient autonomy at the industrial edge.
The exponential growth of Internet of Things (IoT) devices, autonomous systems, and digital services is generating massive volumes of big data, projected to exceed 291 zettabytes by 2027. Conventional cloud computing, despite its high processing and storage capacity, suffers from increased network latency, network congestion, and high operational costs, making it unsuitable for latency-sensitive applications. Edge computing addresses these issues by processing data near the source but faces scalability challenges and an elevated Total Cost of Ownership (TCO). Hybrid solutions, such as fog computing, cloudlets, and Mobile Edge Computing (MEC), attempt to balance cost and performance; however, they still struggle with limited resource sharing and high deployment expenses. This paper proposes Public Edge as a Service (PEaaS), a novel paradigm that utilizes idle resources contributed by universities, enterprises, cellular operators, and individuals under a collaborative service model. By decentralizing computation and enabling multi-tenant resource sharing, PEaaS reduces reliance on centralized cloud infrastructure, minimizes communication costs, and enhances scalability. The proposed framework is evaluated using EdgeCloudSim under varying workloads, for key metrics such as latency, communication cost, server utilization, and task failure rate. Results reveal that while the cloud's task failure rate rises sharply to 12.3% at 2000 devices, PEaaS maintains a low rate of 2.5%, closely matching edge computing. Furthermore, communication costs remain 25% lower than the cloud's, and latency remains below 0.3, even under peak load. These findings demonstrate that PEaaS achieves near-edge performance with reduced costs and enhanced scalability, offering a sustainable and economically viable solution for next-generation computing environments.
Edge computation offloading has made some progress in the fifth-generation mobile network (5G). However, load balancing in edge computation offloading is still a challenging problem. Meanwhile, with the continuous pursuit of low execution latency in 5G multi-scenario settings, the functional requirements of edge computation offloading are further exacerbated. Given the above challenges, we propose an edge computation offloading method for 5G multi-scenario settings that also considers user satisfaction. The method consists of three functional parts: offloading strategy generation, offloading strategy update, and offloading strategy optimization. First, the offloading strategy is generated by a deep neural network (DNN); the strategy is then updated by updating the DNN parameters. Finally, we optimize the offloading strategy based on changes in user satisfaction. Compared to existing optimization methods, our proposal achieves performance close to the optimum. Extensive simulation results indicate that the execution latency of our method on a CPU is under 0.1 seconds, while the average computation rate improves by about 10%.
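One common way to turn a DNN's relaxed output in [0,1]^n into binary offloading decisions, as in DROO-style schemes (whether this paper uses exactly this quantization is an assumption), is to round, then flip the most uncertain entries to generate candidate strategies that can each be evaluated and ranked by realized latency:

```python
import numpy as np

def candidate_decisions(probs, k):
    """Generate k binary offloading vectors from a relaxed DNN output:
    plain rounding first, then flip the entries closest to the 0.5 boundary."""
    probs = np.asarray(probs, dtype=float)
    cands = [(probs > 0.5).astype(int)]            # candidate 0: plain rounding
    order = np.argsort(np.abs(probs - 0.5))        # most uncertain entries first
    for i in order[: max(k - 1, 0)]:
        c = cands[0].copy()
        c[i] = 1 - c[i]                            # flip one uncertain bit per candidate
        cands.append(c)
    return cands
```

The best-performing candidate can then be fed back as a training label, closing the strategy-update loop the abstract describes.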
With Mobile Edge Computing (MEC), computation-intensive tasks are offloaded from mobile devices to cloud servers, and thus the energy consumption of mobile devices can be notably reduced. In this paper, we study task offloading in multi-user MEC systems with heterogeneous clouds, including edge clouds and remote clouds. Tasks are forwarded from mobile devices to edge clouds via wireless channels, and they can be further forwarded to remote clouds via the Internet. Our objective is to minimize the total energy consumption of multiple mobile devices, subject to bounded-delay requirements of tasks. Based on dynamic programming, we propose an algorithm that minimizes the energy consumption by jointly allocating bandwidth and computational resources to mobile devices. The algorithm is of pseudo-polynomial complexity. To further reduce the complexity, we propose an approximation algorithm with energy discretization, and its total energy consumption is proved to be within a bounded gap from the optimum. Simulation results show that nearly 82.7% of the energy of mobile devices can be saved by task offloading compared with execution on the mobile device.
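The bounded-delay structure can be illustrated with a heavily simplified per-device selection; note that the paper's algorithm additionally couples devices through joint bandwidth allocation and dynamic programming, which this sketch omits:

```python
def min_energy_offload(options, delay_bound):
    """Among a device's execution modes (e.g., local / edge cloud / remote cloud),
    keep only those meeting the delay bound, then pick the least-energy one.
    Returns None if no mode is feasible."""
    feasible = [o for o in options if o['delay'] <= delay_bound]
    if not feasible:
        return None
    return min(feasible, key=lambda o: o['energy'])
```

Tightening the deadline shrinks the feasible set, which is exactly why the joint problem trades energy against the bounded-delay constraint.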
Multi-access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than a remote cloud server, meeting the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional Internet services, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based vehicle-aware Multi-access Edge Computing network (VAMECN) and pose a joint optimization problem of minimizing total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed integer nonlinear programming problem is described as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and device energy consumption, optimize the computation offloading and resource allocation scheme, and improve system resource utilization compared with other computation offloading policies.
This article establishes a three-tier mobile edge computing (MEC) network that takes into account cooperation between unmanned aerial vehicles (UAVs). In this MEC network, we aim to minimize the processing delay of tasks by jointly optimizing the deployment of UAVs and offloading decisions, while meeting the computing capacity constraints of the UAVs. However, the resulting optimization problem is non-convex and cannot be solved effectively and efficiently by general optimization tools. To this end, we propose a two-layer optimization algorithm that tackles the non-convexity of the problem through alternating optimization. In the upper-level algorithm, we rely on a differential evolution (DE) learning algorithm to solve for the deployment of the UAVs. In the lower-level algorithm, we exploit a distributed deep neural network (DDNN) to generate offloading decisions. Numerical results demonstrate that the two-layer optimization algorithm can effectively obtain a near-optimal UAV deployment and offloading strategy with low complexity.
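The upper-level DE search can be sketched on a toy objective: placing one UAV to minimize the sum of squared distances to ground users (a stand-in for the delay objective, whose minimizer is the users' centroid). Population size, mutation factor, and crossover rate below are conventional DE defaults, not the paper's settings:

```python
import random

def de_deploy_uav(users, pop=20, iters=100, Fm=0.5, cr=0.9, seed=0):
    """Differential evolution over a UAV's 2-D position: mutate with a scaled
    difference of two population members, binomially cross over, keep the better."""
    rng = random.Random(seed)
    def cost(p):
        return sum((p[0] - u[0]) ** 2 + (p[1] - u[1]) ** 2 for u in users)
    P = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(pop)]
    for _ in range(iters):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            mutant = [a[d] + Fm * (b[d] - c[d]) for d in range(2)]
            trial = [mutant[d] if rng.random() < cr else P[i][d] for d in range(2)]
            if cost(trial) <= cost(P[i]):      # greedy selection keeps the best-so-far
                P[i] = trial
    return min(P, key=cost)
```

Because selection is greedy, the best cost in the population is non-increasing, so the search reliably contracts toward the centroid on this convex toy problem.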
The unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) architecture is expected to be a powerful technique to facilitate 5G and beyond ubiquitous wireless connectivity and diverse vertical applications and services, anytime and anywhere. Wireless power transfer (WPT) is another promising technology to prolong the operation time of low-power wireless devices in the era of the Internet of Things (IoT). However, the integration of WPT and UAV-enabled MEC systems is far from being well studied, especially in dynamic environments. To tackle this issue, this paper investigates stochastic computation offloading and trajectory scheduling for the UAV-enabled wireless powered MEC system. A UAV offers both RF wireless power transmission and computation services for IoT devices. Considering stochastic task arrivals and random channel conditions, a long-term average energy-efficiency (EE) minimization problem is formulated. Due to the non-convexity and time-domain coupling of the variables in the formulated problem, a low-complexity online computation offloading and trajectory scheduling algorithm (OCOTSA) is proposed by exploiting Lyapunov optimization. Simulation results verify that there exists a balance between EE and service delay, and demonstrate that the system EE performance obtained by the proposed scheme outperforms other benchmark schemes.
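Lyapunov optimization reduces a long-term stochastic problem to a per-slot greedy rule: pick the action minimizing V times the penalty (here, energy) plus the queue-weighted backlog drift, then update the task queue. A minimal sketch with an illustrative action set (the specific penalty, queue model, and numbers are assumptions, not the paper's):

```python
def drift_plus_penalty_action(queue, arrivals, actions, V):
    """Per-slot drift-plus-penalty rule: choose the action minimizing
    V * energy + Q * (arrivals - service), then apply the queue update
    Q(t+1) = max(Q(t) - service, 0) + arrivals."""
    best = min(actions, key=lambda a: V * a['energy'] + queue * (arrivals - a['service']))
    new_q = max(queue - best['service'], 0) + arrivals
    return best, new_q
```

The tradeoff the abstract mentions is visible directly: with an empty queue the rule saves energy, while a large backlog makes high-service actions win, and V tunes where the crossover happens.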
Funding: This work was supported in part by the National Science Foundation of China under Grant No. 61972432 and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant No. 2017ZT07X355.
Abstract: Edge computation offloading allows mobile end devices to execute compute-intensive tasks on edge servers. End devices can decide in an online manner whether tasks are offloaded to edge servers, offloaded to cloud servers, or executed locally, according to the current network condition and device profiles. In this paper, we propose an edge computation offloading framework based on deep imitation learning (DIL) and knowledge distillation (KD), which assists end devices in quickly making fine-grained decisions to optimize the delay of computation tasks online. We formalize the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated offline. After the model is trained, we leverage KD to obtain a lightweight DIL model, which further reduces the model's inference delay. Numerical experiments show that the offloading decisions made by our model not only outperform those made by other related policies on the latency metric, but also have the shortest inference delay among all policies.
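The distillation step described in this abstract pairs a hard-label loss with a soft-target loss from the teacher. A minimal NumPy sketch of one way such a blended multi-label distillation objective could look (the temperature T, blend weight alpha, and the per-label sigmoid formulation are illustrative assumptions, not the paper's exact loss):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label BCE with soft-target BCE at temperature T.

    Multi-label setting: each logit is an independent offload/no-offload bit,
    so per-label sigmoids are used rather than a softmax.
    """
    eps = 1e-12
    p_s = np.clip(sigmoid(student_logits / T), eps, 1 - eps)
    p_t = np.clip(sigmoid(teacher_logits / T), eps, 1 - eps)
    p_hard = np.clip(sigmoid(student_logits), eps, 1 - eps)
    # Soft term: student matches the teacher's tempered probabilities.
    soft = -np.mean(p_t * np.log(p_s) + (1 - p_t) * np.log(1 - p_s))
    # Hard term: ordinary binary cross-entropy against ground-truth labels.
    hard = -np.mean(labels * np.log(p_hard) + (1 - labels) * np.log(1 - p_hard))
    return alpha * soft + (1 - alpha) * hard
```

A student whose logits track the teacher and the labels incurs a lower loss than one that contradicts them, which is what drives the compression.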
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62231012, the Natural Science Foundation for Outstanding Young Scholars of Heilongjiang Province under Grant YQ2020F001, and the Heilongjiang Province Postdoctoral General Foundation under Grant AUGA4110004923.
Abstract: Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). A computation offloading algorithm based on deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power. A convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by relaxation. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
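The Lagrange-multiplier step can be illustrated on a toy version of the server resource allocation subproblem: minimize the total delay sum_i w_i / f_i of users with workloads w_i under a shared CPU budget F. The KKT condition w_i / f_i^2 = lam gives f_i(lam) = sqrt(w_i / lam), and bisecting on the multiplier lam finds the budget-feasible point. This is a generic sketch of the technique, not the paper's exact formulation:

```python
import math

def allocate_cpu(workloads, F, tol=1e-9):
    """Split server CPU frequency F across users to minimize sum_i w_i / f_i
    subject to sum_i f_i = F (KKT: w_i / f_i^2 = lam for every user i).

    f_i(lam) = sqrt(w_i / lam) is decreasing in lam, so the budget equation
    sum_i f_i(lam) = F has a unique root, found here by bisection.
    """
    def total(lam):
        return sum(math.sqrt(w / lam) for w in workloads)
    lo, hi = 1e-12, 1e12
    while hi - lo > tol * hi:
        mid = math.sqrt(lo * hi)   # geometric midpoint suits the huge range
        if total(mid) > F:
            lo = mid               # allocations too large -> raise the price
        else:
            hi = mid
    lam = math.sqrt(lo * hi)
    return [math.sqrt(w / lam) for w in workloads]
```

Note the closed-form structure the multiplier exposes: allocations come out proportional to sqrt(w_i), so a user with 4x the workload gets exactly 2x the CPU share.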
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62371012, and in part by the Beijing Natural Science Foundation under Grant 4252001.
Abstract: As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in MEC-assisted IoV systems, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV is presented to minimize the average system cost. Through joint consideration of computation offloading decisions and computational resource allocation, an optimization problem for the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this problem, the original non-convex problem is converted into a convex one, and a distributed optimization scheme based on the alternating direction method of multipliers is developed. Simulation results illustrate that the presented scheme enhances system performance dramatically compared with other schemes, and the proposed scheme also exhibits good convergence.
Funding: Supported by the University of Tabuk, Saudi Arabia.
Abstract: Multimodal spatiotemporal data from smart city consumer electronics present critical challenges, including cross-modal temporal misalignment, unreliable data quality, limited joint modeling of spatial and temporal dependencies, and weak resilience to adversarial updates. To address these limitations, EdgeST-Fusion is introduced as a cross-modal federated graph transformer framework for context-aware smart city analytics. The architecture integrates cross-modal embedding networks for modality alignment, graph transformer encoders for spatial dependency modeling, temporal self-attention for dynamic pattern learning, and adaptive anomaly detection to ensure data quality and security during aggregation. A privacy-preserving federated learning protocol with differential privacy guarantees enables collaborative model training without centralizing sensitive data. The framework employs data-quality-aware weighted aggregation to enhance robustness against noisy and malicious client updates. Experimental evaluation on the GeoLife, PeMS-Bay, and SmartHome+ datasets demonstrates that EdgeST-Fusion achieves a 21.8% improvement in prediction accuracy, a 35.7% reduction in communication overhead, and a 29.4% enhancement in security resilience compared to recent baselines. Real-world deployment across three smart city testbeds validates practical viability with 90.0% average accuracy and sub-250 ms inference latency. The proposed framework remains feasible for deployment on heterogeneous and resource-constrained consumer electronics devices while maintaining strong privacy guarantees and scalability for large-scale urban environments.
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and the science and technology funds of the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computation task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient deep reinforcement learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates a Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
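The three ingredients named in this abstract (dueling heads, double Q-learning targets, and prioritized replay weights) each reduce to a short formula. A NumPy sketch under generic assumptions (the gamma and beta values and array shapes are illustrative defaults, not taken from the paper):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_targets(rewards, dones, q_online_next, q_target_next, gamma=0.99):
    """Double DQN: the online net picks the action, the target net scores it."""
    best = np.argmax(q_online_next, axis=1)
    bootstrap = q_target_next[np.arange(len(best)), best]
    return rewards + gamma * (1.0 - dones) * bootstrap

def per_weights(priorities, beta=0.4):
    """Importance-sampling weights for prioritized replay, normalized to max 1."""
    p = priorities / priorities.sum()
    w = (len(p) * p) ** (-beta)
    return w / w.max()
```

Decoupling action selection from action evaluation in double_dqn_targets is what suppresses the overestimation bias of vanilla DQN, while per_weights corrects the sampling bias that prioritization introduces.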
Funding: Supported by the National Natural Science Foundation of China (62461041) and the Natural Science Foundation of Jiangxi Province, China (20242BAB25068).
Abstract: With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use a Domain Generation Algorithm (DGA) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGAs, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and weights key features with the ATT, forming a multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to a simplified DistilBERT student model. Experimental results show that the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while compressing its parameters to approximately 38.4% of the teacher model's scale, greatly reducing computational overhead for IoT deployment.
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Jouf University.
Abstract: Due to the growth of smart cities, many real-time systems have been developed to support smart cities using the Internet of Things (IoT) and emerging technologies. They are designed to collect data for environment monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise network security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a multi-agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it improves the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. Simulations reveal that the proposed framework significantly outperforms state-of-the-art approaches: in energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and resilience to denial-of-service attacks by 42%.
Funding: Supported by Jiangxi Polytechnic Institute Key Research Topics in Educational Reform (2025-JGJG-07).
Abstract: Intelligent environmental sensing systems are quickly transforming sparse, retrospective monitoring into dense, decision-oriented environmental intelligence. This review examines how the integration of Internet of Things (IoT) sensing, edge computing, and real-time analytics facilitates timely detection, interpretation, and prediction of environmental conditions across applications such as urban air quality, watershed and coastal surveillance, industrial safety, agriculture, and disaster response. We define end-to-end architectural patterns that organize devices, edge nodes, and cloud services to satisfy latency, reliability, bandwidth, and governance constraints, with emphasis on event-time processing, adaptive offloading, and hierarchical aggregation. We then examine sensing and infrastructure foundations, emphasizing the effects of sensor modality, power autonomy, connectivity, and calibration practices on the analytics that are practically achievable. On this basis, we examine real-time analytics pipelines and artificial intelligence (AI) techniques for preprocessing, sensor fusion, anomaly detection, and short-horizon forecasting, with a focus on edge-deployable models, uncertainty quantification, and robustness to drift and domain shift. Lastly, we address the deployment realities that condition operational success, such as lifecycle engineering, provenance-aware data management, security and privacy risks, ethical governance, and evaluation methodologies that prioritize end-to-end latency and field generalization. This review connects algorithmic capabilities with systems engineering and governance to define an overall framework, identify open research directions, and provide practical recommendations for designing trustworthy, scalable, and sustainable environmental monitoring systems.
基金supported in part by the National Natural Science Foundation of China(NSFC)under Grant 62276109The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through the Research Group Project number(ORF-2025-585).
Abstract: The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Abstract: Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great number of computing-intensive tasks and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computing-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation offloading strategy to satisfy the various quality-of-service (QoS) requirements of edge devices. To dynamically allocate the limited resources, the above problem is formulated as multi-agent distributed deep reinforcement learning (DRL), which yields a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is added to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations, so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm performs better than other strategies in the literature.
Abstract: This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) of three major rice diseases frequently found in actual rice cultivation fields (Rice Blast, Bacterial Blight, and Brown Spot) were collected and served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5:0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P had a smaller model size of 4.03 MB but showed lower performance, with an mAP@0.5 of 70.3%, mAP@0.5:0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
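The mAP figures reported in this abstract are built from per-box intersection-over-union (IoU) matching at a threshold such as 0.5. A minimal, generic IoU helper for (x1, y1, x2, y2) boxes, independent of YOLOv5's internals:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    # Overlap rectangle: the tighter of the two boxes on each side.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A predicted box counts as a true positive at mAP@0.5 when its IoU with an unmatched ground-truth box of the same class is at least 0.5; mAP@0.5:0.95 averages the same computation over thresholds from 0.5 to 0.95.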
基金funded by the State Grid Corporation Science and Technology Project“Research and Application of Key Technologies for Integrated Sensing and Computing for Intelligent Operation of Power Grid”(Grant No.5700-202318596A-3-2-ZN).
Abstract: With the rapid development of power Internet of Things (IoT) scenarios such as smart factories and smart homes, numerous intelligent terminal devices and real-time interactive applications impose higher demands on computing latency and resource supply efficiency. Multi-access edge computing deploys cloud computing capabilities at the network edge, constructs distributed computing nodes and multi-access systems, and offers infrastructure support for services with low latency and high reliability. Existing research relies on the strong assumption that the environmental state is fully observable and fails to thoroughly consider the continuous time-varying features of edge server load fluctuations, leading to insufficient adaptability in heterogeneous dynamic environments. Thus, this paper establishes a framework for end-edge collaborative task offloading based on a partially observable Markov decision process (POMDP) and proposes a method for end-edge collaborative task offloading in heterogeneous scenarios. It achieves time-series modeling of the historical load characteristics of edge servers and endows the agent with the ability to be aware of the load in dynamic environmental states. Moreover, by dynamically assessing the exploration value of historical trajectories in a central trajectory pool and adjusting the sample weight distribution, directed exploration and policy optimization over high-value trajectories are realized. Experimental results indicate that the proposed method exhibits distinct advantages over existing methods in terms of average delay and task failure rate, and also verify the method's robustness in a dynamic environment.
Funding: Supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809 and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
Abstract: As Internet of Things (IoT) applications expand, mobile edge computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) with a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Abstract: Deploying large language model (LLM)-based agents in the Industrial Internet of Things (IIoT) presents significant challenges, including high latency from cloud-based APIs, data privacy concerns, and the infeasibility of deploying monolithic models on resource-constrained edge devices. While smaller models (SLMs) are suitable for edge deployment, they often lack the reasoning power for complex, multi-step tasks. To address these issues, this paper introduces LEAF, a Lightweight Edge Agent Framework designed for efficiently executing complex tasks at the edge. LEAF employs a novel architecture in which multiple expert SLMs, specialized for planning, execution, and interaction, work in concert, decomposing complex problems into manageable sub-tasks. To mitigate the resource overhead of this multi-model approach, LEAF implements an efficient parameter-sharing scheme based on Scalable Low-Rank Adaptation (S-LoRA). We introduce a two-stage training strategy combining Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) to significantly enhance each expert's capabilities. Furthermore, a Finite State Machine (FSM)-based decision engine orchestrates the workflow, uniquely balancing deterministic control with intelligent flexibility, making it ideal for industrial environments that demand both reliability and adaptability. Experiments across diverse IIoT scenarios demonstrate that LEAF significantly outperforms baseline methods in both task success rate and user satisfaction. Notably, our fine-tuned 4-billion-parameter model achieves a task success rate over 90% in complex IIoT scenarios, demonstrating LEAF's ability to deliver powerful and efficient autonomy at the industrial edge.
Abstract: The exponential growth of Internet of Things (IoT) devices, autonomous systems, and digital services is generating massive volumes of big data, projected to exceed 291 zettabytes by 2027. Conventional cloud computing, despite its high processing and storage capacity, suffers from increased network latency, network congestion, and high operational costs, making it unsuitable for latency-sensitive applications. Edge computing addresses these issues by processing data near the source but faces scalability challenges and an elevated Total Cost of Ownership (TCO). Hybrid solutions, such as fog computing, cloudlets, and Mobile Edge Computing (MEC), attempt to balance cost and performance; however, they still struggle with limited resource sharing and high deployment expenses. This paper proposes Public Edge as a Service (PEaaS), a novel paradigm that utilizes idle resources contributed by universities, enterprises, cellular operators, and individuals under a collaborative service model. By decentralizing computation and enabling multi-tenant resource sharing, PEaaS reduces reliance on centralized cloud infrastructure, minimizes communication costs, and enhances scalability. The proposed framework is evaluated using EdgeCloudSim under varying workloads for key metrics such as latency, communication cost, server utilization, and task failure rate. Results reveal that while the cloud's task failure rate rises sharply to 12.3% at 2000 devices, PEaaS maintains a low rate of 2.5%, closely matching edge computing. Furthermore, communication costs remain 25% lower than the cloud's, and latency remains below 0.3, even under peak load. These findings demonstrate that PEaaS achieves near-edge performance with reduced costs and enhanced scalability, offering a sustainable and economically viable solution for next-generation computing environments.
Funding: This work was supported in part by the Science and Technology Project of North China University of Science and Technology under Grant ZD-YG-202317-23.
Abstract: Edge computation offloading has made some progress in the fifth-generation mobile network (5G). However, load balancing in edge computation offloading is still a challenging problem. Meanwhile, the continuous pursuit of low execution latency across 5G scenarios further raises the functional requirements on edge computation offloading. Given these challenges, we propose an edge computation offloading method for 5G multi-scenario environments that takes user satisfaction into account. The method consists of three functional parts: offloading strategy generation, offloading strategy update, and offloading strategy optimization. First, the offloading strategy is generated by means of a deep neural network (DNN); the strategy is then updated by updating the DNN parameters. Finally, we optimize the offloading strategy based on changes in user satisfaction. Compared with existing optimization methods, our proposal achieves performance close to the optimum. Extensive simulation results indicate that our method executes on a CPU in under 0.1 seconds while improving the average computation rate by about 10%.
基金the National Key R&D Program of China 2018YFB1800804the Nature Science Foundation of China (No. 61871254,No. 61861136003,No. 91638204)Hitachi Ltd.
Abstract: With mobile edge computing (MEC), computation-intensive tasks are offloaded from mobile devices to cloud servers, and thus the energy consumption of mobile devices can be notably reduced. In this paper, we study task offloading in multi-user MEC systems with heterogeneous clouds, including edge clouds and remote clouds. Tasks are forwarded from mobile devices to edge clouds via wireless channels, and they can be further forwarded to remote clouds via the Internet. Our objective is to minimize the total energy consumption of multiple mobile devices, subject to the bounded-delay requirements of tasks. Based on dynamic programming, we propose an algorithm that minimizes the energy consumption by jointly allocating bandwidth and computational resources to mobile devices. The algorithm has pseudo-polynomial complexity. To further reduce the complexity, we propose an approximation algorithm with energy discretization, whose total energy consumption is proved to be within a bounded gap from the optimum. Simulation results show that nearly 82.7% of the energy of mobile devices can be saved by task offloading compared with execution on the mobile device.
Funding: Supported in part by the National Key R&D Program of China under Grant 2018YFC0831502.
Abstract: Multi-access edge computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than a remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of Internet of Vehicles (IoV) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional Internet services, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based vehicle-aware multi-access edge computing network (VAMECN) and pose a joint optimization problem of minimizing the total system cost. In view of this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is described as a Markov decision process. Experiments show that, compared with other computation offloading policies, our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize the computation offloading and resource allocation schemes, and improve system resource utilization.
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 62101277), the Natural Science Foundation of Jiangsu Province (Grant No. BK20200822), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 20KJB510036), and the Guangxi Key Laboratory of Multimedia Communications and Network Technology (Grant No. KLF-2020-03).
Abstract: This article establishes a three-tier mobile edge computing (MEC) network that takes into account the cooperation between unmanned aerial vehicles (UAVs). In this MEC network, we aim to minimize the processing delay of tasks by jointly optimizing the deployment of UAVs and the offloading decisions, while meeting the computing capacity constraint of the UAVs. However, the resulting optimization problem is nonconvex and cannot be solved by general optimization tools in an effective and efficient way. To this end, we propose a two-layer optimization algorithm that tackles the non-convexity of the problem by capitalizing on alternating optimization. In the upper-level algorithm, we rely on a differential evolution (DE) learning algorithm to solve for the deployment of the UAVs. In the lower-level algorithm, we exploit a distributed deep neural network (DDNN) to generate the offloading decisions. Numerical results demonstrate that the two-layer optimization algorithm can effectively obtain a near-optimal UAV deployment and offloading strategy with low complexity.
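The upper-level DE step works on a population of candidate deployments (e.g. flattened UAV coordinate vectors). A generic DE/rand/1/bin generation is sketched below; the fitness callable stands in for the task-delay evaluation, and the F and CR values are common defaults, not the paper's settings:

```python
import random

def de_step(pop, fitness, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation over a population of real vectors.

    pop: list of equal-length lists; fitness: callable to MINIMIZE
    (e.g. task delay for a candidate UAV deployment).
    Returns the next population after greedy selection.
    """
    rng = rng or random.Random(0)
    n, d = len(pop), len(pop[0])
    nxt = []
    for i in range(n):
        # Mutation: perturb one random member by a scaled difference of two others.
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(d)]
        # Binomial crossover; j_rand guarantees at least one mutant gene survives.
        j_rand = rng.randrange(d)
        trial = [mutant[k] if (rng.random() < CR or k == j_rand) else pop[i][k]
                 for k in range(d)]
        # Greedy selection keeps the better of parent and trial.
        nxt.append(trial if fitness(trial) <= fitness(pop[i]) else pop[i])
    return nxt
```

Because selection is greedy per individual, the best fitness in the population is non-increasing across generations, which is what makes the alternating upper level safe to iterate.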
Funding: Supported in part by the U.S. National Science Foundation under Grant CNS-2007995, in part by the National Natural Science Foundation of China under Grants 92067201 and 62171231, and in part by the Jiangsu Provincial Key Research and Development Program under Grant BE2020084-1.
Abstract: The unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) architecture is expected to be a powerful technique for facilitating 5G-and-beyond ubiquitous wireless connectivity and diverse vertical applications and services, anytime and anywhere. Wireless power transfer (WPT) is another promising technology for prolonging the operation time of low-power wireless devices in the era of the Internet of Things (IoT). However, the integration of WPT and UAV-enabled MEC systems is far from well studied, especially in dynamic environments. To tackle this issue, this paper investigates stochastic computation offloading and trajectory scheduling for the UAV-enabled wireless-powered MEC system. A UAV offers both RF wireless power transmission and computation services for IoT devices. Considering stochastic task arrivals and random channel conditions, a long-term average energy efficiency (EE) minimization problem is formulated. Due to the non-convexity and time-domain coupling of the variables in the formulated problem, a low-complexity online computation offloading and trajectory scheduling algorithm (OCOTSA) is proposed by exploiting Lyapunov optimization. Simulation results verify that there exists a balance between EE and the service delay, and demonstrate that the system EE performance obtained by the proposed scheme outperforms that of other benchmark schemes.
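Lyapunov drift-plus-penalty algorithms of the kind behind OCOTSA turn a long-term average objective into a greedy per-slot decision driven by a virtual queue. A toy one-slot sketch (the per-slot energy budget of 1.0 and the trade-off weight V are illustrative assumptions, not the paper's parameters):

```python
def lyapunov_slot(Q, options, V=10.0):
    """One drift-plus-penalty slot.

    Q: current virtual energy queue backlog. options: list of candidate
    actions as (energy, delay) pairs for this slot. The slot decision
    minimizes V * penalty + Q * resource_use, then the queue absorbs the
    difference between consumption and a per-slot budget (1.0 here).
    """
    budget = 1.0
    energy, delay = min(options, key=lambda o: V * o[1] + Q * o[0])
    Q_next = max(Q + energy - budget, 0.0)
    return Q_next, (energy, delay)
```

When the queue is empty the rule chases pure delay performance; as the backlog grows, energy-hungry actions become expensive and the policy self-corrects, which is the balance between EE and service delay that the simulations in the abstract report.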