With the growth of smart cities, many real-time systems have been developed to support them using the Internet of Things (IoT) and emerging technologies. These systems are designed to collect data for environment monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems that manipulate network data and provide on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise the network's security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a Multi-Agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it improves the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. In simulations, the proposed framework outperforms state-of-the-art approaches, improving energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and denial-of-service attacks by 42%.
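As a hedged illustration of the multi-metric trust evaluation this abstract refers to, the Python sketch below combines several per-neighbor metrics into a weighted trust score used to rank next-hop candidates. The metric names, weights, and threshold are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical multi-metric trust scoring for next-hop selection.
# Metric names, weights, and the 0.6 threshold are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class NodeMetrics:
    delivery_ratio: float   # fraction of packets successfully forwarded (0..1)
    residual_energy: float  # normalized remaining battery (0..1)
    link_stability: float   # 1 - observed interference/loss on the link (0..1)
    response_rate: float    # fraction of control messages answered on time (0..1)

WEIGHTS = {"delivery_ratio": 0.35, "residual_energy": 0.25,
           "link_stability": 0.25, "response_rate": 0.15}

def trust_score(m: NodeMetrics) -> float:
    """Weighted combination of per-node metrics into a single trust value."""
    return (WEIGHTS["delivery_ratio"] * m.delivery_ratio
            + WEIGHTS["residual_energy"] * m.residual_energy
            + WEIGHTS["link_stability"] * m.link_stability
            + WEIGHTS["response_rate"] * m.response_rate)

def trusted_next_hops(neighbors, threshold=0.6):
    """Keep neighbors whose composite trust clears the threshold, sorted so the
    routing layer can prefer the most trusted relay."""
    scored = {nid: trust_score(m) for nid, m in neighbors.items()}
    return sorted((nid for nid, s in scored.items() if s >= threshold),
                  key=lambda nid: scored[nid], reverse=True)
```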
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) covering three major rice diseases frequently found in actual rice cultivation fields (Rice Blast, Bacterial Blight, and Brown Spot) were collected and served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP 0.5 of 89.6%, mAP 0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB but showed lower performance, with an mAP 0.5 of 70.3%, mAP 0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Our prior study focused on the development of an Internet of Things (IoT)- and edge-compute-enabled crop physiology sensing system (CPSS) for apple sunburn monitoring. The edge-compute algorithm on the CPSS estimated sunburn susceptibility as fruit surface temperature (FST) through pixel-by-pixel multiplication of captured thermal infrared images with a segmented-fruit binary mask. The segmentation was performed using a color-based K-means clustering approach, which limited the CPSS to monitoring sunburn of red-colored cultivars only, and only once fruits develop color, typically late in the growing season. This is a key research gap, as recent weather patterns have shown that sunburn can occur early in the growing season when fruits are still green to yellow. Therefore, the aim of this study was to develop and field-evaluate a cultivar- and color-independent mask region-based convolutional neural network (mask R-CNN)-aided fruit segmentation model and an edge-compute-compatible FST estimation algorithm. Season-long field data were collected in 2021 using eight CPSS nodes (three in cv. WA38 [Cosmic Crisp] and five in cv. Honeycrisp). The collected data were used to develop and validate the mask R-CNN-based fruit segmentation model, which was able to segment fruits of two apple cultivars and of varying colors with 91.4% average precision. In orchard evaluations (2022 season), the resulting algorithm ported onto the CPSS accurately segmented fruits (dice similarity coefficient = 0.89) and estimated apple FST with <0.5℃ error compared to ground-truth data. With a compute time of about 37 s, data processing time was reduced by 22% over the previous algorithm. High ambient temperature (>35℃) on a warmer day resulted in multiple throttling errors caused by excessive CPU temperature; however, CPSS performance in FST estimation was uncompromised. Ambient air temperature did not affect RAM utilization or CPU clock frequency. Overall, the developed FST algorithm can potentially be used as an input to actuate a water-based cooling system.
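The FST estimation step described above (pixel-by-pixel multiplication of a thermal image with a binary fruit mask) can be sketched in a few lines of NumPy. The function below is an illustrative reconstruction under the assumption that the mask comes from the segmentation model and that averaging over fruit pixels is the desired summary; the paper's exact aggregation is not specified here.

```python
import numpy as np

def estimate_fst(thermal_image: np.ndarray, fruit_mask: np.ndarray) -> float:
    """Pixel-by-pixel multiplication of a thermal image (degrees C) with a
    binary fruit mask, then averaging over fruit pixels only (assumption)."""
    masked = thermal_image * fruit_mask            # non-fruit pixels become 0
    fruit_pixels = masked[fruit_mask.astype(bool)]  # keep only fruit pixels
    return float(fruit_pixels.mean()) if fruit_pixels.size else float("nan")

# Toy usage: a 3x3 thermal frame with a 2-pixel fruit region.
thermal = np.array([[28.0, 29.5, 30.1], [31.2, 42.7, 43.1], [29.0, 30.0, 28.5]])
mask = np.array([[0, 0, 0], [0, 1, 1], [0, 0, 0]])
print(estimate_fst(thermal, mask))   # -> 42.9
```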
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
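A minimal sketch of the priority-based ordering stage, before any offloading decision is made, could look like the following. The task fields and the convention that a lower value means higher priority are assumptions; the DDQN/DDPG components of the paper are not shown.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: float                              # lower value = served first (assumption)
    task_id: str = field(compare=False)
    cpu_cycles: float = field(compare=False)     # required computation
    deadline_s: float = field(compare=False)     # latency budget

def build_schedule(tasks):
    """Yield tasks in priority order before the offloading agent places each one."""
    heap = list(tasks)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

tasks = [Task(0.7, "t3", 2e8, 0.5), Task(0.1, "t1", 5e8, 0.05), Task(0.4, "t2", 1e8, 0.2)]
print([t.task_id for t in build_schedule(tasks)])   # -> ['t1', 't2', 't3']
```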
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
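Since the decoupled problem reduces to a knapsack instance, a standard 0/1 knapsack dynamic program conveys the idea. The mapping below (maximize local energy saved by offloaded tasks under an integer CPU-capacity budget) is an illustrative simplification, not the paper's exact formulation or either of its two solution algorithms.

```python
def knapsack_offload(energy_saved, cpu_cost, capacity):
    """Classic 0/1 knapsack DP: pick the subset of tasks to offload that maximizes
    total local energy saved without exceeding the edge server's CPU capacity
    (integer units). The offloading-to-knapsack mapping is an assumption."""
    n = len(energy_saved)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                       # task i-1 stays local
            if cpu_cost[i - 1] <= c:                      # or is offloaded
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - cpu_cost[i - 1]] + energy_saved[i - 1])
    chosen, c = [], capacity                              # backtrack chosen tasks
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= cpu_cost[i - 1]
    return dp[n][capacity], sorted(chosen)

best, offloaded = knapsack_offload([4.0, 2.5, 6.0], [3, 2, 4], capacity=6)
print(best, offloaded)   # -> 8.5 [1, 2]
```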
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
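To make the DAG-based delay/energy objective concrete, the sketch below evaluates one candidate offloading decision by walking the task DAG in topological order and forming a weighted sum of makespan and energy. The timing model (no resource contention, fixed per-task costs) and the weighted-sum scalarization are simplifying assumptions; the paper's seq2seq/MRL and NSGA-II machinery is not reproduced.

```python
# Minimal evaluator for one offloading decision over a DAG-structured application.
# All dictionaries and the weighted-sum objective are illustrative assumptions.
from graphlib import TopologicalSorter

def evaluate(dag, local_time, edge_time, tx_time, local_energy, tx_energy,
             decision, w_delay=0.5, w_energy=0.5):
    """dag: {task: set(predecessors)}; decision[task] is 'local' or 'edge'."""
    finish = {}
    for task in TopologicalSorter(dag).static_order():
        ready = max((finish[p] for p in dag.get(task, ())), default=0.0)
        if decision[task] == "local":
            dur = local_time[task]
        else:
            dur = tx_time[task] + edge_time[task]   # upload, then remote execution
        finish[task] = ready + dur
    delay = max(finish.values())                    # makespan of the DAG
    energy = sum(local_energy[t] if decision[t] == "local" else tx_energy[t]
                 for t in decision)
    return w_delay * delay + w_energy * energy
```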
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures, and evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems and offers essential guidance for addressing upcoming management opportunities in SDN-enabled computing environments.
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. Prior works lack explainability, struggle with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In the proposed study, we suggest an Impact-Aware Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also taxonomy-based supervised classification that groups 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even in the case of extreme imbalance between the classes. SHAP-based XAI was performed on every model to provide transparency and trust and to identify the important features that drive the classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. The framework is highly accurate, interpretable, and applicable in real time, hence a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
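A hedged sketch of the Random Forest plus SHAP workflow follows; the dataset path, column names, and split settings are placeholders rather than the paper's setup, and the XGBoost/MLP counterparts are omitted.

```python
# Illustrative RF + SHAP pipeline; "iiot_traffic.csv" and "attack_category"
# are placeholder names, not the paper's dataset schema.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("iiot_traffic.csv")
X, y = df.drop(columns=["attack_category"]), df["attack_category"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                           stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             n_jobs=-1, random_state=42).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# TreeExplainer yields per-feature contributions for each prediction,
# surfacing drivers such as inter-arrival time or TCP flags.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te.iloc[:500])
shap.summary_plot(shap_values, X_te.iloc[:500])
```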
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance using models such as Random Forest and Logistic Regression, while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. The study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations, offering an efficient and scalable security solution that provides comprehensive protection for edge computing environments.
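The DNA-encoding idea can be illustrated with a simple 2-bit-per-base mapping from bytes to the alphabet {A, C, G, T}. The actual GENOME standard and compressed encodings are not specified in the abstract, so the mapping below is an assumption meant only to show the representation, not to reproduce the reported 42% size reduction.

```python
# Illustrative 2-bit-per-base DNA encoding of malware byte sequences (assumption).
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):                 # four 2-bit groups per byte
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    vals = [BASES.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

sample = b"\x4d\x5a\x90\x00"                       # e.g., a PE header prefix
encoded = bytes_to_dna(sample)
assert dna_to_bytes(encoded) == sample             # round-trip check
print(encoded)                                     # -> CATCCCGGGCAAAAAA
```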
An aileron is a crucial control surface for rolling. Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft's stability, maneuverability, safety, and lifespan. This paper presents a robust solution in the form of a fast flutter suppression digital control logic of edge computing aileron mechatronics (ECAM). We have effectively eliminated passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger. Our findings demonstrate that self-tuning nonlinear parameters can optimize stability, robustness, and accuracy. At the same time, the antiphase hysteresis Schmitt trigger effectively rejects flutters without the need for collaborative navigation and guidance. Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring expected stability and maneuverability. In conclusion, this nonlinear aileron mechatronics with a Schmitt positive feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
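A minimal software model of a hysteresis (Schmitt-trigger) element shows why small oscillations between the two thresholds do not flip the output, which is the flutter-rejection intuition. The thresholds and sign convention are illustrative, not the paper's ECAM parameters.

```python
# Software model of a Schmitt trigger: the output changes state only on
# threshold crossings, so flutter-band noise between the thresholds is ignored.
class SchmittTrigger:
    def __init__(self, low: float, high: float, state: bool = False):
        assert low < high
        self.low, self.high, self.state = low, high, state

    def update(self, x: float) -> bool:
        if x >= self.high:
            self.state = True
        elif x <= self.low:
            self.state = False
        # between the thresholds the previous state is held (hysteresis)
        return self.state

trig = SchmittTrigger(low=-0.2, high=0.2)          # illustrative thresholds
signal = [0.05, 0.25, 0.1, -0.05, -0.3, -0.1, 0.15]
print([trig.update(s) for s in signal])            # flips only at +/-0.2 crossings
```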
With the rapid advancement of ICT and IoT technologies, the integration of Edge and Fog Computing has become essential to meet the increasing demands for real-time data processing and network efficiency. However, these technologies face critical security challenges, exacerbated by the emergence of quantum computing, which threatens traditional encryption methods. The rise in cyber-attacks targeting IoT and Edge/Fog networks underscores the need for robust, quantum-resistant security solutions. To address these challenges, researchers are focusing on Quantum Key Distribution (QKD) and Post-Quantum Cryptography (PQC), which utilize quantum-resistant algorithms and the principles of quantum mechanics to ensure data confidentiality and integrity. This paper reviews the current security practices in IoT and Edge/Fog environments, explores the latest advancements in QKD and PQC technologies, and discusses their integration into distributed computing systems. Additionally, this paper proposes an enhanced QKD protocol combining the Cascade protocol and the Kyber algorithm to address existing limitations. Finally, we highlight future research directions aimed at improving the scalability, efficiency, and practicality of QKD and PQC for securing IoT and Edge/Fog networks against evolving quantum threats.
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges, and EC has become a transformative paradigm for addressing them, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker and defender interactions for dynamic threat-level adaptation and resource availability, capturing a strategic layout between attackers and defenders that takes uncertainty into account. DDQL is incorporated to optimize decision-making and aids in learning optimal defense policies at the edge, thereby ensuring policy and decision optimization. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves high detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
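As a stand-in for the DDQL component, the tabular double Q-learning update below shows the core trick: one value estimate selects the greedy action while the other evaluates it, reducing overestimation bias. The state/action encoding and learning rates are assumptions, and the deep, federated version used in the paper is not reproduced.

```python
# Tabular double Q-learning update (illustrative; not the paper's deep model).
import random
from collections import defaultdict

alpha, gamma = 0.1, 0.95                           # learning rate, discount (assumed)
Q_a = defaultdict(lambda: defaultdict(float))
Q_b = defaultdict(lambda: defaultdict(float))

def double_q_update(state, action, reward, next_state, actions):
    # Randomly pick which table learns; the other evaluates the greedy action.
    learner, evaluator = (Q_a, Q_b) if random.random() < 0.5 else (Q_b, Q_a)
    best_next = max(actions, key=lambda a: learner[next_state][a])
    target = reward + gamma * evaluator[next_state][best_next]
    learner[state][action] += alpha * (target - learner[state][action])

# Example step: defender at the edge chooses "isolate_device" and observes a reward.
double_q_update(state="high_threat", action="isolate_device", reward=1.0,
                next_state="low_threat", actions=["monitor", "isolate_device"])
```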
With the rapid development of technologies such as Artificial Intelligence (AI), edge computing, and cloud intelligence, the medical field is undergoing a fundamental transformation [1]. These technologies significantly enhance the medical system's capability to process complex data and also improve the real-time response rate to patient needs. In this wave of technological innovation, parallel intelligence, along with the Artificial systems, Computational experiments, and Parallel execution (ACP) approach [2], will play a crucial role. Through parallel interactions between virtual and real systems, this approach optimizes the functionality of medical devices and instruments, enhancing the accuracy of diagnoses and treatments while enabling the autonomous evolution and adaptive adjustment of medical systems.
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC, which has better coverage. Although access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) situations. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) in 5G HNs, is proposed in this paper. A unique dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
The Internet of Things (IoT) and allied applications have made real-time responsiveness for massive numbers of devices over the Internet essential. Cloud-edge/fog ensembles handle such applications' computations. For Beyond 5th Generation (B5G) communication paradigms, Edge Servers (ESs) must be placed within Information Communication Technology infrastructures to meet Quality of Service requirements like response time and resource utilisation. Due to the large number of Base Stations (BSs) and ESs, and the possibility of significant variations in placing the ESs within the IoT's geographical expanse for optimising multiple objectives, the Edge Server Placement Problem (ESPP) is NP-hard, so stochastic evolutionary metaheuristics are a natural fit. This work addresses the ESPP using a Particle Swarm Optimization that initialises particles as BS positions within the geography to maintain the workload while scanning through all feasible sets of BSs as an encoded sequence. The Workload-Threshold Aware Sequence Encoding (WTASE) scheme for the ESPP provides the number of ESs to be deployed, similar to existing methodologies, as well as the exact locations for their placement, without the overhead of maintaining a prohibitively large distance matrix. Simulation tests using open-source datasets show that the suggested technique improves ES utilisation rate, workload balance, and average energy consumption by 36%, 17%, and 32%, respectively, compared to prior works.
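A bare-bones continuous PSO for server placement conveys the metaheuristic; note that it minimizes only the mean BS-to-nearest-server distance over toy coordinates, whereas the paper's WTASE scheme instead encodes particles as workload-threshold-aware sequences of candidate BS positions.

```python
# Minimal continuous PSO placing K edge servers among base stations (toy data).
import numpy as np

rng = np.random.default_rng(0)
bs = rng.uniform(0, 100, size=(60, 2))       # base-station coordinates (illustrative)
K, swarm, iters = 5, 20, 200

def fitness(servers):                         # servers: (K, 2) candidate positions
    d = np.linalg.norm(bs[:, None, :] - servers[None, :, :], axis=2)
    return d.min(axis=1).mean()               # average distance to nearest server

pos = rng.uniform(0, 100, size=(swarm, K, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best mean BS-to-server distance:", round(float(pbest_f.min()), 2))
```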
The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities. They handle time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors to facilitate the operations of crucial healthcare services. Recently, many interactive applications have been proposed, including integrating intelligent systems to handle data processing and enable dynamic communication functionalities for crucial IoT services. Nonetheless, most solutions lack optimized relaying methods and impose excessive overheads for maintaining devices' connectivity. Data integrity and trust are another vital consideration for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at network edges to enhance energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters. Furthermore, mobile devices integrate with the proposed model to sustain trusted routes with lightweight privacy preservation and authentication. The proposed model was evaluated in a simulation-based environment and showed exceptional improvements in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
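The graph-based route selection can be sketched as a shortest-path search over trust- and load-derived edge costs; the cost definition and toy graph below are assumptions for illustration, not the paper's optimization model.

```python
# Dijkstra over an edge-cost graph where lower cost means a more trusted,
# less loaded link (cost definition is an assumption for illustration).
import heapq

def best_path(graph, src, dst):
    """graph[u][v] = cost, e.g. (1 - trust_uv) + load penalty of v."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

graph = {"A": {"B": 0.3, "C": 0.7}, "B": {"D": 0.4}, "C": {"D": 0.2}}
print(best_path(graph, "A", "D"))   # -> (['A', 'B', 'D'], 0.7)
```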
With miscellaneous applications generated in vehicular networks, the computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, where tasks are processed at the local side, at the macro-cell base station, or at a road side unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
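A toy version of the latency/energy utility comparison (process locally versus offload over one band) is sketched below; the rates, powers, and weights are placeholders, and the paper's iterative optimization over spectrum, power, and scheduling is not shown.

```python
# Illustrative weighted latency/energy utility for a single task; all parameter
# values are placeholders, not the paper's system model.
def utility(latency_s: float, energy_j: float, w_t: float = 0.6, w_e: float = 0.4):
    return w_t * latency_s + w_e * energy_j        # smaller is better

def offload_choice(task_bits, task_cycles, f_local, p_local,
                   rate_bps, p_tx, f_server):
    options = {
        "local": utility(task_cycles / f_local,
                         p_local * task_cycles / f_local),
        "offload": utility(task_bits / rate_bps + task_cycles / f_server,
                           p_tx * task_bits / rate_bps),
    }
    return min(options, key=options.get), options

choice, opts = offload_choice(task_bits=2e6, task_cycles=1e9,
                              f_local=1e9, p_local=0.8,
                              rate_bps=50e6, p_tx=0.2, f_server=10e9)
print(choice, opts)   # offloading wins under these toy numbers
```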
Existing wireless networks are flooded with video data transmissions, and the demand for high-speed and low-latency video services continues to surge. This has brought challenges to networks in the form of congestion, as well as the need for more resources and more dedicated caching schemes. Recently, Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have emerged as a promising solution to all of these problems. Designing an effective edge caching scheme is critical to its success, however, in the face of limited resources. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover video relations, providing valuable insights into user preferences for the caching scheme. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), thus providing a rich reference for implementing a personalized caching scheme while also improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence effectiveness of KG-DDQN, and it also outperforms baselines regarding cache hit rate and service delay.
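The dueling aggregation at the heart of a Dueling DQN can be written in a couple of lines; in the sketch below the candidate actions are assumed to be the KG-selected videos, and the network layers that produce the value and advantage estimates are omitted.

```python
# Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a').
# Candidate video IDs and advantage values are illustrative placeholders.
import numpy as np

def dueling_q(value: float, advantages: np.ndarray) -> np.ndarray:
    """Rebuild Q-values from a state value and advantages (identifiability trick)."""
    return value + advantages - advantages.mean()

candidate_videos = ["v12", "v57", "v88"]          # actions proposed by the KG
adv = np.array([0.8, -0.1, 0.3])                  # advantage-head outputs
q = dueling_q(value=1.5, advantages=adv)
best = candidate_videos[int(q.argmax())]
print(dict(zip(candidate_videos, q.round(3))), "-> cache", best)
```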
This paper investigates mobility-aware online optimization for digital twin (DT)-assisted task execution in edge computing environments. In such systems, DTs, hosted on edge servers (ESs), require proactive migration to maintain proximity to their mobile physical twin (PT) counterparts. To minimize task response latency under a stringent energy consumption constraint, we jointly optimize three key components: the status data uploading frequency from the PT, the DT migration decisions, and the allocation of computational and communication resources. To address the asynchronous nature of these decisions, we propose a novel two-timescale mobility-aware online optimization (TMO) framework. The TMO scheme leverages an extended two-timescale Lyapunov optimization framework to decompose the long-term problem into sequential subproblems. At the larger timescale, a multi-armed bandit (MAB) algorithm is employed to dynamically learn the optimal status data uploading frequency. Within each shorter timescale, we first employ a gated recurrent unit (GRU)-based predictor to forecast the PT's trajectory. Based on this prediction, an alternate minimization (AM) algorithm is then utilized to solve for the DT migration and resource allocation variables. Theoretical analysis confirms that the proposed TMO scheme is asymptotically optimal. Furthermore, simulation results demonstrate its significant performance gains over existing benchmark methods.
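The larger-timescale bandit step can be illustrated with a UCB1 learner choosing among a few candidate upload frequencies; the frequency set and the toy reward (trading staleness against upload cost) are assumptions, not the paper's model.

```python
# UCB1 bandit over candidate PT status-upload frequencies (illustrative reward).
import math, random

arms = [1, 2, 5, 10]          # candidate uploads per second (assumed set)
counts = [0] * len(arms)
values = [0.0] * len(arms)    # running mean reward per arm

def select_arm(t: int) -> int:
    for i, c in enumerate(counts):
        if c == 0:
            return i                               # play every arm once first
    ucb = [values[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(len(arms))]
    return max(range(len(arms)), key=ucb.__getitem__)

def update(i: int, reward: float):
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]

for t in range(1, 501):
    i = select_arm(t)
    freq = arms[i]
    # Toy trade-off: low frequency -> stale DT state, high frequency -> upload cost.
    reward = -1.0 / freq - 0.05 * freq + random.gauss(0, 0.05)
    update(i, reward)

print("learned best frequency:",
      arms[max(range(len(arms)), key=values.__getitem__)])
```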
基金supported by the Deanship of Graduate Studies and Scientific Research at Jouf University.
文摘Due to the growth of smart cities,many real-time systems have been developed to support smart cities using Internet of Things(IoT)and emerging technologies.They are formulated to collect the data for environment monitoring and automate the communication process.In recent decades,researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations.However,the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity.These systems are vulnerable to a variety of cyberattacks,including unauthorized access,denial-of-service attacks,and data leakage,which compromise the network’s security.Additionally,uneven load balancing between mobile IoT devices,which frequently experience link interferences,compromises the trustworthiness of the system.This paper introduces a Multi-Agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks,aiming to leverage artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats.Moreover,it enhances the efficiency of distributed sensors for energy consumption through intelligent data analytics techniques,resulting in highly consistent and low-latency network communication.Using simulations,the proposed framework reveals its significant performance compared to state-of-the-art approaches for energy consumption by 43%,latency by 46%,network throughput by 51%,packet loss rate by 40%,and denial of service attacks by 42%.
文摘This study proposes a lightweight rice disease detection model optimized for edge computing environments.The goal is to enhance the You Only Look Once(YOLO)v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency.To this end,a total of 3234 high-resolution images(2400×1080)were collected from three major rice diseases Rice Blast,Bacterial Blight,and Brown Spot—frequently found in actual rice cultivation fields.These images served as the training dataset.The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone,thereby resulting in both model compression and improved inference speed.Additionally,YOLOv5-P,based on PP-PicoDet,was configured as a comparative model to quantitatively evaluate performance.Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance,with an mAP 0.5 of 89.6%,mAP 0.5–0.95 of 66.7%,precision of 91.3%,and recall of 85.6%,while maintaining a lightweight model size of 6.45 MB.In contrast,YOLOv5-P exhibited a smaller model size of 4.03 MB,but showed lower performance with an mAP 0.5 of 70.3%,mAP 0.5–0.95 of 35.2%,precision of 62.3%,and recall of 74.1%.This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
基金funded in part by USDA-NIFA/NSF Cyber-Physical Systems program,Washington Tree Fruit Research Commission,and WNP0745 projects.
文摘Our prior study focused on development of internet of things(IoT)and edge-compute enabled crop physiology sensing system(CPSS)for apple sunburn monitoring.Edge compute algorithm on CPSS estimated sunburn susceptibility as fruit surface temperature(FST)through pixel-by-pixel multiplication of captured thermal infrared images with segmented fruits binary mask.The segmentation was performed using color-based K means clustering approach.This limited CPSS applicability to monitor sunburn of red colored cultivars only and when fruits develop color,typically late growing season.This is a key research gap as recent weather patterns have shown that sunburn can occur during early growing season when fruits are green to yellow.Therefore,aim of this study was to develop and field evaluate cultivar and color independent mask region-convolution neural network(R-CNN)aided fruit segmentation model and edge compute compatible FST estimation algorithm.Season long field data were collected in 2021 using eight CPSS nodes(three in cv.WA38[Cosmic crisp]and five in cv.Honeycrisp).Collected data were used to develop and validate mask R-CNN based fruit segmentation model.Developed mask R-CNN based model was able to segment fruits of two apple cultivars and of varying colors with 91.4%average precision.In orchard evaluations(2022 season),the resulting algorithm ported on CPSS was able to accurately segment(dice similarity coefficient=0.89)and estimate apple FST with<0.5℃error compared to ground truth data.With compute time of about 37 s,data processing time was reduced by 22%over previous algorithm.High ambient temperature(>35℃)on a warmer day resulted in multiple throttling errors caused by excessive CPU temperature;however,the CPSS performance was uncompromised in FST estimation.Ambient air temperature did not affect RAM utilization and CPU clock frequency.Overall,developed FST algorithm can potentially be used as input to actuate water-based cooling system.
文摘Satellite edge computing has garnered significant attention from researchers;however,processing a large volume of tasks within multi-node satellite networks still poses considerable challenges.The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers,making it necessary to implement effective task offloading scheduling to enhance user experience.In this paper,we propose a priority-based task scheduling strategy based on a Software-Defined Network(SDN)framework for satellite-terrestrial integrated networks,which clarifies the execution order of tasks based on their priority.Subsequently,we apply a Dueling-Double Deep Q-Network(DDQN)algorithm enhanced with prioritized experience replay to derive a computation offloading strategy,improving the experience replay mechanism within the Dueling-DDQN framework.Next,we utilize the Deep Deterministic Policy Gradient(DDPG)algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks.Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches,effectively reducing task processing latency and thus improving user experience and system efficiency.
基金supported in part by the National Natural Science Foundation of China under Grant No.61473066in part by the Natural Science Foundation of Hebei Province under Grant No.F2021501020+2 种基金in part by the S&T Program of Qinhuangdao under Grant No.202401A195in part by the Science Research Project of Hebei Education Department under Grant No.QN2025008in part by the Innovation Capability Improvement Plan Project of Hebei Province under Grant No.22567637H
文摘Recently,one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components(such as monitoring systems for renewable energy power stations).To solve the problem,we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for smart grid.First,we formulate an energy consumption minimization problem with regard to task offloading,time switching,and resource allocation for mobile devices,which can be decoupled and transformed into a typical knapsack problem.Then,solutions are derived by two different algorithms.Furthermore,we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems.Finally,we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies,number of energy storage units,and renewable energy utilization.The simulation results show the efficiency and superiority of our proposed framework.
基金funded by the Fundamental Research Funds for the Central Universities(J2023-024,J2023-027).
文摘As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
基金the National Research Foundation(NRF)Singapore mid-sized center grant(NRF-MSG-2023-0002)FrontierCRP grant(NRF-F-CRP-2024-0006)+2 种基金A*STAR Singapore MTC RIE2025 project(M24W1NS005)IAF-PP project(M23M5a0069)Ministry of Education(MOE)Singapore Tier 2 project(MOE-T2EP50220-0014).
文摘The rise of large-scale artificial intelligence(AI)models,such as ChatGPT,Deep-Seek,and autonomous vehicle systems,has significantly advanced the boundaries of AI,enabling highly complex tasks in natural language processing,image recognition,and real-time decisionmaking.However,these models demand immense computational power and are often centralized,relying on cloud-based architectures with inherent limitations in latency,privacy,and energy efficiency.To address these challenges and bring AI closer to real-world applications,such as wearable health monitoring,robotics,and immersive virtual environments,innovative hardware solutions are urgently needed.This work introduces a near-sensor edge computing(NSEC)system,built on a bilayer AlN/Si waveguide platform,to provide real-time,energy-efficient AI capabilities at the edge.Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction,coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations,the system represents a transformative approach to AI hardware design.Demonstrated through multimodal gesture and gait analysis,the NSEC system achieves high classification accuracies of 96.77%for gestures and 98.31%for gaits,ultra-low latency(<10 ns),and minimal energy consumption(<0.34 pJ).This groundbreaking system bridges the gap between AI models and real-world applications,enabling efficient,privacy-preserving AI solutions for healthcare,robotics,and next-generation human-machine interfaces,marking a pivotal advancement in edge computing and AI deployment.
文摘Efficient resource provisioning,allocation,and computation offloading are critical to realizing lowlatency,scalable,and energy-efficient applications in cloud,fog,and edge computing.Despite its importance,integrating Software Defined Networks(SDN)for enhancing resource orchestration,task scheduling,and traffic management remains a relatively underexplored area with significant innovation potential.This paper provides a comprehensive review of existing mechanisms,categorizing resource provisioning approaches into static,dynamic,and user-centric models,while examining applications across domains such as IoT,healthcare,and autonomous systems.The survey highlights challenges such as scalability,interoperability,and security in managing dynamic and heterogeneous infrastructures.This exclusive research evaluates how SDN enables adaptive policy-based handling of distributed resources through advanced orchestration processes.Furthermore,proposes future directions,including AI-driven optimization techniques and hybrid orchestrationmodels.By addressing these emerging opportunities,thiswork serves as a foundational reference for advancing resource management strategies in next-generation cloud,fog,and edge computing ecosystems.This survey concludes that SDN-enabled computing environments find essential guidance in addressing upcoming management opportunities.
基金funded by the Committee of Science of the Ministry of Science and Higher Education of the Republic of Kazakhstan(Grant No.AP23489127)。
文摘The Industrial Internet of Things(IIoT),combined with the Cyber-Physical Systems(CPS),is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems.There is a lack of explainability,challenges with imbalanced attack classes,and limited consideration of practical edge–cloud deployment strategies in prior works.In the proposed study,we suggest an Impact-Aware Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations(SHAP)-based Explainable AI(XAI)to attack detection and classification in IIoT-CPS settings.It includes not only unsupervised clustering(K-Means and DBSCAN)to extract latent traffic patterns but also supervised classification based on taxonomy to classify 33 different kinds of attacks into seven high-level categories:Flood Attacks,Botnet/Mirai,Reconnaissance,Spoofing/Man-In-The-Middle(MITM),Injection Attacks,Backdoors/Exploits,and Benign.The three machine learning algorithms,Random Forest,XGBoost,and Multi-Layer Perceptron(MLP),were trained on a realworld dataset of more than 1 million network traffic records,with overall accuracy of 99.4%(RF),99.5%(XGBoost),and 99.1%(MLP).Rare types of attacks,such as injection attacks and backdoors,were examined even in the case of extreme imbalance between the classes.SHAP-based XAI was performed on every model to help gain transparency and trust in the model and identify important features that drive the classification decisions,such as inter-arrival time,TCP flags,and protocol type.A workable edge-computing implementation strategy is proposed,whereby lightweight computing is performed at the edge devices and heavy,computation-intensive analytics is performed at the cloud.This framework is highly accurate,interpretable,and has real-time application,hence a robust and scalable solution to securing IIoT-CPS infrastructure against dynamic cyber-attacks.
基金supported by the Institute of Information&Communications Technology Planning&Evaluation(IITP)(Project Nos.RS-2024-00438551,30%,2022-11220701,30%,2021-0-01816,30%)the National Research Foundation of Korea(NRF)grant funded by the Korean Government(Project No.RS2023-00208460,10%).
文摘The proliferation of Internet of Things(IoT)devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing.Nevertheless,the distributed nature of edge computing presents substantial security challenges,rendering it a prominent target for sophisticated malware attacks.Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources.This paper suggests the Genetic Encoding for Novel Optimization of Malware Evaluation(GENOME)framework,a novel solution that is intended to improve the performance of malware detection and classification in peripheral computing environments.GENOME optimizes data storage and computa-tional efficiency by converting malware artifacts into compact,structured sequences through a Deoxyribonucleic Acid(DNA)encoding mechanism.The framework employs two DNA encoding algorithms,standard and compressed,which substantially reduce data size while preserving high detection accuracy.The Edge-IIoTset dataset was used to conduct experiments that showed that GENOME was able to achieve high classification performance using models such as Random Forest and Logistic Regression,resulting in a reduction of data size by up to 42%.Further evaluations with the CIC-IoT-23 dataset and Deep Learning models confirmed GENOME’s scalability and adaptability across diverse datasets and algorithms.The potential of GENOME to address critical challenges,such as the rapid mutation of malware,real-time processing demands,and resource limitations,is emphasized in this study.GENOME offers comprehensive protection for peripheral computing environments by offering a security solution that is both efficient and scalable.
基金supported in part by the Aeronautical Science Foundation of China under Grant 2022Z005057001the Joint Research Fund of Shanghai Commercial Aircraft System Engineering Science and Technology Innovation Center under CASEF-2023-M19.
文摘An aileron is a crucial control surface for rolling.Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft’s stability,maneuverability,safety,and lifespan.This paper presents a robust solution in the form of a fast flutter suppression digital control logic of edge computing aileron mechatronics(ECAM).We have effectively eliminated passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger.Our findings demonstrate that self-tuning nonlinear parameters can optimize stability,robustness,and accuracy.At the same time,the antiphase hysteresis Schmitt trigger effectively rejects flutters without the need for collaborative navigation and guidance.Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring expected stability and maneuverability.In conclusion,this nonlinear aileron mechatronics with a Schmitt positive feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
基金supported by the National Research Foundation of Korea(NRF)funded by theMinistry of Science and ICT(2022K1A3A1A61014825)。
文摘With the rapid advancement of ICT and IoT technologies,the integration of Edge and Fog Computing has become essential to meet the increasing demands for real-time data processing and network efficiency.However,these technologies face critical security challenges,exacerbated by the emergence of quantum computing,which threatens traditional encryption methods.The rise in cyber-attacks targeting IoT and Edge/Fog networks underscores the need for robust,quantum-resistant security solutions.To address these challenges,researchers are focusing on Quantum Key Distribution and Post-Quantum Cryptography,which utilize quantum-resistant algorithms and the principles of quantum mechanics to ensure data confidentiality and integrity.This paper reviews the current security practices in IoT and Edge/Fog environments,explores the latest advancements in QKD and PQC technologies,and discusses their integration into distributed computing systems.Additionally,this paper proposes an enhanced QKD protocol combining the Cascade protocol and Kyber algorithm to address existing limitations.Finally,we highlight future research directions aimed at improving the scalability,efficiency,and practicality of QKD and PQC for securing IoT and Edge/Fog networks against evolving quantum threats.
基金The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through the Large Group Project under grant number(RGP2/337/46)The research team thanks the Deanship of Graduate Studies and Scientific Research at Najran University for supporting the research project through the Nama’a program,with the project code NU/GP/SERC/13/352-4.
文摘Edge computing(EC)combined with the Internet of Things(IoT)provides a scalable and efficient solution for smart homes.Therapid proliferation of IoT devices poses real-time data processing and security challenges.EC has become a transformative paradigm for addressing these challenges,particularly in intrusion detection and anomaly mitigation.The widespread connectivity of IoT edge networks has exposed them to various security threats,necessitating robust strategies to detect malicious activities.This research presents a privacy-preserving federated anomaly detection framework combined with Bayesian game theory(BGT)and double deep Q-learning(DDQL).The proposed framework integrates BGT to model attacker and defender interactions for dynamic threat level adaptation and resource availability.It also models a strategic layout between attackers and defenders that takes into account uncertainty.DDQL is incorporated to optimize decision-making and aids in learning optimal defense policies at the edge,thereby ensuring policy and decision optimization.Federated learning(FL)enables decentralized and unshared anomaly detection for sensitive data between devices.Data collection has been performed from various sensors in a real-time EC-IoT network to identify irregularities that occurred due to different attacks.The results reveal that the proposed model achieves high detection accuracy of up to 98%while maintaining low resource consumption.This study demonstrates the synergy between game theory and FL to strengthen anomaly detection in EC-IoT networks.
基金supported by the Science and Technology Development Fund,Macao Special Administrative Region(SAR)(0093/2023/RIA2,0145/2023/RIA3).
文摘WITH the rapid development of technologies such as Artificial Intelligence(AI),edge computing,and cloud intelligence,the medical field is undergoing a fundamental transformation[1].These technologies significantly enhance the medical system's capability to process complex data and also improve the real-time response rate to patient needs.In this wave of technological innovation,parallel intelligence,along with Artificial systems,Computational experiments,and Parallel execution(ACP)approach[2]will play a crucial role.Through parallel interactions between virtual and real systems,this approach optimizes the functionality of medical devices and instruments,enhancing the accuracy of diagnoses and treatments while enabling the autonomous evolution and adaptive adjustment of medical systems.
文摘Smart edge computing(SEC)is a novel paradigm for computing that could transfer cloud-based applications to the edge network,supporting computation-intensive services like face detection and natural language processing.A core feature of mobile edge computing,SEC improves user experience and device performance by offloading local activities to edge processors.In this framework,blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers,protecting against potential security threats.Additionally,Deep Learning algorithms are employed to analyze resource availability and optimize computation offloading decisions dynamically.IoT applications that require significant resources can benefit from SEC,which has better coverage.Although access is constantly changing and network devices have heterogeneous resources,it is not easy to create consistent,dependable,and instantaneous communication between edge devices and their processors,specifically in 5G Heterogeneous Network(HN)situations.Thus,an Intelligent Management of Resources for Smart Edge Computing(IMRSEC)framework,which combines blockchain,edge computing,and Artificial Intelligence(AI)into 5G HNs,has been proposed in this paper.As a result,a unique dual schedule deep reinforcement learning(DS-DRL)technique has been developed,consisting of a rapid schedule learning process and a slow schedule learning process.The primary objective is to minimize overall unloading latency and system resource usage by optimizing computation offloading,resource allocation,and application caching.Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%,validating the method’s effectiveness within the IMRSEC framework.
Funding: Supported by the Deanship of Research and Graduate Studies at King Khalid University through the Large Research Project under grant number RGP2/603/46.
Abstract: The Internet of Things (IoT) and allied applications have made real-time responsiveness for massive numbers of devices over the Internet essential, and cloud-edge/fog ensembles handle such applications' computations. For Beyond Fifth Generation (B5G) communication paradigms, Edge Servers (ESs) must be placed within Information and Communication Technology infrastructures to meet Quality of Service requirements such as response time and resource utilisation. Because of the large number of Base Stations (BSs) and ESs, and the wide variation in possible ES placements across the IoT's geographical expanse when optimising multiple objectives, the Edge Server Placement Problem (ESPP) is NP-hard, so stochastic evolutionary metaheuristics are a natural fit. This work addresses the ESPP using a Particle Swarm Optimization that initialises particles as BS positions within the geography to maintain the workload while scanning through all feasible sets of BSs as an encoded sequence. The Workload-Threshold Aware Sequence Encoding (WTASE) scheme for the ESPP yields both the number of ESs to be deployed, as in existing methodologies, and the exact locations for their placement, without the overhead of maintaining a prohibitively large distance matrix. Simulation tests on open-source datasets show that the proposed technique improves ES utilisation rate, workload balance, and average energy consumption by 36%, 17%, and 32%, respectively, compared to prior works.
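The WTASE encoding itself is not detailed in the abstract; the following is a minimal sketch of one common way to encode a server-placement choice inside a particle swarm (continuous keys per base station, top-K keys select the host BSs, fitness rewards workload balance). The key-based encoding, workload data, and fitness function are assumptions for illustration only.

```python
# Minimal sketch (assumed formulation, not the paper's WTASE-PSO) of encoding
# edge-server placement as a particle swarm: each particle holds one continuous
# key per base station, and the K highest keys select the BSs that host an ES.
import numpy as np

rng = np.random.default_rng(1)
N_BS, K, N_PARTICLES, ITERS = 20, 5, 12, 60
workload = rng.uniform(10, 100, size=N_BS)        # hypothetical per-BS workload

def fitness(keys):
    chosen = np.argsort(keys)[-K:]                # BSs selected to host ESs
    load = workload[chosen]
    return load.std()                             # lower spread = better balance

pos = rng.uniform(size=(N_PARTICLES, N_BS))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(ITERS):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("selected BSs:", np.argsort(gbest)[-K:], "load std:", fitness(gbest))
```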
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under grant No. (DGSSR-2024-02-02090).
Abstract: The Internet of Things (IoT) and edge computing have substantially contributed to the development and growth of smart cities. They handle time-constrained services and mobile devices that capture the observed environment for surveillance applications. These systems are composed of wireless cameras, digital devices, and tiny sensors that facilitate the operation of crucial healthcare services. Recently, many interactive applications have been proposed, including intelligent systems that handle data processing and enable dynamic communication for crucial IoT services. Nonetheless, most solutions do not optimize relaying methods and impose excessive overhead for maintaining device connectivity. Moreover, data integrity and trust are vital considerations for next-generation networks. This research proposes a load-balanced trusted surveillance routing model with collaborative decisions at the network edge to improve energy management and resource balancing. It leverages graph-based optimization to enable reliable analysis of decision-making parameters, and mobile devices integrate with the model to sustain trusted routes with lightweight privacy preservation and authentication. Evaluated in a simulation-based environment, the proposed model shows a clear improvement in packet loss ratio, energy consumption, anomaly detection, and blockchain overhead over related solutions.
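As one assumed illustration of graph-based, trust-aware route selection (not the paper's model), the sketch below blends each link's energy cost with a distrust penalty and runs a shortest-path search over the composite cost; the topology, weights, and trust values are hypothetical.

```python
# Minimal sketch (an assumed illustration, not the paper's routing model) of
# graph-based route selection where each link cost blends energy use and a
# trust score, so low-trust or energy-hungry relays are penalised.
import heapq

# Hypothetical topology: (neighbour, energy_cost, trust in [0, 1]) per node.
topology = {
    "cam1":   [("relay1", 2.0, 0.9), ("relay2", 1.0, 0.4)],
    "relay1": [("edge", 1.5, 0.95)],
    "relay2": [("edge", 1.0, 0.5)],
    "edge":   [],
}
ALPHA = 1.0   # weight on energy (assumed)
BETA = 3.0    # weight on distrust (assumed)

def link_cost(energy, trust):
    return ALPHA * energy + BETA * (1.0 - trust)

def best_route(src, dst):
    # Dijkstra over the composite cost.
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, energy, trust in topology[node]:
            heapq.heappush(pq, (cost + link_cost(energy, trust), nxt, path + [nxt]))
    return float("inf"), []

print(best_route("cam1", "edge"))   # prefers the trustworthy relay despite higher energy
```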
Funding: Supported in part by the National Natural Science Foundation of China (No. 62071393) and the Fundamental Research Funds for the Central Universities (2682023ZTPY058).
Abstract: With the miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Moreover, the low-frequency (LF) band cannot further improve network performance because of its limited spectrum resources, whereas the high-frequency (HF) band, adopted as one of the operating bands in 5G, has plentiful spectrum resources. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network in which tasks are processed locally, at a macro-cell base station, or at a road side unit through the LF or HF band, enabling stable and high-speed task offloading. A utility function combining latency and energy consumption is minimized by jointly optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve the problem. Numerical results evaluate the performance and superiority of the scheme, demonstrating that it achieves efficient edge computing in vehicular networks.
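The abstract does not state the exact form of the utility; a common weighted formulation, given here only as an assumed illustration with hypothetical symbols, combines the two cost terms per task as

```latex
U = \sum_{k} \left( \omega_{t}\, T_{k} + \omega_{e}\, E_{k} \right),
\qquad \omega_{t} + \omega_{e} = 1,
```

where \(T_{k}\) and \(E_{k}\) denote the latency and energy consumption of task \(k\), and the weights \(\omega_{t}, \omega_{e}\) trade off the two objectives across the computing, spectrum, power, and scheduling variables.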
Funding: Supported by the National Natural Science Foundation of China (Nos. 62201419, 62372357), the Natural Science Foundation of Chongqing (CSTB2023NSCQ-LMX0032), and the ISN State Key Laboratory.
Abstract: Existing wireless networks are flooded with video transmissions, and demand for high-speed, low-latency video services continues to surge, bringing congestion and the need for more resources and dedicated caching schemes. Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have recently emerged as a promising solution to these problems, but designing an effective edge caching scheme under limited resources is critical to their success. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover relations between videos, providing valuable insights into user preferences. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), offering a rich reference for personalized caching while improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence of the KG-DDQN and show that it outperforms baselines in cache hit rate and service delay.
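The sketch below illustrates, under assumed data and stand-in networks rather than the authors' KG-DDQN, the two ideas named in the abstract: a knowledge graph that restricts caching actions to videos related to the current request, and a dueling head that combines a state value with per-action advantages.

```python
# Minimal sketch (illustrative assumptions, not the paper's KG-DDQN) of two ideas
# in the scheme: (1) a knowledge graph restricts the action set to videos related
# to the current request, and (2) a dueling head combines a state value V with
# per-action advantages A: Q = V + A - mean(A).
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical KG as an adjacency list over video IDs.
kg = {"v1": ["v2", "v5"], "v2": ["v1", "v3"], "v3": ["v2"], "v5": ["v1"]}

def candidate_actions(requested_video):
    # Only videos related to the request are considered as caching candidates.
    return kg.get(requested_video, [])

def dueling_q(state, actions):
    # Stand-ins for the value and advantage streams of a trained network.
    value = float(state.mean())
    adv = rng.normal(size=len(actions))
    return value + adv - adv.mean()

state = rng.normal(size=16)            # assumed state: cache occupancy, popularity, ...
acts = candidate_actions("v1")
q = dueling_q(state, acts)
print("cache candidate:", acts[int(np.argmax(q))])
```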
Funding: Funded by the State Key Laboratory of Massive Personalized Customization System and Technology, grant No. H&C-MPC-2023-04-01.
Abstract: This paper investigates mobility-aware online optimization for digital twin (DT)-assisted task execution in edge computing environments. In such systems, DTs hosted on edge servers (ESs) require proactive migration to maintain proximity to their mobile physical twin (PT) counterparts. To minimize task response latency under a stringent energy consumption constraint, we jointly optimize three key components: the status data uploading frequency from the PT, the DT migration decisions, and the allocation of computational and communication resources. To address the asynchronous nature of these decisions, we propose a novel two-timescale mobility-aware online optimization (TMO) framework. The TMO scheme leverages an extended two-timescale Lyapunov optimization framework to decompose the long-term problem into sequential subproblems. At the larger timescale, a multi-armed bandit (MAB) algorithm dynamically learns the optimal status data uploading frequency. Within each shorter timescale, a gated recurrent unit (GRU)-based predictor first forecasts the PT's trajectory; based on this prediction, an alternate minimization (AM) algorithm then solves for the DT migration and resource allocation variables. Theoretical analysis confirms that the proposed TMO scheme is asymptotically optimal, and simulation results demonstrate significant performance gains over existing benchmark methods.
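As a hedged illustration of the larger-timescale step only, the sketch below uses a standard UCB1 bandit to pick among a few candidate uploading frequencies, with a toy reward standing in for the observed latency-plus-energy cost; the candidate frequencies, reward model, and horizon are assumptions, not the paper's TMO algorithm.

```python
# Minimal sketch (assumed setup, not the paper's TMO algorithm) of the
# larger-timescale step: a UCB1 multi-armed bandit picks one of a few candidate
# status-uploading frequencies, with reward standing in for the negative
# latency-plus-energy cost observed over the frame.
import math
import random

arms = [0.5, 1.0, 2.0, 4.0]         # hypothetical uploading frequencies (Hz)
counts = [0] * len(arms)
values = [0.0] * len(arms)          # running mean reward per arm

def observed_reward(freq):
    # Toy environment: moderate frequencies balance staleness and energy.
    return -abs(freq - 1.7) + random.gauss(0, 0.1)

for t in range(1, 301):
    if 0 in counts:                 # play each arm once first
        a = counts.index(0)
    else:                           # UCB1 index
        a = max(range(len(arms)),
                key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))
    r = observed_reward(arms[a])
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]

print("learned uploading frequency:",
      arms[max(range(len(arms)), key=lambda i: values[i])])
```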