Journal Articles
5,305 articles found
Lightweight YOLOv5 with ShuffleNetV2 for Rice Disease Detection in Edge Computing
1
Authors: Qingtao Meng, Sang-Hyun Lee. Computers, Materials & Continua, 2026, Issue 1, pp. 1395-1409 (15 pages)
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO)v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected from three major rice diseases (Rice Blast, Bacterial Blight, and Brown Spot) frequently found in actual rice cultivation fields. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, thereby resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP 0.5 of 89.6%, mAP 0.5-0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB, but showed lower performance with an mAP 0.5 of 70.3%, mAP 0.5-0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Keywords: lightweight object detection; YOLOv5-V2; ShuffleNet V2; edge computing; rice disease detection
Lightweight Multi-Agent Edge Framework for Cybersecurity and Resource Optimization in Mobile Sensor Networks
2
Authors: Fatima Al-Quayed. Computers, Materials & Continua, 2026, Issue 1, pp. 919-934 (16 pages)
Due to the growth of smart cities, many real-time systems have been developed to support smart cities using Internet of Things (IoT) and emerging technologies. They are formulated to collect data for environment monitoring and automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise the network's security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a Multi-Agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it enhances the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. In simulations, the proposed framework shows significant gains over state-of-the-art approaches: energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and denial-of-service attacks by 42%.
Keywords: artificial intelligence; cybersecurity; edge computing; Internet of Things; threat detection
A Knowledge-Distilled CharacterBERT-BiLSTM-ATT Framework for Lightweight DGA Detection in IoT Devices
3
Authors: Chengqi Liu, Yongtao Li, Weiping Zou, Deyu Lin. Computers, Materials & Continua, 2026, Issue 4, pp. 2049-2068 (20 pages)
With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use a Domain Generation Algorithm (DGA) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGA, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and integrates the ATT for key feature weighting, forming multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to the simplified DistilBERT student model. Experimental results show the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while compressing parameters to approximately 38.4% of the teacher model's scale, greatly reducing computational overhead for IoT deployment.
Keywords: IoT security; DGA detection; knowledge distillation; lightweight model; edge computing
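The teacher-student transfer this abstract describes rests on the standard distillation loss: a temperature-softened KL-divergence term against the teacher's outputs, blended with ordinary cross-entropy on the true labels. A minimal sketch (illustrative only; the paper's "improved" distillation variant is not specified here, and the temperature and alpha values are assumptions):

```python
import math

def softmax(logits, T=1.0):
    zs = [z / T for z in logits]
    m = max(zs)                              # shift for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.5):
    # Soft term: KL(teacher || student) on temperature-softened outputs,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = sum(pt * (math.log(pt) - math.log(ps)) for pt, ps in zip(p_t, p_s)) * T * T
    # Hard term: ordinary cross-entropy against the ground-truth label.
    hard = -math.log(softmax(student_logits)[true_label])
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss([2.0, 0.5, -1.0], [2.2, 0.3, -1.2], true_label=0)
```

When student and teacher agree exactly, the soft term vanishes and only the hard cross-entropy remains.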
Intelligent Resource Allocation for Multiaccess Edge Computing in 5G Ultra-Dense Slicing Network Using Federated Multiagent DDPG Algorithm
4
Authors: Gong Yu, Gong Pengwei, Jiang He, Xie Wen, Wang Chenxi, Xu Peijun. China Communications, 2026, Issue 1, pp. 273-289 (17 pages)
Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great amount of computing-intensive business and require corresponding computation and communication resources. Multiaccess edge computing (MEC) can offload computing-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing and MEC technology, and introduces a hybrid computing offloading strategy to satisfy the various quality-of-service (QoS) requirements of edge devices. To dynamically allocate limited resources, the above problem is formulated as multiagent distributed deep reinforcement learning (DRL), which achieves a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is added to train DRL agents in a distributed manner, where each agent is dedicated to exploring actions composed of offloading decisions and resource allocations, so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
Keywords: federated learning; multiaccess edge computing; multiagent deep reinforcement learning; resource allocation; ultra-dense slicing network
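The federated training of DRL agents mentioned above typically relies on federated averaging: each agent trains locally and a coordinator aggregates parameters weighted by local data volume. A minimal sketch of the aggregation step (not the authors' code; flat parameter lists are an assumed simplification of real network weights):

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate per-client parameter vectors, weighted by each
    client's number of local training samples (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
        for d in range(dim)
    ]

# Two agents with parameter vectors [1, 2] and [3, 4]; the second
# trained on three times as much data, so it dominates the average.
avg = fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```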
A Multi-Objective Joint Task Offloading Scheme for Vehicular Edge Computing
5
Authors: Yiwei Zhang, Xin Cui, Qinghui Zhao. Computers, Materials & Continua, 2025, Issue 8, pp. 2355-2373 (19 pages)
The rapid advance of Connected-Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption. We propose a three-stage hybrid offloading scheme called Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address this problem. A novel dynamic clustering algorithm is designed based on vehicle mobility and task offloading efficiency requirements, where each UAV independently serves as the cluster head for a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
Keywords: vehicular edge computing; cooperative game theory; multi-objective optimization; computation offloading
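The Pareto-optimal solution set mentioned in this abstract is defined by non-domination over the two objectives (delay, energy). A small helper showing the dominance test and front extraction (illustrative; the objective values below are invented):

```python
def dominates(a, b):
    """a Pareto-dominates b if it is no worse in every objective
    (here: delay, energy) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (delay, energy) pairs."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (5, 8) is dominated by (5, 4) and (4, 7); everything else survives.
front = pareto_front([(3, 9), (5, 4), (4, 7), (6, 3), (5, 8)])
```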
An Impact-Aware and Taxonomy-Driven Explainable Machine Learning Framework with Edge Computing for Security in Industrial IoT–Cyber Physical Systems
6
Authors: Tamara Zhukabayeva, Zulfiqar Ahmad, Nurbolat Tasbolatuly, Makpal Zhartybayeva, Yerik Mardenov, Nurdaulet Karabayev, Dilaram Baumuratova. Computer Modeling in Engineering & Sciences, 2025, Issue 11, pp. 2573-2599 (27 pages)
The Industrial Internet of Things (IIoT), combined with Cyber-Physical Systems (CPS), is transforming industrial automation but also poses great cybersecurity threats because of the complexity and connectivity of the systems. Prior works lack explainability, face challenges with imbalanced attack classes, and give limited consideration to practical edge-cloud deployment strategies. In this study, we propose an Impact-Aware Taxonomy-Driven Machine Learning Framework with Edge Deployment and SHapley Additive exPlanations (SHAP)-based Explainable AI (XAI) for attack detection and classification in IIoT-CPS settings. It includes not only unsupervised clustering (K-Means and DBSCAN) to extract latent traffic patterns but also taxonomy-based supervised classification that groups 33 different kinds of attacks into seven high-level categories: Flood Attacks, Botnet/Mirai, Reconnaissance, Spoofing/Man-In-The-Middle (MITM), Injection Attacks, Backdoors/Exploits, and Benign. Three machine learning algorithms, Random Forest, XGBoost, and Multi-Layer Perceptron (MLP), were trained on a real-world dataset of more than 1 million network traffic records, with overall accuracies of 99.4% (RF), 99.5% (XGBoost), and 99.1% (MLP). Rare types of attacks, such as injection attacks and backdoors, were examined even under extreme class imbalance. SHAP-based XAI was applied to every model to build transparency and trust and to identify the important features driving classification decisions, such as inter-arrival time, TCP flags, and protocol type. A workable edge-computing implementation strategy is proposed, whereby lightweight computing is performed at the edge devices and heavy, computation-intensive analytics is performed in the cloud. The framework is highly accurate, interpretable, and applicable in real time, hence a robust and scalable solution for securing IIoT-CPS infrastructure against dynamic cyber-attacks.
Keywords: Industrial IoT; CPS; edge computing; machine learning; XAI; attack taxonomy
Joint offloading decision and resource allocation in vehicular edge computing networks
7
Authors: Shumo Wang, Xiaoqin Song, Han Xu, Tiecheng Song, Guowei Zhang, Yang Yang. Digital Communications and Networks, 2025, Issue 1, pp. 71-82 (12 pages)
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs) and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), one of the deep reinforcement learning algorithms, with centralized training and distributed execution, is proposed to optimize real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm significantly improves system delay over baseline algorithms.
Keywords: computation offloading; resource allocation; vehicular edge computing; potential game; multi-agent deep deterministic policy gradient
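The first-stage potential game can be pictured as best-response dynamics: each task vehicle repeatedly re-selects the server that minimizes its own delay given the others' choices, and in a congestion-style potential game this converges to a Nash equilibrium in finitely many rounds. A toy sketch (not the paper's cost model; the task sizes and server speeds are invented):

```python
def best_response_offloading(task_sizes, server_speeds, max_rounds=100):
    """Each vehicle greedily re-selects the server minimizing its own
    delay given current congestion; stops at a Nash equilibrium."""
    choice = [0] * len(task_sizes)               # start: everyone on server 0
    for _ in range(max_rounds):
        changed = False
        for i, size in enumerate(task_sizes):
            load = [0.0] * len(server_speeds)    # load contributed by the others
            for j, c in enumerate(choice):
                if j != i:
                    load[c] += task_sizes[j]
            # delay on server s = (others' load + my task) / server speed
            costs = [(load[s] + size) / server_speeds[s]
                     for s in range(len(server_speeds))]
            best = min(range(len(server_speeds)), key=costs.__getitem__)
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:                          # no one wants to deviate
            return choice
    return choice

eq = best_response_offloading([4.0, 2.0, 2.0], [2.0, 1.0])
```

With these numbers the big task stays on the fast server and one small task migrates to the slow one, balancing the delays.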
Edge computing aileron mechatronics using antiphase hysteresis Schmitt trigger for fast flutter suppression
8
Authors: Tangwen Yin, Dan Huang, Xiaochun Zhang. Control Theory and Technology, 2025, Issue 1, pp. 153-160 (8 pages)
An aileron is a crucial control surface for rolling. Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft's stability, maneuverability, safety, and lifespan. This paper presents a robust solution in the form of fast flutter suppression digital control logic for edge computing aileron mechatronics (ECAM). We have effectively eliminated passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger. Our findings demonstrate that self-tuning nonlinear parameters can optimize stability, robustness, and accuracy. At the same time, the antiphase hysteresis Schmitt trigger effectively rejects flutter without the need for collaborative navigation and guidance. Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring expected stability and maneuverability. In conclusion, this nonlinear aileron mechatronics with a Schmitt positive-feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
Keywords: aileron; edge computing; flutter suppression; mechatronics; nonlinear hysteresis control; positive feedback
GENOME:Genetic Encoding for Novel Optimization of Malware Detection and Classification in Edge Computing
9
Authors: Sang-Hoon Choi, Ki-Woong Park. Computers, Materials & Continua, 2025, Issue 3, pp. 4021-4039 (19 pages)
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance using models such as Random Forest and Logistic Regression, while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. This study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations, offering comprehensive protection for edge computing environments through a security solution that is both efficient and scalable.
Keywords: edge computing; IoT security; malware; machine learning; malware classification; malware detection
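The DNA encoding idea maps binary malware artifacts onto the four-letter nucleotide alphabet. The paper's exact standard and compressed encodings are not given here; a plausible 2-bits-per-nucleotide mapping, which is lossless and reversible, looks like this:

```python
NUCS = "ACGT"   # 2 bits per nucleotide: 00->A, 01->C, 10->G, 11->T

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides, most-significant pair first."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(NUCS[(b >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Invert the encoding: every four nucleotides rebuild one byte."""
    vals = [NUCS.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

dna = bytes_to_dna(b"MZ")   # the two-byte header of a PE executable
```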
Multi-Objective Optimization for Non-Panoramic VR in Mobile Edge Computing
10
Authors: Ren Ruimin, Li Fukang, Dang Yaping, Yang Shouyi. China Communications, 2025, Issue 11, pp. 273-290 (18 pages)
Non-panoramic virtual reality (VR) provides users with immersive experiences involving strong interactivity, thus attracting growing research and development attention. However, the demand for high bandwidth and low latency in VR services presents great challenges to existing networks. Inspired by mobile edge computing (MEC), VR users can offload rendering tasks to other devices. The main challenge of task offloading is to minimize latency and energy consumption. Yet, in non-panoramic VR scenarios, it is essential to consider the Quality of Perceptual Experience (QOPE) for users, as well as the diverse requirements of users in real-world scenarios. Therefore, this paper proposes a QOPE model to measure the visual quality experienced by non-panoramic VR users, and models the MEC-based non-panoramic VR task offloading problem as a constrained multi-objective optimization problem (CMOP) that minimizes latency and energy consumption while providing satisfactory QOPE. We propose an evolutionary algorithm (EA), GNSGA-II, to solve the CMOP. Simulation results show that the algorithm can effectively find various trade-off solutions among the objectives, satisfying the requirements of different users.
Keywords: mobile edge computing; multi-objective optimization; non-panoramic VR; task offloading
Blockchain-Enabled Edge Computing Techniques for Advanced Video Surveillance in Autonomous Vehicles
11
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa. Computers, Materials & Continua, 2025, Issue 4, pp. 1239-1255 (17 pages)
Blockchain-based audiovisual transmission systems were built to create a distributed and flexible smart transport system (STS). Such a system lets customers, video creators, and service providers connect directly with each other. Blockchain-based STS devices need substantial computing power to transcode video feeds of varying quality and format into the different versions and structures that different users require. On the other hand, existing blockchains cannot support live streaming because they take too long to process and lack sufficient computing power. The large volumes of video data being transmitted and analyzed place excessive stress on vehicular networks. This paper proposes a video surveillance method to improve the data performance of the blockchain system and lower latency across the multiple access edge computing (MEC) system. A framework integrating MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS) is proposed. The joint optimization problem is formulated as a Markov Choice Progression (MCP) and solved with the actor-critical asynchronous advantage (ACAA) method and deep reinforcement training. Simulation results show that the proposed method converges quickly and improves the joint performance of MEC and blockchain for video surveillance in self-driving cars compared to other methods.
Keywords: blockchain; multiple access edge computing; video surveillance; autonomous vehicles
Privacy Preserving Federated Anomaly Detection in IoT Edge Computing Using Bayesian Game Reinforcement Learning
12
Authors: Fatima Asiri, Wajdan Al Malwi, Fahad Masood, Mohammed S. Alshehri, Tamara Zhukabayeva, Syed Aziz Shah, Jawad Ahmad. Computers, Materials & Continua, 2025, Issue 8, pp. 3943-3960 (18 pages)
Edge computing (EC) combined with the Internet of Things (IoT) provides a scalable and efficient solution for smart homes. The rapid proliferation of IoT devices poses real-time data processing and security challenges, and EC has become a transformative paradigm for addressing them, particularly in intrusion detection and anomaly mitigation. The widespread connectivity of IoT edge networks has exposed them to various security threats, necessitating robust strategies to detect malicious activities. This research presents a privacy-preserving federated anomaly detection framework combining Bayesian game theory (BGT) and double deep Q-learning (DDQL). The proposed framework integrates BGT to model attacker-defender interactions for dynamic adaptation to threat levels and resource availability; it also models a strategic layout between attackers and defenders that accounts for uncertainty. DDQL is incorporated to optimize decision-making and to learn optimal defense policies at the edge. Federated learning (FL) enables decentralized anomaly detection without sharing sensitive data between devices. Data were collected from various sensors in a real-time EC-IoT network to identify irregularities caused by different attacks. The results reveal that the proposed model achieves detection accuracy of up to 98% while maintaining low resource consumption. This study demonstrates the synergy between game theory and FL in strengthening anomaly detection in EC-IoT networks.
Keywords: IoT; edge computing; smart homes; anomaly detection; Bayesian game theory; reinforcement learning
Task offloading delay minimization in vehicular edge computing based on vehicle trajectory prediction
13
Authors: Feng Zeng, Zheng Zhang, Jinsong Wu. Digital Communications and Networks, 2025, Issue 2, pp. 537-546 (10 pages)
In task offloading, the movement of vehicles causes switching of the connected RSUs and servers, which may lead to task offloading failure or high service delay. In this paper, we analyze the impact of vehicle movements on task offloading and reveal that the data preparation time for task execution can be minimized via forward-looking scheduling. A Bi-LSTM-based model is then proposed to predict the trajectories of vehicles. The service area is divided into several equal-sized grids; if the actual position of the vehicle and the position predicted by the model fall in the same grid, the prediction is considered correct, thereby reducing the difficulty of vehicle trajectory prediction. Moreover, we propose a scheduling strategy for delay optimization based on the predicted vehicle trajectory. Considering the inevitable prediction error, we take some edge servers around the predicted area as candidate execution servers and back up the data required for task execution to these candidate servers, thereby reducing the impact of prediction deviations on task offloading and converting a modest increase in resource overhead into delay reduction. Simulation results show that, compared with other classical schemes, the proposed strategy achieves lower average task offloading delays.
Keywords: vehicular edge computing; task offloading; vehicle trajectory prediction; delay minimization; Bi-LSTM model
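The grid-based correctness criterion described above is easy to state in code: a prediction counts as correct when it falls in the same equal-sized grid cell as the actual position. A sketch (the grid size and coordinates below are arbitrary, not from the paper):

```python
def grid_cell(x, y, cell_size=100.0):
    """Map a coordinate to the (row, col) of its grid cell."""
    return (int(y // cell_size), int(x // cell_size))

def grid_accuracy(predicted, actual, cell_size=100.0):
    """Fraction of predictions landing in the same cell as ground truth."""
    hits = sum(
        grid_cell(*p, cell_size) == grid_cell(*a, cell_size)
        for p, a in zip(predicted, actual)
    )
    return hits / len(actual)

# First two predictions share a cell with the truth; the third misses.
acc = grid_accuracy([(40, 30), (130, 90), (260, 10)],
                    [(95, 60), (120, 70), (150, 20)],
                    cell_size=100.0)
```

Coarsening the grid (larger `cell_size`) raises this accuracy at the cost of a looser candidate-server region, which is exactly the trade-off the backup-server strategy absorbs.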
Efficient Task Allocation for Energy and Execution Time Trade-Off in Edge Computing Using Multi-Objective IPSO
14
Authors: Jafar Aminu, Rohaya Latip, Zurina Mohd Hanafi, Shafinah Kamarudin, Danlami Gabi. Computers, Materials & Continua, 2025, Issue 8, pp. 2989-3011 (23 pages)
As mobile edge computing continues to develop, the demand for resource-intensive applications is steadily increasing, placing a significant strain on edge nodes. These nodes are typically subject to various constraints, such as limited processing capability, scarce energy sources, and erratic availability. Correspondingly, these problems require an effective task allocation algorithm to optimize resources while sustaining high system performance and dependability in dynamic environments. This paper proposes an improved Particle Swarm Optimization technique, known as IPSO, for multi-objective optimization in edge computing. The IPSO algorithm seeks a trade-off between two important objectives: minimizing energy consumption and reducing task execution time. Through global-best position mutation and dynamic adjustment of the inertia weight, the proposed algorithm can effectively distribute tasks among edge nodes, reducing both task execution time and energy consumption. In comparative assessments against benchmark methods such as Energy-aware Double-fitness Particle Swarm Optimization (EADPSO) and ICBA, IPSO provides better results. For the maximum task size, IPSO reduces execution time by 17.1% and energy consumption by 31.58% compared with the benchmarks. These results support the conclusion that IPSO is an efficient and scalable technique for task allocation in edge environments, delivering peak efficiency while handling scarce resources and variable workloads.
Keywords: edge computing; energy consumption; execution time; particle swarm optimization; task allocation
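The "dynamic adjustment of the inertia weight" mentioned in this abstract usually means a weight that decays linearly from w_max to w_min over the iterations, shifting the swarm from exploration to exploitation. A generic sketch on a toy time-energy trade-off (not the authors' IPSO: the global-best mutation step is omitted, and the cost function and all parameters are assumptions):

```python
import random

def pso_sketch(cost, dim, iters=200, swarm=20, w_max=0.9, w_min=0.4, seed=1):
    """Basic PSO with a linearly decreasing inertia weight,
    minimizing `cost` over the unit hypercube [0, 1]^dim."""
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(swarm), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # inertia decays over time
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy trade-off: execution time falls and energy rises with allocation x;
# the weighted sum is minimized at x = 0.6.
cost = lambda x: 0.6 * (1 - x[0]) ** 2 + 0.4 * x[0] ** 2
best, val = pso_sketch(cost, dim=1)
```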
Integrating AI, Blockchain, and Edge Computing for Zero-Trust IoT Security:A Comprehensive Review of Advanced Cybersecurity Framework
15
Authors: Inam Ullah Khan, Fida Muhammad Khan, Zeeshan Ali Haider, Fahad Alturise. Computers, Materials & Continua, 2025, Issue 12, pp. 4307-4344 (38 pages)
The rapid expansion of the Internet of Things (IoT) has introduced significant security challenges due to the scale, complexity, and heterogeneity of interconnected devices. Traditional centralized security models are ill-suited to these threats, especially in decentralized applications where IoT devices may operate with minimal resources. Emerging technologies, including Artificial Intelligence (AI), blockchain, edge computing, and Zero-Trust Architecture (ZTA), offer potential solutions through real-time threat detection, data integrity, and system resilience. AI offers sophisticated anomaly detection and predictive analytics, while blockchain delivers decentralized and tamper-proof assurance over device communication and information exchange. Edge computing enables low-latency processing by distributing computational workloads closer to the devices. ZTA enhances security by continuously verifying each device and user on the network, adhering to the "never trust, always verify" principle. This paper reviews these technologies, examining how they are used to secure IoT ecosystems, the challenges of integrating them, and the possibility of developing a multi-layered, adaptive security framework. Major concerns, such as scalability, resource limitations, and interoperability, are identified, and future directions for optimizing the application of AI, blockchain, and edge computing in zero-trust IoT systems are discussed.
Keywords: Internet of Things (IoT); artificial intelligence (AI); blockchain; edge computing; zero-trust architecture (ZTA); IoT security; real-time threat detection
DeAOff:Dependence-Aware Offloading of Decoder-Based Generative Models for Edge Computing
16
Authors: Ning Jiahong, Yang Tingting, Zheng Ce, Wang Xinghan, Feng Ping, Zhang Xiufeng. China Communications, 2025, Issue 7, pp. 14-29 (16 pages)
This paper presents the dependency-aware offloading framework (DeAOff), an algorithm designed to optimize the deployment of Gen-AI decoder models in mobile edge computing (MEC) environments. Such decoder models pose significant challenges due to their inter-layer dependencies and high computational demands, especially under edge resource constraints. To address these challenges, we propose a two-phase optimization algorithm that first handles dependency-aware task allocation and subsequently optimizes energy consumption. By modeling the inference process using directed acyclic graphs (DAGs) and applying constraint relaxation techniques, our approach effectively reduces execution latency and energy usage. Experimental results demonstrate that our method achieves a reduction of up to 20% in task completion time and approximately 30% savings in energy consumption compared to traditional methods. These outcomes underscore the solution's robustness in managing complex sequential dependencies and dynamic MEC conditions, enhancing quality of service. Our work thus presents a practical and efficient resource optimization strategy for deploying such models in resource-constrained MEC scenarios.
关键词 dependency-aware offloading(DeAOff) directed acyclic graph(DAG) generative AI(Gen-AI) mobile edge computing(Mec)
在线阅读 下载PDF
基于深度强化学习的高速铁路监控视频MEC智能卸载方法
17
作者 陈永 刘骅驹 张冰旺 《铁道学报》 北大核心 2026年第2期96-104,共9页
针对高速铁路沿线视频任务卸载到MEC边缘计算服务器过程中,存在时延和能耗开销大的问题,提出一种高速铁路监控视频MEC智能卸载方法。首先,将高速铁路视频监控处理任务的时延和能耗作为优化目标,构建系统累计时延和能耗最小化卸载模型。... 针对高速铁路沿线视频任务卸载到MEC边缘计算服务器过程中,存在时延和能耗开销大的问题,提出一种高速铁路监控视频MEC智能卸载方法。首先,将高速铁路视频监控处理任务的时延和能耗作为优化目标,构建系统累计时延和能耗最小化卸载模型。然后,将该任务卸载模型转化为马尔科夫决策过程模型,采用动作空间搜索因子,实现对动作决策的自适应搜索。最后,设计一种基于深度强化学习的MEC卸载方法得到最优卸载策略,降低了高速铁路视频处理任务的时延和能耗。仿真结果表明,所提算法相比Q学习算法时延降低了21.59%,能耗降低了9.93%,且QoE指标提高了9.65%,具有更低的时延和能耗开销,能够满足铁路视频传输控制的需求。 展开更多
关键词 移动边缘计算 高速铁路监控视频 视频处理任务 任务卸载 深度强化学习
在线阅读 下载PDF
Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network 被引量:1
18
作者 Zhiguo Liu Yuqing Gui +1 位作者 Lin Wang Yingru Jiang 《Computers, Materials & Continua》 SCIE EI 2025年第1期863-879,共17页
Satellite edge computing has garnered significant attention from researchers;however,processing a large volume of tasks within multi-node satellite networks still poses considerable challenges.The sharp increase in us... Satellite edge computing has garnered significant attention from researchers;however,processing a large volume of tasks within multi-node satellite networks still poses considerable challenges.The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers,making it necessary to implement effective task offloading scheduling to enhance user experience.In this paper,we propose a priority-based task scheduling strategy based on a Software-Defined Network(SDN)framework for satellite-terrestrial integrated networks,which clarifies the execution order of tasks based on their priority.Subsequently,we apply a Dueling-Double Deep Q-Network(DDQN)algorithm enhanced with prioritized experience replay to derive a computation offloading strategy,improving the experience replay mechanism within the Dueling-DDQN framework.Next,we utilize the Deep Deterministic Policy Gradient(DDPG)algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks.Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches,effectively reducing task processing latency and thus improving user experience and system efficiency. 展开更多
关键词 Satellite network edge computing task scheduling computing offloading
在线阅读 下载PDF
MEC网络中双延迟深度确定性策略梯度的能效优化算法
19
作者 吴名星 《空天预警研究学报》 2026年第1期52-56,共5页
为解决动态移动边缘计算(MEC)网络中任务卸载与资源分配的能效优化问题,针对传统算法适应性差、强化学习算法稳定性不足的缺陷,提出基于双延迟深度确定性策略梯度(twin delayed DDPG, TD3)的能效优化(TD3-EE)算法.首先,考虑任务异构性... 为解决动态移动边缘计算(MEC)网络中任务卸载与资源分配的能效优化问题,针对传统算法适应性差、强化学习算法稳定性不足的缺陷,提出基于双延迟深度确定性策略梯度(twin delayed DDPG, TD3)的能效优化(TD3-EE)算法.首先,考虑任务异构性与动态资源状态构建了系统模型,建立时延约束下的能效最大化目标函数;然后,将问题转化为马尔可夫决策过程(MDP)模型,并利用TD3算法双Critic网络与延迟更新机制提升决策稳定性.仿真结果表明,该算法在任务完成率、能耗控制及收敛稳定性上优于DDPG-EE、TPBA算法. 展开更多
关键词 移动边缘计算 双延迟深度确定性策略梯度 任务卸载 资源分配
在线阅读 下载PDF
Mobile Edge Computing Towards 5G: Vision, Recent Progress, and Open Challenges 被引量:33
20
作者 Yifan Yu 《China Communications》 SCIE CSCD 2016年第S2期89-99,共11页
Mobile Edge Computing(MEC) is an emerging technology in 5G era which enables the provision of the cloud and IT services within the close proximity of mobile subscribers.It allows the availability of the cloud servers ... Mobile Edge Computing(MEC) is an emerging technology in 5G era which enables the provision of the cloud and IT services within the close proximity of mobile subscribers.It allows the availability of the cloud servers inside or adjacent to the base station.The endto-end latency perceived by the mobile user is therefore reduced with the MEC platform.The context-aware services are able to be served by the application developers by leveraging the real time radio access network information from MEC.The MEC additionally enables the compute intensive applications execution in the resource constraint devices with the collaborative computing involving the cloud servers.This paper presents the architectural description of the MEC platform as well as the key functionalities enabling the above features.The relevant state-of-the-art research efforts are then surveyed.The paper finally discusses and identifies the open research challenges of MEC. 展开更多
关键词 mobile edge computing 5G mobile internet mobile network mobile application
在线阅读 下载PDF
上一页 1 2 250 下一页 到第
使用帮助 返回顶部