The 3-D rigid visco-plastic finite element method (FEM) is used in the analysis of metal forming processes, including strip and plate rolling, shape rolling, slab edging, and special strip rolling. The shifted incomplete Cholesky decomposition of the stiffness matrix is combined with the solution of the equations for the velocity increment by the conjugate gradient method. This technique, termed the shifted ICCG method, is then employed to solve the slab edging problem. The performance of the algorithm in terms of the number of iterations, friction variation, and the shift parameter ψ, together with the simulation results for the processing parameters, is analysed. Numerical tests and application of this technique verify the efficiency and stability of the shifted ICCG method in the analysis of slab edging.
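The structure of the shifted ICCG iteration above can be sketched in NumPy. Since the abstract gives no implementation details, the exact Cholesky factor of the shifted matrix below is an illustrative stand-in for the incomplete factorization used in practice; `psi` plays the role of the shift parameter ψ.

```python
import numpy as np

def shifted_ic_pcg(A, b, psi=0.01, tol=1e-8, max_iter=200):
    """Conjugate gradient preconditioned by a Cholesky factor of the
    diagonally shifted matrix A + psi * diag(A).

    Note: a production shifted-ICCG solver uses an *incomplete* Cholesky
    factorization; the exact factor here only illustrates the structure."""
    d = np.diag(np.diag(A))
    L = np.linalg.cholesky(A + psi * d)       # shifted factorization

    def apply_M_inv(r):                        # solve L L^T z = r
        y = np.linalg.solve(L, r)
        return np.linalg.solve(L.T, y)

    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, k + 1
```

A larger ψ makes the factorization more robust at the cost of a weaker preconditioner, which is why the abstract studies iteration counts as a function of ψ.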
The exponential growth of Internet of Things (IoT) devices, autonomous systems, and digital services is generating massive volumes of big data, projected to exceed 291 zettabytes by 2027. Conventional cloud computing, despite its high processing and storage capacity, suffers from increased network latency, network congestion, and high operational costs, making it unsuitable for latency-sensitive applications. Edge computing addresses these issues by processing data near the source but faces scalability challenges and an elevated Total Cost of Ownership (TCO). Hybrid solutions, such as fog computing, cloudlets, and Mobile Edge Computing (MEC), attempt to balance cost and performance; however, they still struggle with limited resource sharing and high deployment expenses. This paper proposes Public Edge as a Service (PEaaS), a novel paradigm that utilizes idle resources contributed by universities, enterprises, cellular operators, and individuals under a collaborative service model. By decentralizing computation and enabling multi-tenant resource sharing, PEaaS reduces reliance on centralized cloud infrastructure, minimizes communication costs, and enhances scalability. The proposed framework is evaluated using EdgeCloudSim under varying workloads on key metrics such as latency, communication cost, server utilization, and task failure rate. Results reveal that while the cloud's task failure rate rises sharply to 12.3% at 2000 devices, PEaaS maintains a low rate of 2.5%, closely matching edge computing. Furthermore, communication costs remain 25% lower than the cloud's, and latency remains below 0.3, even under peak load. These findings demonstrate that PEaaS achieves near-edge performance with reduced costs and enhanced scalability, offering a sustainable and economically viable solution for next-generation computing environments.
This work presents a systematic analysis of proton-induced total ionizing dose (TID) effects in 1.2 kV silicon carbide (SiC) power devices with various edge termination structures. Three edge terminations, including ring-assisted junction termination extension (RA-JTE), multiple floating zone JTE (MFZ-JTE), and field limiting rings (FLR), were fabricated and irradiated with 45 MeV protons at fluences ranging from 1×10^(12) to 1×10^(14) cm^(-2). Experimental results, supported by TCAD simulations, show that the RA-JTE structure maintained stable breakdown performance with less than 1% variation, owing to the effective electric field redistribution provided by its multiple P+ rings. In contrast, MFZ-JTE and FLR exhibited breakdown voltage shifts of 6.1% and 15.2%, respectively, under the highest fluence. These results demonstrate the superior radiation tolerance of the RA-JTE structure under TID conditions and provide practical design guidance for radiation-hardened SiC power devices in space and other high-radiation environments.
As a fundamental component in computer vision, edges can be categorized into four types based on discontinuities in reflectance, illumination, surface normal, or depth. While deep CNNs have significantly advanced generic edge detection, real-time multi-class semantic edge detection under resource constraints remains challenging. To address this, we propose a lightweight framework based on PiDiNet that enables fine-grained semantic edge detection. Our model simultaneously predicts background and four edge categories from full-resolution inputs, balancing accuracy and efficiency. Key contributions include: a multi-channel output structure expanding binary edge prediction to five classes, supported by a deep supervision mechanism; a dynamic class-balancing strategy combining adaptive weighting with physical priors to handle extreme class imbalance; and maintained architectural efficiency enabling real-time inference. Extensive evaluations on BSDS-RIND show our approach achieves accuracy competitive with state-of-the-art methods while operating in real time.
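The class-balancing idea above can be illustrated with a minimal sketch. The paper's dynamic strategy also uses physical priors, which the abstract does not specify; this sketch shows only the inverse-frequency half, where the dominant background class (here class 0) receives a small weight and the rare edge classes receive large ones.

```python
import numpy as np

def dynamic_class_weights(label_map, num_classes=5):
    """Inverse-frequency class weights with add-one smoothing, so absent
    classes stay finite; rescaled so the weights average to 1."""
    counts = np.bincount(label_map.ravel(), minlength=num_classes) + 1.0
    w = counts.sum() / counts
    return w * num_classes / w.sum()

def weighted_ce(probs, label_map, weights, eps=1e-12):
    """Class-weighted cross-entropy. probs has shape (C, H, W) with
    per-pixel class probabilities; label_map has shape (H, W)."""
    p_true = np.take_along_axis(probs, label_map[None], axis=0)[0]
    return float(np.mean(-weights[label_map] * np.log(p_true + eps)))
```

With such weights, the few edge pixels contribute as much to the loss as the vast background, which is the point of the balancing strategy.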
Traffic sign detection is a critical component of driving systems. Single-stage network-based traffic sign detection algorithms, renowned for their fast detection speeds and high accuracy, have become the dominant approach in current practice. However, in complex and dynamic traffic scenes, particularly with smaller traffic sign objects, challenges such as missed and false detections can reduce overall detection accuracy. To address this issue, this paper proposes a detection algorithm that integrates edge and shape information. Recognizing that traffic signs have specific shapes and distinct edge contours, the paper introduces an edge feature extraction branch within the backbone network, enabling adaptive fusion with features at the same hierarchical level. Additionally, a shape prior convolution module is designed to replace the first two convolutional modules of the backbone network, aimed at enhancing the model's ability to perceive objects of specific shapes and reducing its sensitivity to background noise. The algorithm was evaluated on the CCTSDB and TT100K datasets; compared to YOLOv8s, the mAP50 values increased by 3.0% and 10.4%, respectively, demonstrating the effectiveness of the proposed method in improving the accuracy of traffic sign detection.
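The abstract does not describe the edge feature extraction branch in detail; a common starting point for such a branch is a fixed-kernel gradient operator. The sketch below computes a Sobel edge-magnitude map of the kind such a branch could fuse with learned features (the fusion itself is not reproduced here).

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d_same(img, kernel):
    """3x3 cross-correlation with zero padding ('same'-sized output);
    kernel flipping is irrelevant for the symmetric Sobel magnitude."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def edge_magnitude(img):
    """Gradient magnitude sqrt(gx^2 + gy^2) from the two Sobel responses."""
    gx = conv2d_same(img, SOBEL_X)
    gy = conv2d_same(img, SOBEL_Y)
    return np.hypot(gx, gy)
```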
Due to the growth of smart cities, many real-time systems have been developed to support them using the Internet of Things (IoT) and emerging technologies. These systems are designed to collect data for environmental monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise the network's security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a multi-agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it improves the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent, low-latency network communication. In simulations, the proposed framework outperforms state-of-the-art approaches, improving energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and resilience to denial-of-service attacks by 42%.
With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use a Domain Generation Algorithm (DGA) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGAs, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and integrates the ATT for key-feature weighting, forming multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to a simplified DistilBERT student model. Experimental results show that the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while compressing the parameters to approximately 38.4% of the teacher model's scale, greatly reducing the computational overhead for IoT deployment.
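The core of any knowledge distillation setup is the loss that transfers the teacher's soft predictions to the student. The abstract does not give the paper's improved loss, so the sketch below shows the standard Hinton-style formulation it builds on; the temperature `T` and blend factor `alpha` are illustrative values.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      T=2.0, alpha=0.5, eps=1e-12):
    """KL divergence between temperature-softened teacher and student
    distributions, blended with the hard-label cross-entropy. The T^2
    factor keeps the soft-target gradient scale comparable."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + eps) - np.log(p_s + eps)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + eps)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```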
The growing developments in 5G and 6G wireless communications have revolutionized communications technologies, providing faster speeds with reduced latency and improved connectivity to users. However, this raises significant security challenges, including impersonation threats, data manipulation, distributed denial of service (DDoS) attacks, and privacy breaches. Traditional security measures are inadequate due to the decentralized and dynamic nature of next-generation networks. This survey provides a comprehensive review of how Federated Learning (FL), Blockchain, and Digital Twin (DT) technologies can collectively enhance the security of 5G and 6G systems. Blockchain offers decentralized, immutable, and transparent mechanisms for securing network transactions, while FL enables privacy-preserving collaborative learning without sharing raw data. Digital Twins create virtual replicas of network components, enabling real-time monitoring, anomaly detection, and predictive threat analysis. The survey examines major security issues in emerging wireless architectures and analyzes recent advancements that integrate FL, Blockchain, and DT to mitigate these threats. Additionally, it presents practical use cases, synthesizes key lessons learned, and identifies ongoing research challenges. Finally, the survey outlines future research directions to support the development of scalable, intelligent, and robust security frameworks for next-generation wireless networks.
There has been an increasing emphasis on performing deep neural network (DNN) inference locally on edge devices due to challenges such as network congestion and security concerns. However, as DRAM process technology continues to scale down, bit-flip errors in the memory of edge devices become more frequent, thereby leading to substantial DNN inference accuracy loss. Though several techniques have been proposed to alleviate the accuracy loss in edge environments, they require complex computations and additional parity bits for error correction, thus resulting in significant performance and storage overheads. In this paper, we propose FeatherGuard, a data-driven lightweight error protection scheme for DNN inference on edge devices. FeatherGuard selectively protects critical bit positions (those that have a significant impact on DNN inference accuracy) against bit-flip errors, by considering various DNN characteristics (e.g., data format, layer-wise weight distribution, and the actually stored logical values). Thus, it achieves high error tolerability during DNN inference. Since FeatherGuard reduces bit-flip errors based on only a few simple arithmetic operations (e.g., NOT operations) without parity bits, it causes negligible performance overhead and no storage overhead. Our experimental results show that FeatherGuard improves error tolerability by up to 6667× and 4000×, compared with conventional systems and the state-of-the-art error protection technique for edge environments, respectively.
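FeatherGuard's premise, that some bit positions matter far more than others, is easy to demonstrate. The protection scheme itself is not detailed in the abstract; this standalone sketch only shows why the exponent bits of an IEEE-754 float32 weight are critical while mantissa LSB flips are nearly harmless.

```python
import struct

def flip_bit(x: float, pos: int) -> float:
    """Flip bit `pos` of the float32 encoding of x
    (0 = mantissa LSB, 23-30 = exponent, 31 = sign)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << pos)))
    return y
```

Flipping the exponent MSB of a typical weight like 0.5 catapults it to roughly 1.7e38, wrecking inference, while flipping the mantissa LSB changes it by about 6e-8; this asymmetry is what makes selectively protecting a few bit positions effective.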
Industrial operators need reliable communication in high-noise, safety-critical environments where speech or touch input is often impractical. Existing gesture systems either miss real-time deadlines on resource-constrained hardware or lose accuracy under occlusion, vibration, and lighting changes. We introduce Industrial EdgeSign, a dual-path framework that combines hardware-aware neural architecture search (NAS) with large multimodal model (LMM)-guided semantics to deliver robust, low-latency gesture recognition on edge devices. The searched model uses a truncated ResNet50 front end, a dimensional-reduction network that preserves spatiotemporal structure for tubelet-based attention, and localized Transformer layers tuned for on-device inference. To reduce reliance on gloss annotations and mitigate domain shift, we distill semantics from factory-tuned vision-language models and pre-train with masked language modeling and video-text contrastive objectives, aligning visual features with a shared text space. On ML2HP and SHREC'17, the NAS-derived architecture attains 94.7% accuracy with 86 ms inference latency and about 5.9 W power consumption on a Jetson Nano. Under occlusion, lighting shifts, and motion blur, accuracy remains above 82%. For safety-critical commands, the emergency-stop gesture achieves 72 ms 99th-percentile latency with 99.7% fail-safe triggering. Ablation studies confirm the contribution of the spatiotemporal tubelet extractor and text-side pre-training, and we observe gains in translation quality (BLEU-4 of 22.33). These results show that Industrial EdgeSign provides accurate, resource-aware, and safety-aligned gesture recognition suitable for deployment in smart factory settings.
Underwater images often suffer from light scattering, color distortion, and detail blurring, which limit the effectiveness of underwater visual tasks. Existing underwater image enhancement methods, although they can improve image quality to some extent, often lead to problems such as detail loss and edge blurring. To address these problems, we propose FENet, an efficient underwater image enhancement method. FENet first obtains images at three different scales by downsampling and then transforms them into the frequency domain to extract the low-frequency and high-frequency spectra. A distance mask and a mean mask are then constructed, based on the distance and the magnitude mean respectively, to enhance the high-frequency part, thus improving image detail, while noise in the low-frequency part is suppressed. Because of the light scattering in underwater images, some details are lost if the result is converted directly back to the spatial domain after the frequency-domain operations. For this reason, we propose a multi-stage residual feature aggregation module, which focuses on detail extraction and effectively avoids the information loss caused by global enhancement. Finally, we apply an edge guidance strategy to further enhance the edge details of the image. Experimental results indicate that FENet outperforms current state-of-the-art underwater image enhancement methods in quantitative and qualitative evaluations on multiple publicly available datasets.
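The distance-mask step above follows a standard pattern: split the shifted spectrum into low and high frequencies by distance from the spectrum centre. The sketch below shows that split (the enhancement applied to each band, and the mean mask, are paper-specific and not reproduced); `radius_ratio` is an illustrative parameter.

```python
import numpy as np

def frequency_masks(shape, radius_ratio=0.1):
    """Boolean low/high-frequency masks by distance from the centre of
    the fftshift-ed spectrum."""
    H, W = shape
    cy, cx = H // 2, W // 2
    yy, xx = np.ogrid[:H, :W]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    low = dist <= radius_ratio * min(H, W)
    return low, ~low

def split_spectrum(img, radius_ratio=0.1):
    """Decompose an image into low- and high-frequency components that
    sum back to the original."""
    F = np.fft.fftshift(np.fft.fft2(img))
    low, high = frequency_masks(img.shape, radius_ratio)
    f_low = np.fft.ifft2(np.fft.ifftshift(F * low)).real
    f_high = np.fft.ifft2(np.fft.ifftshift(F * high)).real
    return f_low, f_high
```

Because the two masks partition the spectrum, the components reconstruct the input exactly; enhancement then amplifies `f_high` (detail) while attenuating noise in `f_low` before recombination.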
In scenarios where ground-based cloud computing infrastructure is unavailable, unmanned aerial vehicles (UAVs) can act as mobile edge computing (MEC) servers to provide on-demand computation services for ground terminals. To address the challenge of jointly optimizing task scheduling and UAV trajectory under the limited resources and high mobility of UAVs, this paper presents PER-MATD3, a multi-agent deep reinforcement learning algorithm that incorporates prioritized experience replay (PER) into the Centralized Training with Decentralized Execution (CTDE) framework. Specifically, PER-MATD3 enables each agent to learn a decentralized policy using only local observations during execution, while leveraging a shared replay buffer with prioritized sampling and a centralized critic during training to accelerate convergence and improve sample efficiency. Simulation results show that PER-MATD3 reduces average task latency by up to 23%, improves energy efficiency by 21%, and enhances service coverage compared to state-of-the-art baselines, demonstrating its effectiveness and practicality in scenarios without terrestrial networks.
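The prioritized sampling component above follows the standard proportional scheme of Schaul et al.; a minimal sketch is shown below. A flat array stands in for the usual sum-tree, and the MATD3-specific agents and critics are not reproduced; `alpha` and `beta` are the conventional priority and importance-sampling exponents.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized experience replay (simplified sketch)."""

    def __init__(self, capacity, alpha=0.6, beta=0.4, eps=1e-5):
        self.capacity, self.alpha, self.beta, self.eps = capacity, alpha, beta, eps
        self.data, self.prios = [], []

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:       # drop the oldest entry
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, batch_size, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        p = np.asarray(self.prios)
        p = p / p.sum()                            # sampling distribution
        idx = rng.choice(len(self.data), size=batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-self.beta)
        weights /= weights.max()                   # normalized IS weights
        return [self.data[i] for i in idx], idx, weights
```

Transitions with large TD error are replayed more often, and the importance-sampling weights correct the resulting bias in the critic updates.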
Traffic at urban intersections frequently encounters unexpected obstructions, resulting in congestion due to uncooperative, priority-based driving behavior. This paper presents an optimal right-turn coordination system for Connected and Automated Vehicles (CAVs) at single-lane intersections, particularly in the context of left-hand driving. The goal is to facilitate smooth right turns for certain vehicles without creating bottlenecks. We assume that all approaching vehicles share relevant information through vehicular communications. The Intersection Coordination Unit (ICU) processes this information and communicates the optimal crossing or turning times to the vehicles. The primary objective of the coordination is to minimize overall traffic delay, which also helps reduce vehicle fuel consumption. Using information from vehicles approaching the intersection, the coordination system solves an optimization problem to determine the best timing for executing right turns, ultimately minimizing the total delay for all vehicles. The proposed coordination system is evaluated at a typical urban intersection, and its performance is compared to traditional traffic systems. Numerical simulation results indicate that the proposed coordination system significantly improves average traffic speed and fuel consumption compared to the traditional traffic system in various scenarios.
The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computational resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
In real-world autonomous driving tests, unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur. Conducting actual test drives under various weather conditions may also lead to dangerous situations. Furthermore, autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS. Driving simulators, which replicate driving conditions nearly identical to those in the real world, can drastically reduce the time and cost required for market-entry validation; consequently, they have become widely used. In this paper, we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images. The proposed method generates a virtual testing environment that incorporates various events, including weather, time of day, and moving objects, that cannot be easily verified in real-world autonomous driving tests. By setting up scenario-based virtual environment events, multi-source image analysis and verification using real-world DCUs (Data Concentrator Units) with a V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations. We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) with a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
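Decomposition-based many-objective algorithms share a common core: a set of uniform weight vectors, each turning the vector-valued problem into one scalar subproblem via a Tchebycheff-style aggregation. The MOCC/D operators themselves are not specified in the abstract; the sketch below shows only this standard decomposition machinery.

```python
import itertools
import numpy as np

def weight_vectors(n_obj, h):
    """Simplex-lattice weight vectors: all nonnegative integer
    compositions of h across n_obj objectives, divided by h."""
    vecs = [np.array(c, dtype=float) / h
            for c in itertools.product(range(h + 1), repeat=n_obj)
            if sum(c) == h]
    return np.array(vecs)

def tchebycheff(f, lam, z_star, eps=1e-6):
    """Tchebycheff scalarization g(x | lam, z*) = max_i lam_i |f_i - z*_i|;
    eps keeps zero-weight objectives from being ignored entirely."""
    lam = np.maximum(np.asarray(lam), eps)
    return float(np.max(lam * np.abs(np.asarray(f) - np.asarray(z_star))))
```

Each weight vector defines a neighborhood of similar subproblems; candidate solutions replace neighbors whose scalarized value they improve, which is how decomposition methods keep pressure toward the whole Pareto front even with many objectives.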
With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale-variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) a Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81 M vs. the baseline's 2.58 M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
Academic journals, consulting firms, and mainstream media have often published "predictions" about the future development of medical and health care. These publications often emphasize the potential of cutting-edge scientific or technical breakthroughs. Health Care Science looks at the problem from another perspective. We focus on how these changes enter the health system, how they operate in the real world, and how they reshape the organization and governance of medical services. At the beginning of 2026, we envision the following three major shifts that will reshape healthcare.
Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great amount of computation-intensive business and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation offloading strategy in order to satisfy the various quality of service (QoS) requirements of edge devices. To dynamically allocate the limited resources, the above problem is formulated as multi-agent distributed deep reinforcement learning (DRL), which yields a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is used to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations, so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
Federated Learning (FL) provides an effective framework for efficient processing in vehicular edge computing. However, the dynamic and uncertain communication environment, along with the performance variations of vehicular devices, affects the distribution and uploading of model parameters. In FL-assisted Internet of Vehicles (IoV) scenarios, challenges such as data heterogeneity, limited device resources, and unstable communication environments become increasingly prominent. These issues necessitate intelligent vehicle selection schemes to enhance training efficiency. Given this context, we propose a new scenario involving FL-assisted IoV systems under dynamic and uncertain communication conditions, and develop a dynamic interval multi-objective optimization algorithm, based on interval overlap detection, that jointly optimizes factors including training efficiency, system energy consumption, and bandwidth utilization to meet multi-criteria resource optimization requirements. Simulation results demonstrate that our method outperforms other solutions in terms of accuracy, training cost, and server utilization. It effectively enhances training efficiency in wireless channel environments while rationally utilizing bandwidth resources, and thus has significant scientific value and application potential in the field of IoV.
Funding: Supported by the Huo Yingdong Young Teachers Foundation, Ministry of State Education of China, and the National Natural Science Foundation of China (No. 59904003).
Funding: Supported by the IITP (Institute for Information & Communications Technology Planning & Evaluation) under the ITRC (Information Technology Research Center) support program (IITP-2025-RS-2024-00438288) grant funded by the Korea government (MSIT); by a National Research Council of Science & Technology (NST) grant funded by the MSIT (Aerospace Semiconductor Strategy Research Project No. GTL25051-000); and by the IC Design Education Center (IDEC), Korea.
Abstract: This work presents a systematic analysis of proton-induced total ionizing dose (TID) effects in 1.2 kV silicon carbide (SiC) power devices with various edge termination structures. Three edge terminations, including ring-assisted junction termination extension (RA-JTE), multiple floating zone JTE (MFZ-JTE), and field limiting rings (FLR), were fabricated and irradiated with 45 MeV protons at fluences ranging from 1×10^(12) to 1×10^(14) cm^(-2). Experimental results, supported by TCAD simulations, show that the RA-JTE structure maintained stable breakdown performance with less than 1% variation due to its effective electric field redistribution by multiple P+ rings. In contrast, MFZ-JTE and FLR exhibit breakdown voltage shifts of 6.1% and 15.2%, respectively, under the highest fluence. These results demonstrate the superior radiation tolerance of the RA-JTE structure under TID conditions and provide practical design guidance for radiation-hardened SiC power devices in space and other high-radiation environments.
Funding: Supported by the National Natural Science Foundation of China under Grant 62402171.
Abstract: As a fundamental component in computer vision, edges can be categorized into four types based on discontinuities in reflectance, illumination, surface normal, or depth. While deep CNNs have significantly advanced generic edge detection, real-time multi-class semantic edge detection under resource constraints remains challenging. To address this, we propose a lightweight framework based on PiDiNet that enables fine-grained semantic edge detection. Our model simultaneously predicts the background and four edge categories from full-resolution inputs, balancing accuracy and efficiency. Key contributions include: a multi-channel output structure expanding binary edge prediction to five classes, supported by a deep supervision mechanism; a dynamic class-balancing strategy combining adaptive weighting with physical priors to handle extreme class imbalance; and maintained architectural efficiency enabling real-time inference. Extensive evaluations on BSDS-RIND show our approach achieves accuracy competitive with state-of-the-art methods while operating in real time.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62572057, 62272049, U24A20331), the Beijing Natural Science Foundation (Grant Nos. 4232026, 4242020), and the Academic Research Projects of Beijing Union University (Grant No. ZK10202404).
Abstract: Traffic sign detection is a critical component of driving systems. Single-stage network-based traffic sign detection algorithms, renowned for their fast detection speeds and high accuracy, have become the dominant approach in current practice. However, in complex and dynamic traffic scenes, particularly with smaller traffic sign objects, challenges such as missed and false detections can reduce overall detection accuracy. To address this issue, this paper proposes a detection algorithm that integrates edge and shape information. Recognizing that traffic signs have specific shapes and distinct edge contours, this paper introduces an edge feature extraction branch within the backbone network, enabling adaptive fusion with features of the same hierarchical level. Additionally, a shape prior convolution module is designed to replace the first two convolutional modules of the backbone network, aimed at enhancing the model's perception of specifically shaped objects and reducing its sensitivity to background noise. The algorithm was evaluated on the CCTSDB and TT100k datasets; compared to YOLOv8s, the mAP50 values increased by 3.0% and 10.4%, respectively, demonstrating the effectiveness of the proposed method in improving the accuracy of traffic sign detection.
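As an illustration of the kind of edge information such a branch can extract from an input image, here is a minimal Sobel edge-magnitude computation in numpy. This is a generic sketch of classical edge extraction, not the paper's learned edge-feature branch:

```python
import numpy as np

def sobel_edge_magnitude(img):
    # img: 2-D grayscale array; returns the per-pixel gradient magnitude
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # correlate with the two 3x3 Sobel kernels via shifted slices
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)
```

A learned branch would replace the fixed kernels with trainable convolutions, but the response it fuses into the backbone plays the same role: strong activations along sign contours.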
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at Jouf University.
Abstract: Due to the growth of smart cities, many real-time systems have been developed to support smart cities using the Internet of Things (IoT) and emerging technologies. They are formulated to collect data for environment monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise the network's security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a multi-agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, aiming to leverage artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it enhances the efficiency of distributed sensors in terms of energy consumption through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. Using simulations, the proposed framework shows significant performance gains over state-of-the-art approaches: 43% in energy consumption, 46% in latency, 51% in network throughput, 40% in packet loss rate, and 42% in resistance to denial-of-service attacks.
Funding: Supported by the National Natural Science Foundation of China (62461041) and the Natural Science Foundation of Jiangxi Province, China (20242BAB25068).
Abstract: With the large-scale deployment of Internet of Things (IoT) devices, their weak security mechanisms make them prime targets for malware attacks. Attackers often use a Domain Generation Algorithm (DGA) to generate random domain names, hiding the real IP of Command and Control (C&C) servers to build botnets. Due to the randomness and dynamics of DGAs, traditional methods struggle to detect them accurately, increasing the difficulty of network defense. This paper proposes a lightweight DGA detection model based on knowledge distillation for resource-constrained IoT environments. Specifically, a teacher model combining CharacterBERT, a bidirectional long short-term memory (BiLSTM) network, and an attention mechanism (ATT) is constructed: it extracts character-level semantic features via CharacterBERT, captures sequence dependencies with the BiLSTM, and integrates the ATT for key-feature weighting, forming multi-granularity feature fusion. An improved knowledge distillation approach transfers the teacher model's learned knowledge to the simplified DistilBERT student model. Experimental results show the teacher model achieves 98.68% detection accuracy. The student model maintains slightly improved accuracy while significantly compressing parameters to approximately 38.4% of the teacher model's scale, greatly reducing computational overhead for IoT deployment.
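The teacher-to-student transfer step can be illustrated with the standard distillation loss: a temperature-softened KL divergence between teacher and student logits plus a cross-entropy term on the ground-truth labels. This is a generic numpy sketch of that loss, not the paper's improved distillation approach; the temperature T and mixing weight alpha are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled, numerically stable softmax over the last axis
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # soft loss: KL(teacher_T || student_T), scaled by T^2 as is conventional
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)),
                  axis=-1).mean() * T * T
    # hard loss: cross-entropy of the student against the true labels
    p = softmax(student_logits)
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard
```

When the student matches the teacher exactly, the soft term vanishes and only the label cross-entropy remains, which is the sanity check below.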
Funding: Derived from a research grant under the “Cybersecurity Research and Innovation Pioneers Grants Initiative” funded by the National Program for RDI in Cybersecurity (National Cybersecurity Authority), Kingdom of Saudi Arabia, with grant number CRPG-25-3168; also supported by the EIAS Data Science and Blockchain Lab, CCIS, Prince Sultan University.
Abstract: The growing developments in 5G and 6G wireless communications have revolutionized communications technologies, providing faster speeds with reduced latency and improved connectivity to users. However, they raise significant security challenges, including impersonation threats, data manipulation, distributed denial-of-service (DDoS) attacks, and privacy breaches. Traditional security measures are inadequate due to the decentralized and dynamic nature of next-generation networks. This survey provides a comprehensive review of how Federated Learning (FL), Blockchain, and Digital Twin (DT) technologies can collectively enhance the security of 5G and 6G systems. Blockchain offers decentralized, immutable, and transparent mechanisms for securing network transactions, while FL enables privacy-preserving collaborative learning without sharing raw data. Digital Twins create virtual replicas of network components, enabling real-time monitoring, anomaly detection, and predictive threat analysis. The survey examines major security issues in emerging wireless architectures and analyzes recent advancements that integrate FL, Blockchain, and DT to mitigate these threats. Additionally, it presents practical use cases, synthesizes key lessons learned, and identifies ongoing research challenges. Finally, the survey outlines future research directions to support the development of scalable, intelligent, and robust security frameworks for next-generation wireless networks.
Funding: Supported by the “Convergence and Open sharing System” Project, funded by the Ministry of Education and the National Research Foundation of Korea.
Abstract: There has been an increasing emphasis on performing deep neural network (DNN) inference locally on edge devices due to challenges such as network congestion and security concerns. However, as DRAM process technology continues to scale down, bit-flip errors in the memory of edge devices become more frequent, leading to substantial DNN inference accuracy loss. Though several techniques have been proposed to alleviate this accuracy loss in edge environments, they require complex computations and additional parity bits for error correction, resulting in significant performance and storage overheads. In this paper, we propose FeatherGuard, a data-driven lightweight error protection scheme for DNN inference on edge devices. FeatherGuard selectively protects critical bit positions (those that have a significant impact on DNN inference accuracy) against bit-flip errors by considering various DNN characteristics (e.g., data format, layer-wise weight distribution, actually stored logical values). Thus, it achieves high error tolerability during DNN inference. Since FeatherGuard reduces bit-flip errors based on only a few simple arithmetic operations (e.g., NOT operations) without parity bits, it causes negligible performance overhead and no storage overhead. Our experimental results show that FeatherGuard improves error tolerability by up to 6667× and 4000×, compared to conventional systems and the state-of-the-art error protection technique for edge environments, respectively.
Abstract: Industrial operators need reliable communication in high-noise, safety-critical environments where speech or touch input is often impractical. Existing gesture systems either miss real-time deadlines on resource-constrained hardware or lose accuracy under occlusion, vibration, and lighting changes. We introduce Industrial EdgeSign, a dual-path framework that combines hardware-aware neural architecture search (NAS) with large multimodal model (LMM)-guided semantics to deliver robust, low-latency gesture recognition on edge devices. The searched model uses a truncated ResNet50 front end, a dimensional-reduction network that preserves spatiotemporal structure for tubelet-based attention, and localized Transformer layers tuned for on-device inference. To reduce reliance on gloss annotations and mitigate domain shift, we distill semantics from factory-tuned vision-language models and pre-train with masked language modeling and video-text contrastive objectives, aligning visual features with a shared text space. On ML2HP and SHREC'17, the NAS-derived architecture attains 94.7% accuracy with 86 ms inference latency and about 5.9 W power on a Jetson Nano. Under occlusion, lighting shifts, and motion blur, accuracy remains above 82%. For safety-critical commands, the emergency-stop gesture achieves 72 ms 99th-percentile latency with 99.7% fail-safe triggering. Ablation studies confirm the contribution of the spatiotemporal tubelet extractor and text-side pre-training, and we observe gains in translation quality (BLEU-4 22.33). These results show that Industrial EdgeSign provides accurate, resource-aware, and safety-aligned gesture recognition suitable for deployment in smart factory settings.
Funding: Supported in part by the National Natural Science Foundation of China [Grant number 62471075] and the Major Science and Technology Project Grant of the Chongqing Municipal Education Commission [Grant number KJZD-M202301901].
Abstract: Underwater images often suffer from light scattering, color distortion, and detail blurring, which degrade the effectiveness of underwater visual tasks and limit their application performance. Existing underwater image enhancement methods, although they can improve image quality to some extent, often lead to problems such as detail loss and edge blurring. To address these problems, we propose FENet, an efficient underwater image enhancement method. FENet first obtains three different scales of images by downsampling and then transforms them into the frequency domain to extract the low-frequency and high-frequency spectra, respectively. A distance mask and a mean mask are then constructed, based on the distance and the magnitude mean, to enhance the high-frequency part, thus improving image details, while noise in the low-frequency part is suppressed to enhance the overall effect. Because underwater images are affected by light scattering, some details are lost if the result is transformed directly back to the spatial domain after the frequency-domain operation. For this reason, we propose a multi-stage residual feature aggregation module, which focuses on detail extraction and effectively avoids the information loss caused by global enhancement. Finally, we combine an edge guidance strategy to further enhance the edge details of the image. Experimental results indicate that FENet outperforms current state-of-the-art underwater image enhancement methods in quantitative and qualitative evaluations on multiple publicly available datasets.
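The frequency-domain step described above, an FFT, a radial distance mask separating low and high frequencies, and a magnitude-mean threshold on the high band, can be sketched as follows for a single-channel image. The cut-off radius and gain are illustrative assumptions, not the paper's values:

```python
import numpy as np

def frequency_enhance(img, radius=8, gain=1.5):
    # img: 2-D grayscale array
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    low = dist <= radius            # distance mask: low-frequency region
    high = ~low
    mag = np.abs(F)
    # mean mask: boost only high-frequency coefficients above the band's
    # mean magnitude, leaving weaker (noise-dominated) coefficients alone
    mean_mask = high & (mag > mag[high].mean())
    F_enh = F.copy()
    F_enh[mean_mask] *= gain
    return np.fft.ifft2(np.fft.ifftshift(F_enh)).real
```

With gain = 1.0 the round trip is the identity (up to floating-point error), which makes the transform chain easy to verify in isolation.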
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61701100.
Abstract: In scenarios where ground-based cloud computing infrastructure is unavailable, unmanned aerial vehicles (UAVs) act as mobile edge computing (MEC) servers to provide on-demand computation services for ground terminals. To address the challenge of jointly optimizing task scheduling and UAV trajectory under the limited resources and high mobility of UAVs, this paper presents PER-MATD3, a multi-agent deep reinforcement learning algorithm that integrates prioritized experience replay (PER) into the Centralized Training with Decentralized Execution (CTDE) framework. Specifically, PER-MATD3 enables each agent to learn a decentralized policy using only local observations during execution, while leveraging a shared replay buffer with prioritized sampling and a centralized critic during training to accelerate convergence and improve sample efficiency. Simulation results show that PER-MATD3 reduces average task latency by up to 23%, improves energy efficiency by 21%, and enhances service coverage compared to state-of-the-art baselines, demonstrating its effectiveness and practicality in scenarios without terrestrial networks.
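The prioritized replay component can be sketched with the standard proportional-priority scheme: priorities p_i = |TD error| + eps, sampling probability proportional to p_i^alpha, and importance-sampling weights with exponent beta. This is a generic PER sketch, not the PER-MATD3 code; the class name and hyperparameter values are illustrative:

```python
import numpy as np

class PrioritizedReplay:
    # proportional PER: P(i) ∝ p_i^alpha, with importance weights w_i
    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.prios = [], []

    def add(self, transition):
        p = max(self.prios, default=1.0)   # new samples get max priority
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(p)

    def sample(self, batch, beta=0.4, seed=None):
        rng = np.random.default_rng(seed)
        p = np.asarray(self.prios) ** self.alpha
        probs = p / p.sum()
        idx = rng.choice(len(self.data), size=batch, p=probs)
        w = (len(self.data) * probs[idx]) ** (-beta)
        w /= w.max()                       # normalize IS weights to [0, 1]
        return idx, [self.data[i] for i in idx], w

    def update(self, idx, td_errors):
        # refresh priorities after a training step
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(e) + self.eps
```

In a shared-buffer CTDE setting all agents push transitions into one such buffer, and the centralized critic's TD errors drive the `update` call.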
Funding: Supported by the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (C) 23K03898.
Abstract: Traffic at urban intersections frequently encounters unexpected obstructions, resulting in congestion due to uncooperative and priority-based driving behavior. This paper presents an optimal right-turn coordination system for Connected and Automated Vehicles (CAVs) at single-lane intersections, particularly in the context of left-hand traffic. The goal is to facilitate smooth right turns for certain vehicles without creating bottlenecks. We assume that all approaching vehicles share relevant information through vehicular communications. The Intersection Coordination Unit (ICU) processes this information and communicates the optimal crossing or turning times to the vehicles. The primary objective of this coordination is to minimize overall traffic delays, which also helps improve the fuel consumption of vehicles. By considering information from upcoming vehicles at the intersection, the coordination system solves an optimization problem to determine the best timing for executing right turns, ultimately minimizing the total delay for all vehicles. The proposed coordination system is evaluated at a typical urban intersection, and its performance is compared to traditional traffic systems. Numerical simulation results indicate that the proposed system significantly improves average traffic speed and fuel consumption compared to a traditional traffic system in various scenarios.
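The delay-minimization idea can be illustrated with a toy version of the scheduling step: given estimated arrival times at a single conflict point and a minimum safety headway, assign the earliest feasible crossing times and measure the total delay. This is a simplified single-conflict-point sketch; the paper's optimization also covers turning movements and fuel consumption:

```python
def schedule_crossings(arrivals, headway=2.0):
    """Assign earliest feasible crossing times in arrival order.

    arrivals: estimated arrival times at the conflict point (seconds).
    Returns (crossing_times, total_delay)."""
    order = sorted(range(len(arrivals)), key=lambda i: arrivals[i])
    cross = [0.0] * len(arrivals)
    last = float("-inf")
    for i in order:
        # a vehicle crosses at its arrival time, or one headway after
        # the previous vehicle, whichever is later
        cross[i] = max(arrivals[i], last + headway)
        last = cross[i]
    total_delay = sum(c - a for c, a in zip(cross, arrivals))
    return cross, total_delay
```

An ICU-style optimizer would search over crossing orders and turn timings rather than fixing first-come-first-served, but the delay objective it minimizes has this form.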
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62276109. The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group Project number ORF-2025-585.
Abstract: The personalized fine-tuning of large language models (LLMs) on edge devices is severely constrained by limited computation resources. Although split federated learning alleviates on-device burdens, its effectiveness diminishes in few-shot reasoning scenarios due to the low data efficiency of conventional supervised fine-tuning, which leads to excessive communication overhead. To address this, we propose Language-Empowered Split Fine-Tuning (LESFT), a framework that integrates split architectures with a contrastive-inspired fine-tuning paradigm. LESFT simultaneously learns from multiple logically equivalent but linguistically diverse reasoning chains, providing richer supervisory signals and improving data efficiency. This process-oriented training allows more effective reasoning adaptation with fewer samples. Extensive experiments demonstrate that LESFT consistently outperforms strong baselines such as SplitLoRA in task accuracy on GSM8K, CommonsenseQA, and AQUA_RAT, with the largest gains observed on Qwen2.5-3B. These results indicate that LESFT can effectively adapt large language models for reasoning tasks under the computational and communication constraints of edge environments.
Funding: Supported by an Institute of Information and Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2019-0-01842, Artificial Intelligence Graduate School Program (GIST)); by a Korea Planning & Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea) (RS-2025-25448249, Automotive Industry Technology Development (R&D) Program); and by the Regional Innovation System & Education (RISE) program through the Gwangju RISE Center, funded by the Ministry of Education (MOE) and the Gwangju Metropolitan City, Republic of Korea (2025-RISE-05-001).
Abstract: In real-world autonomous driving tests, unexpected events such as pedestrians or wild animals suddenly entering the driving path can occur. Conducting actual test drives under various weather conditions may also lead to dangerous situations. Furthermore, autonomous vehicles may operate abnormally in bad weather due to limitations of their sensors and GPS. Driving simulators, which replicate driving conditions nearly identical to those in the real world, can drastically reduce the time and cost required for market-entry validation; consequently, they have become widely used. In this paper, we design a virtual driving test environment capable of collecting and verifying SiLS data under adverse weather conditions using multi-source images. The proposed method generates a virtual testing environment that incorporates various events, including weather, time of day, and moving objects, that cannot be easily verified in real-world autonomous driving tests. By setting up scenario-based virtual environment events, multi-source image analysis and verification using real-world DCUs (Data Concentrator Units) with a V2X-Car edge cloud can effectively address risk factors that may arise in real-world situations. We tested and validated the proposed method with scenarios employing V2X communication and multi-source image analysis.
Funding: Supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809 and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
Abstract: As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Abstract: With the rapid expansion of drone applications, accurate detection of objects in aerial imagery has become crucial for intelligent transportation, urban management, and emergency rescue missions. However, existing methods face numerous challenges in practical deployment, including scale-variation handling, feature degradation, and complex backgrounds. To address these issues, we propose Edge-enhanced and Detail-Capturing You Only Look Once (EHDC-YOLO), a novel framework for object detection in Unmanned Aerial Vehicle (UAV) imagery. Based on the You Only Look Once version 11 nano (YOLOv11n) baseline, EHDC-YOLO systematically introduces several architectural enhancements: (1) a Multi-Scale Edge Enhancement (MSEE) module that leverages multi-scale pooling and edge information to enhance boundary feature extraction; (2) an Enhanced Feature Pyramid Network (EFPN) that integrates P2-level features with Cross Stage Partial (CSP) structures and OmniKernel convolutions for better fine-grained representation; and (3) a Dynamic Head (DyHead) with multi-dimensional attention mechanisms for enhanced cross-scale modeling and perspective adaptability. Comprehensive experiments on the Vision meets Drones for Detection (VisDrone-DET) 2019 dataset demonstrate that EHDC-YOLO achieves significant improvements, increasing mean Average Precision (mAP)@0.5 from 33.2% to 46.1% (an absolute improvement of 12.9 percentage points) and mAP@0.5:0.95 from 19.5% to 28.0% (an absolute improvement of 8.5 percentage points) compared with the YOLOv11n baseline, while maintaining a reasonable parameter count (2.81 M vs. the baseline's 2.58 M). Further ablation studies confirm the effectiveness of each proposed component, while visualization results highlight EHDC-YOLO's superior performance in detecting objects and handling occlusions in complex drone scenarios.
Abstract: Academic journals, consulting firms, and mainstream media have often published “predictions” about the future development of medical and health care. These publications often emphasize the potential of cutting-edge scientific or technical breakthroughs. Health Care Science looks at the problem from another perspective. We focus on how these changes enter the health system, how they operate in the real world, and how they reshape the organization and governance of medical services. At the beginning of 2026, we envision the following three major shifts that will reshape healthcare.
Abstract: Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great amount of computing-intensive business and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computing-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing and MEC technology, and introduces a hybrid computation offloading strategy to satisfy the various quality-of-service (QoS) requirements of edge devices. To dynamically allocate limited resources, the above problem is formulated as multi-agent distributed deep reinforcement learning (DRL), which achieves a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is added to train the DRL agents in a distributed manner, where each agent is dedicated to exploring actions composed of offloading decisions and resource allocations, so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm performs better than other strategies in the literature.
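The trade-off such an offloading strategy evaluates for each task can be sketched with the textbook delay/energy model: local execution costs cycles/f_local in time and kappa·cycles·f_local² in energy, while offloading costs upload time plus remote execution time and transmit energy. This is a generic single-task sketch with illustrative constants, not the paper's DRL policy:

```python
def offload_decision(data_bits, cycles, f_local, f_edge, rate,
                     p_tx=0.5, kappa=1e-27, w_delay=0.5, w_energy=0.5):
    """Compare local vs. edge execution for one task.

    Returns ('local' or 'edge', local_cost, edge_cost) using a weighted
    sum of delay and energy. kappa is the effective switched-capacitance
    coefficient of the device CPU (illustrative value)."""
    # local: delay = C/f, energy = kappa * C * f^2
    t_loc = cycles / f_local
    e_loc = kappa * cycles * f_local ** 2
    # edge: delay = upload + remote compute; energy = tx power * upload time
    t_up = data_bits / rate
    t_edge = t_up + cycles / f_edge
    e_edge = p_tx * t_up
    cost_loc = w_delay * t_loc + w_energy * e_loc
    cost_edge = w_delay * t_edge + w_energy * e_edge
    return ("local" if cost_loc <= cost_edge else "edge"), cost_loc, cost_edge
```

A DRL agent replaces this fixed rule with a learned policy over the same quantities, which is what lets it adapt the delay/energy weighting to channel and slice conditions.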
Funding: Supported in part by the Central Guidance for Local Science and Technology Development Funds under Grant No. YDZJSX2025D049 and the Shanxi Provincial Graduate Innovation Research Program under Grant No. 2024KY652.
Abstract: Federated Learning (FL) provides an effective framework for efficient processing in vehicular edge computing. However, the dynamic and uncertain communication environment, along with the performance variations of vehicular devices, affects the distribution and uploading of model parameters. In FL-assisted Internet of Vehicles (IoV) scenarios, challenges such as data heterogeneity, limited device resources, and unstable communication environments become increasingly prominent. These issues necessitate intelligent vehicle selection schemes to enhance training efficiency. Given this context, we propose a new scenario involving FL-assisted IoV systems under dynamic and uncertain communication conditions, and develop a dynamic interval multi-objective optimization algorithm to jointly optimize factors including training efficiency, system energy consumption, and bandwidth utilization to meet multi-criteria resource optimization requirements. For the problem at hand, we design a dynamic interval multi-objective optimization algorithm based on interval overlap detection. Simulation results demonstrate that our method outperforms other solutions in terms of accuracy, training cost, and server utilization. It effectively enhances training efficiency under wireless channel environments while rationally utilizing bandwidth resources, and thus has significant scientific value and application potential in the field of IoV.