Benefiting from wireless power transfer (WPT) and mobile-edge computing (MEC), wireless powered MEC systems have attracted widespread attention. Specifically, we design an online offloading scheme based on deep reinforcement learning that maximizes the computation rate and minimizes the energy consumption of all wireless devices (WDs). Extensive results validate that the proposed scheme achieves a better tradeoff between energy consumption and computation delay.
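As a rough illustration of the kind of DRL-based online offloading policy described above (not the authors' code), the sketch below assumes a small neural network that maps observed channel gains to relaxed offloading scores and quantizes them into binary offload/local decisions; the network sizes, the Rayleigh channel model, and the 0.5 threshold are illustrative assumptions, and the reward-driven training loop is omitted.

```python
# Minimal sketch of a DRL-style offloading policy: a toy two-layer network maps
# channel gains to relaxed scores, which are quantized to binary decisions.
import numpy as np

rng = np.random.default_rng(0)
N = 10                                   # number of wireless devices (assumed)
W1 = rng.normal(0, 0.1, (N, 32))         # toy policy-network weights (untrained here)
W2 = rng.normal(0, 0.1, (32, N))

def offload_decisions(channel_gains):
    """Map channel gains h (shape [N]) to binary offload/local decisions."""
    hidden = np.maximum(channel_gains @ W1, 0.0)      # ReLU layer
    scores = 1.0 / (1.0 + np.exp(-(hidden @ W2)))     # relaxed decisions in (0, 1)
    return (scores > 0.5).astype(int)                 # quantize to {0, 1}

h = rng.rayleigh(scale=1.0, size=N)      # toy fading channel gains
print("offload mask:", offload_decisions(h))
```

In the scheme the abstract describes, such a network would be updated from the observed computation-rate/energy reward rather than left at random initialization.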
Mobile-edge computing (MEC), which enables mobile devices to offload computing tasks to edge servers, can reduce the cost borne by terminals. However, a single MEC server usually has limited computing capability and cannot meet the requirements of a large number of terminals. In this paper, we consider an ultra-dense network (UDN) scenario in which the macro base stations (MBSs) are assisted by MEC servers. In particular, we first construct the system model for the MEC-assisted UDN and formulate the system overhead minimization problem. Next, to solve the problem, we decompose it into three subproblems: the offloading strategy subproblem, the channel assignment subproblem, and the power allocation subproblem. Then, employing joint offloading and resource allocation algorithms, we obtain the optimal joint strategy for MEC-assisted UDNs. Finally, simulations are conducted to evaluate the performance of the proposed algorithms. Numerical results show that the obtained algorithms can effectively reduce the energy consumption of the system and improve its overall performance.
The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors show great potential for emulating artificial synapses in high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structures, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, covering single devices, device arrays, and integration. The challenges and future perspectives of flexible memristors for neuromorphic computing are discussed in depth, paving the way for wearable smart electronics and for applications in large-scale neuromorphic computing and high-order intelligent robotics.
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO)v5 architecture to balance real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected for three major rice diseases frequently found in actual rice cultivation fields: Rice Blast, Bacterial Blight, and Brown Spot. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, yielding both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P had a smaller model size of 4.03 MB but showed lower performance, with an mAP@0.5 of 70.3%, mAP@0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
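Since the model's key change is swapping ShuffleNet V2 into the YOLOv5 backbone, a minimal PyTorch sketch of the ShuffleNet V2 basic unit (stride 1) may help convey where the compression comes from; this is a generic, assumed implementation of the published unit, not the paper's code, and the channel count is illustrative.

```python
# Minimal sketch of a ShuffleNet V2 basic unit: channel split, a cheap branch
# (1x1 conv -> depthwise 3x3 -> 1x1 conv), concatenation, and channel shuffle.
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class ShuffleV2Unit(nn.Module):
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # depthwise conv
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False), nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        left, right = x.chunk(2, dim=1)                 # channel split
        out = torch.cat((left, self.branch(right)), dim=1)
        return channel_shuffle(out)                     # mix information across branches

x = torch.randn(1, 64, 80, 80)
print(ShuffleV2Unit(64)(x).shape)                       # torch.Size([1, 64, 80, 80])
```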
Under the pressure of the energy dilemma, the energy Internet has become one of the most important technologies in both academia and industry. However, massive small data from users, which are too scattered and unsuitable for compression, can easily exhaust computational resources and lower the random access probability, thereby reducing system performance. Moreover, electric substations are sensitive to the transmission latency of user data such as control information, and the traditional energy Internet usually cannot meet these requirements. Integrating mobile-edge computing makes the energy Internet convenient for data acquisition, processing, management, and access. In this paper, we propose a novel framework for the energy Internet that improves the random access probability and reduces transmission latency. The framework utilizes the local area network to collect data from users and makes data compression for the energy Internet possible. Simulation results show that this architecture can enhance the random access probability by a large margin and reduce transmission latency without extra energy consumption overhead.
Mobile-edge computing casts the computation-intensive and delay-sensitive applications of mobile devices onto network edges. Task offloading incurs extra communication latency and energy cost, and extensive efforts have focused on offloading schemes. Many system utility metrics have been defined to achieve a satisfactory quality of experience. However, most existing works overlook the balance between throughput and fairness. This study investigates the problem of finding an optimal offloading scheme whose objective is to maximize a system utility that trades off throughput against fairness. Based on the Karush-Kuhn-Tucker conditions, the expected time complexity of deriving the optimal scheme is analyzed. A gradient-based approach for utility-aware task offloading is given. Furthermore, we provide an increment-based greedy approximation algorithm with a 1+1/(e-1) approximation ratio. Experimental results show that the proposed algorithms achieve effective performance in utility and accuracy.
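A minimal sketch of what an increment-based greedy offloading heuristic can look like, assuming a log-based (proportional-fairness style) utility and a single edge-capacity budget; the utility form, costs, and capacity are illustrative assumptions, not the paper's exact model or its 1+1/(e-1) analysis.

```python
# Minimal increment-based greedy heuristic: at each step, admit the device whose
# marginal utility gain per unit of consumed edge capacity is largest.
import math

def greedy_offload(rates, costs, capacity):
    """rates[i]: throughput gain if device i offloads; costs[i]: capacity it uses."""
    chosen, used = set(), 0.0

    def utility(s):
        # assumed proportional-fairness style utility: sum of log(1 + throughput gain)
        return sum(math.log1p(rates[i]) for i in s)

    while True:
        best, best_gain = None, 0.0
        for i in range(len(rates)):
            if i in chosen or used + costs[i] > capacity:
                continue
            gain = (utility(chosen | {i}) - utility(chosen)) / costs[i]  # marginal gain per cost
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            return sorted(chosen)
        chosen.add(best)
        used += costs[best]

print(greedy_offload(rates=[3.0, 1.5, 2.2, 0.8], costs=[2.0, 1.0, 1.5, 0.5], capacity=3.0))
```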
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks according to their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we use the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy that reduces the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
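A minimal PyTorch sketch of the dueling Q-network head at the core of Dueling-DDQN, i.e., separate value and advantage streams recombined as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a); the layer sizes are illustrative assumptions, and the prioritized replay and DDPG components are not shown.

```python
# Minimal dueling Q-network head: a shared trunk feeding value and advantage streams.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)          # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)    # advantages A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=-1, keepdim=True)   # Q(s, a)

q = DuelingQNet(state_dim=16, n_actions=8)
print(q(torch.randn(4, 16)).shape)                    # torch.Size([4, 8])
```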
Optoelectronic memristors are attracting growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents when 4-bit input electrical/optical signals are applied. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, achieving an accuracy of 98.7%. The optoelectronic memristor thus shows potential for developing multi-mode RC systems.
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the environment. In actual application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading problem using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To jointly optimize delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of the proposed solution, which achieves a 21% reduction in time delay and a 19% decrease in energy consumption compared with alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
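A minimal sketch of evaluating one offloading decision on a DAG task model: each task starts after all its predecessors finish, and an offloaded task pays an assumed transmission delay but runs on a faster edge CPU. The task sizes, CPU speeds, and link delay are toy values, and energy, channel contention, and the MRL/seq2seq/NSGA-II machinery are omitted.

```python
# Minimal DAG evaluation: compute per-task finish times under a binary offloading plan.
workload = {"t1": 4.0, "t2": 2.0, "t3": 3.0, "t4": 1.0}        # task sizes in Mcycles (toy)
preds = {"t1": [], "t2": ["t1"], "t3": ["t1"], "t4": ["t2", "t3"]}
order = ["t1", "t2", "t3", "t4"]                                 # a topological order

def finish_times(offload, f_local=1.0, f_edge=4.0, tx_delay=0.5):
    done = {}
    for t in order:
        ready = max((done[p] for p in preds[t]), default=0.0)    # wait for predecessors
        if offload[t]:
            done[t] = ready + tx_delay + workload[t] / f_edge    # upload, then run at the edge
        else:
            done[t] = ready + workload[t] / f_local              # run locally
    return done

plan = {"t1": 0, "t2": 1, "t3": 1, "t4": 0}
print(finish_times(plan))       # the makespan is the finish time of the last task, t4
```

A scheduler of the kind described above would score many such plans (here on both delay and energy) and learn which decisions to propose for unseen DAGs.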
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, that provides real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). The system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
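A minimal numerical sketch (not the device model) of why a thermo-optic Mach-Zehnder interferometer can act as a tunable weight in such photonic neural network layers: two ideal 50:50 couplers around a phase shifter split the input power between the two outputs as sin²(θ/2) and cos²(θ/2).

```python
# Minimal MZI transfer-matrix demo: coupler -> tunable phase -> coupler.
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)           # ideal 50:50 directional coupler

def mzi(theta):
    phase = np.diag([np.exp(1j * theta), 1.0])            # thermo-optic phase in one arm
    return BS @ phase @ BS

for theta in (0.0, np.pi / 2, np.pi):
    out = np.abs(mzi(theta) @ np.array([1.0, 0.0])) ** 2   # output powers for one input port
    print(f"theta={theta:.2f}  P_out={out.round(3)}")      # splits as [sin^2, cos^2](theta/2)
```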
To address the increasing demand for massive data storage and processing, brain-inspired neuromorphic computing systems based on artificial synaptic devices have been actively developed in recent years. Among the various materials investigated for the fabrication of synaptic devices, silicon carbide (SiC) has emerged as a preferred choice due to its high electron mobility, superior thermal conductivity, and excellent thermal stability, making it a promising candidate for neuromorphic applications in harsh environments. In this review, recent progress in SiC-based synaptic devices is summarized. First, the categories, working mechanisms, and structural designs of these devices are discussed in depth. Subsequently, several application scenarios for SiC-based synaptic devices are presented. Finally, a few perspectives and directions for their future development are outlined.
The rapid advances in artificial intelligence and big data have transformed the computing resource demands for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task because of the hugely distributed and heterogeneous nature of the cloud. Moreover, the cloud network needs to provide autonomic resource management and deliver services that comply with Quality-of-Service (QoS) requirements without violating Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources by estimating the workload to be policed by the cloud environment. CBBM-WARMS first adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then uses fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for Virtual Machine (VM) deployment, which contributes to the provisioning of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework that provides robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle the intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and the renewable energy utilization. Simulation results show the efficiency and superiority of the proposed framework.
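A minimal sketch of the 0/1 knapsack dynamic program that such a decoupled subproblem reduces to; the mapping of items to offloading decisions (value = energy saved by offloading a task, weight = edge resource it consumes) and the toy numbers are assumptions for illustration, not the paper's formulation.

```python
# Minimal 0/1 knapsack DP: pick items maximizing total value within a capacity budget.
def knapsack(values, weights, capacity):
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):    # iterate backwards so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# e.g., value = energy saved by offloading a task, weight = edge CPU budget it needs
print(knapsack(values=[6, 10, 12], weights=[1, 2, 3], capacity=5))   # -> 22
```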
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as in leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones on challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
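As an example of the kind of nontrivial decision mentioned above, an input-function space for neural operator learning can be defined by sampling smooth random functions; the sketch below uses a truncated sine expansion with decaying mode variances (a Gaussian-random-field-like prior), with the length scale and truncation count as illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of sampling smooth random input functions for operator learning.
import numpy as np

def sample_input_function(x, n_modes=8, length_scale=0.2, rng=None):
    """Draw f(x) = sum_k a_k sin(k*pi*x) with mode variance decaying in k."""
    rng = rng if rng is not None else np.random.default_rng()
    k = np.arange(1, n_modes + 1)
    std = np.exp(-0.5 * (length_scale * np.pi * k) ** 2)   # larger scale -> smoother functions
    a = rng.normal(0.0, std)                                # random mode amplitudes
    return np.sin(np.outer(x, k * np.pi)) @ a

x = np.linspace(0.0, 1.0, 101)
f = sample_input_function(x, rng=np.random.default_rng(0))
print(f.shape, float(f.max()))
```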
Neuromorphic devices have shown great potential for simulating the function of biological neurons due to their efficient parallel information processing and low energy consumption. MXene-Ti₃C₂Tₓ, an emerging two-dimensional material, stands out as an ideal candidate for fabricating neuromorphic devices: its exceptional electrical performance and robust mechanical properties make it well suited for this purpose. This review aims to highlight the advantages and properties of MXene-Ti₃C₂Tₓ in neuromorphic devices and to promote its further development. First, we categorize several core physical mechanisms present in MXene-Ti₃C₂Tₓ neuromorphic devices and summarize in detail the reasons for their formation. This work then systematically summarizes and classifies advanced techniques along the three main optimization pathways for MXene-Ti₃C₂Tₓ: doping engineering, interface engineering, and structural engineering. Significantly, this work highlights innovative applications of MXene-Ti₃C₂Tₓ neuromorphic devices in cutting-edge computing paradigms, particularly near-sensor computing and in-sensor computing. Finally, this review compiles a table that integrates nearly all research results involving MXene-Ti₃C₂Tₓ neuromorphic devices and discusses the challenges, development prospects, and feasibility of MXene-Ti₃C₂Tₓ-based neuromorphic devices in practical applications, aiming to lay a solid theoretical foundation and provide technical support for further exploration and application of MXene-Ti₃C₂Tₓ in the field of neuromorphic devices.
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It also evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and provides guidance for SDN-enabled computing environments in addressing upcoming management opportunities.
The number of satellites, especially those operating in Low-Earth Orbit (LEO), has exploded in recent years. Additionally, the burgeoning development of Artificial Intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared with terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous survey papers have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges faced, potential directions for future OEC research are proposed.
The emergence of different computing methods, such as cloud-, fog-, and edge-based Internet of Things (IoT) systems, has provided the opportunity to develop intelligent systems for disease detection. Compared with other machine learning models, deep learning models have gained more attention from the research community, as they show better results with large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on IoT deep learning architectures suitable for disease detection. It also identifies the different factors that require researchers' attention in order to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Recurrent neural networks (RNNs) have proven indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expense and slow convergence, which impede their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review explains RC's underlying principles and fabrication processes and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge material developments, giving readers a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
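A minimal echo state network sketch of the core RC idea this review builds on: a fixed, random, untrained reservoir provides rich temporal states, and only a linear readout is trained. The reservoir size, spectral radius, and the next-sample prediction task are illustrative choices, not tied to any particular memristive implementation.

```python
# Minimal echo state network: fixed random reservoir dynamics + trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1.0, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1 (echo state property)

def reservoir_states(u):
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)   # fixed, untrained recurrent update
        states.append(x.copy())
    return np.array(states)

# train only the linear readout, e.g. to predict the next input sample
u = np.sin(np.linspace(0, 20, 400))
X, y = reservoir_states(u)[:-1], u[1:]
w_out = np.linalg.lstsq(X, y, rcond=None)[0]
print("readout trained, MSE:", float(np.mean((X @ w_out - y) ** 2)))
```

In the memristive-hardware setting surveyed here, the reservoir dynamics would be supplied by device physics rather than the tanh update, with only the readout learned.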