Memristors, as memristive devices, have received a great deal of interest since being fabricated by HP Labs. The forgetting effect, which significantly influences memristor performance, has to be taken into account when memristors are employed. Building a model that expresses the forgetting effect well is important for application research, given the memristor's promising prospects in brain-inspired computing. Several models have been proposed to represent the forgetting effect, but they do not work well. In this paper, we present a novel window function, which performs well in a drift model. We analyze the deficiencies of the previous drift diffusion models for the forgetting effect and propose an improved model. Moreover, the improved model is exploited as a synapse model in spiking neural networks to recognize digit images. Simulation results show that the improved model overcomes the defects of the previous models and can be used as a synapse model in brain-inspired computing due to its synaptic characteristics. The results also indicate that the improved model expresses the forgetting effect better when it is employed in spiking neural networks, which means that more appropriate evaluations can be obtained in applications.
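As an illustration of the kind of model this abstract discusses, the sketch below combines a standard HP-style drift equation with the Joglekar window function and a simple exponential relaxation term that mimics the forgetting effect (the state decays toward a rest value when no stimulus is applied). The window function, parameters, and decay form are generic textbook choices, not the ones proposed in the paper.

```python
import numpy as np

def joglekar_window(x, p=2):
    """Joglekar-style window f(x) = 1 - (2x - 1)^(2p); keeps the state variable in [0, 1]."""
    return 1.0 - (2.0 * x - 1.0) ** (2 * p)

def simulate_memristor(current_of_t, dt=1e-4, steps=20000,
                       mu_v=1e-14, d=1e-8, r_on=100.0, r_off=16e3,
                       tau_forget=5.0, x_rest=0.1, x0=0.1):
    """Drift model with a forgetting term:
        dx/dt = (mu_v * R_on / D^2) * i(t) * f(x) - (x - x_rest) / tau_forget
    The relaxation term makes the state decay toward x_rest when no stimulus
    is applied, which is one simple way to mimic the forgetting effect."""
    x = x0
    states = np.empty(steps)
    for k in range(steps):
        i = current_of_t(k * dt)
        dx = (mu_v * r_on / d**2) * i * joglekar_window(x) - (x - x_rest) / tau_forget
        x = float(np.clip(x + dx * dt, 0.0, 1.0))
        states[k] = x
    memristance = r_on * states + r_off * (1.0 - states)
    return states, memristance

# A 0.5 s train of write pulses followed by a quiet interval shows the decay.
pulses = lambda t: 1e-4 if (t < 0.5 and int(t / 0.01) % 2 == 0) else 0.0
states, memristance = simulate_memristor(pulses)
```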
By definition, bionics is the application of biological mechanisms found in nature to artificial systems in order to achieve specific functional goals. Successful examples range from Velcro, the touch fastener inspired by the hooks of burrs, to self-cleaning materials inspired by the surface of the lotus leaf. Recently, a new trend in bionics, Brain-Inspired Computing (BIC), has captured increasing attention. Instead of learning from burrs and leaves, BIC aims to understand the brain and then utilize its operating principles to achieve powerful and efficient information processing.
Brain-inspired computing is a new technology that draws on the principles of brain science and is oriented toward the efficient development of artificial general intelligence (AGI); a brain-inspired computing system is a hierarchical system composed of neuromorphic chips, basic software and hardware, and algorithms/applications that embody this technology. While such systems are developing rapidly, they face various challenges and opportunities brought by interdisciplinary research, including the issue of software and hardware fragmentation. This paper analyzes the status quo of brain-inspired computing systems. Enlightened by some design principles and methodologies of general-purpose computers, it proposes constructing "general-purpose" brain-inspired computing systems. A general-purpose brain-inspired computing system refers to a brain-inspired computing hierarchy constructed on the design philosophy of decoupling software and hardware, which can flexibly support various brain-inspired computing applications and neuromorphic chips with different architectures. Further, this paper introduces our recent work in these aspects, including ANN (artificial neural network)/SNN (spiking neural network) development tools, hardware-agnostic compilation infrastructure, and a chip micro-architecture with high programming flexibility and high performance. These studies show that the "general-purpose" system can remarkably improve the efficiency of application development and enhance the productivity of basic software, thereby helping to accelerate the advancement of various brain-inspired algorithms and applications. We believe that this is the key to collaborative research and development, and to the evolution of applications, basic software, and chips in this field, and that it is conducive to building a favorable software/hardware ecosystem for brain-inspired computing.
Brain-inspired computing is a popular research area with the potential to advance our understanding of brain function, artificial intelligence, and next-generation computing machinery. Often referred to as "neuromorphic", these systems and algorithms aim to harness mechanisms present in brains to make step changes in performance over regular von Neumann-based approaches [1].
Recently, memristors have garnered widespread attention as neuromorphic devices that can simulate synaptic behavior, holding promise for future commercial applications in neuromorphic computing. In this paper, we present a memristor with an Au/Bi_(3.2)La_(0.8)Ti_(3)O_(12) (BLTO)/ITO structure, demonstrating a switching ratio of nearly 10^(3) over a duration of 10^(4) s. It successfully simulates a range of synaptic behaviors, including long-term potentiation and depression, paired-pulse facilitation, spike-timing-dependent plasticity, spike-rate-dependent plasticity, etc. Interestingly, we also employ it to simulate the pain threshold, sensitization, and desensitization behaviors of a pain-perceptual nociceptor (PPN). Lastly, by introducing memristor differential pairs (1T1R-1T1R), we train a neural network, effectively simplifying the learning process, reducing training time, and achieving a handwritten digit recognition accuracy of up to 97.19%. Overall, the proposed device holds immense potential in the field of neuromorphic computing, offering possibilities for the next generation of high-performance neuromorphic computing chips.
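To make the plasticity terms concrete, the minimal sketch below implements the standard exponential STDP kernel and the signed weight read out from a differential device pair. The amplitudes, time constants, and conductance values are illustrative placeholders, not values fitted to the BLTO device.

```python
import numpy as np

def stdp_delta_w(dt_ms, a_plus=0.6, a_minus=0.3, tau_plus=20.0, tau_minus=20.0):
    """Exponential STDP kernel: potentiate when the pre-spike precedes the
    post-spike (dt > 0), depress otherwise. Parameters are illustrative only."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0.0,
                    a_plus * np.exp(-dt_ms / tau_plus),
                    -a_minus * np.exp(dt_ms / tau_minus))

def differential_pair_weight(g_plus, g_minus):
    """1T1R-1T1R differential pair: the effective signed synaptic weight is the
    difference between the conductances of the two devices."""
    return g_plus - g_minus

# Pre-before-post by 10 ms strengthens the synapse; post-before-pre weakens it.
print(stdp_delta_w([10.0, -10.0]))
print(differential_pair_weight(g_plus=2.5e-6, g_minus=1.0e-6))
```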
Brain-inspired computing refers to computational models, methods, and systems that are mainly inspired by the processing mode or structure of the brain. A recent study proposed the concept of "neuromorphic completeness" and the corresponding system hierarchy, which helps determine the capability boundary of a brain-inspired computing system and judge whether the hardware and software of brain-inspired computing are compatible with each other. As a position paper, this article analyzes the design characteristics of existing brain-inspired chips and the current so-called "general purpose" application development frameworks for brain-inspired computing, and introduces the background and the potential of this proposal. Further, some key features of this concept are presented through comparison with Turing completeness and approximate computation, and through analysis of the relationship with "general-purpose" brain-inspired computing systems (that is, computing systems that can support all computable applications). In the end, a promising technical approach to realizing such computing systems is introduced, along with the ongoing research and the work it builds on. We believe that this work is conducive to the design of extensible, neuromorphic-complete hardware primitives and the corresponding chips. On this basis, it is expected that "general purpose" brain-inspired computing systems can be realized gradually, taking both functional completeness and application efficiency into account.
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
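The prioritized experience replay mentioned above is commonly implemented as proportional prioritization over TD errors. The sketch below shows a minimal such buffer; it is a generic formulation under assumed hyperparameters, not the paper's exact implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay: transitions are sampled with
    probability proportional to (|TD error| + eps)^alpha."""
    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities, self.pos = [], np.zeros(capacity), 0

    def add(self, transition, td_error):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = self.priorities[:len(self.data)]
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update(self, idx, td_errors):
        self.priorities[idx] = (np.abs(td_errors) + self.eps) ** self.alpha
```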
Neuromorphic computing has the potential to overcome the limitations of traditional silicon technology in machine learning tasks. Recent advancements in large crossbar arrays and silicon-based asynchronous spiking neural networks have led to promising neuromorphic systems. However, developing compact parallel computing technology for integrating artificial neural networks into traditional hardware remains a challenge. Organic computational materials offer affordable, biocompatible neuromorphic devices with exceptional adjustability and energy-efficient switching. Here, this review investigates the advancements made in the development of organic neuromorphic devices. It explores resistive switching mechanisms such as interface-regulated filament growth, molecular-electronic dynamics, nanowire-confined filament growth, and vacancy-assisted ion migration, while proposing methodologies to enhance state retention and conductance adjustment. It examines the challenges faced in implementing low-power neuromorphic computing, e.g., reducing device size and improving switching time. Finally, it analyses the potential of these materials in adjustable, flexible, and low-power-consumption applications, namely biohybrid spiking circuits interacting with biological systems, systems that respond to specific events, robotics, intelligent agents, neuromorphic computing, neuromorphic bioelectronics, neuroscience, and other applications, and discusses the prospects of this technology.
Optoelectronic memristors are generating growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents when 4-bit input electrical/optical signals are adjusted. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, and an accuracy of 98.7% is achieved. The optoelectronic memristor thus shows potential for developing multi-mode RC systems.
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities involved. In actual application scenarios, however, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
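NSGA-II's core step is fast non-dominated sorting of candidate solutions. The sketch below applies it to (delay, energy) objective pairs as an illustration of the joint optimization described above; the dominance bookkeeping is the textbook version, not the paper's code, and the example objective values are made up.

```python
def non_dominated_sort(objectives):
    """Fast non-dominated sorting used in NSGA-II. `objectives` is a list of
    (delay, energy) tuples to be minimized; returns a list of Pareto fronts
    (each a list of solution indices)."""
    n = len(objectives)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Example: three candidate offloading plans scored by (delay in ms, energy in J).
print(non_dominated_sort([(12.0, 0.8), (9.0, 1.1), (15.0, 1.5)]))
```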
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
To address the increasing demand for massive data storage and processing, brain-inspired neuromorphic computing systems based on artificial synaptic devices have been actively developed in recent years. Among the various materials investigated for the fabrication of synaptic devices, silicon carbide (SiC) has emerged as a preferred choice due to its high electron mobility, superior thermal conductivity, and excellent thermal stability, which make it a promising candidate for neuromorphic applications in harsh environments. In this review, recent progress in SiC-based synaptic devices is summarized. First, an in-depth discussion is conducted regarding the categories, working mechanisms, and structural designs of these devices. Subsequently, several application scenarios for SiC-based synaptic devices are presented. Finally, a few perspectives and directions for their future development are outlined.
Rapid advances in artificial intelligence and big data have transformed the dynamic demands placed on computing resources for executing specific tasks in the cloud environment. Achieving autonomic resource management is a herculean task owing to the hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting the Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources under such dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources through estimation of the workload that needs to be policed by the cloud environment. CBBM-WARMS first adopts an adaptive density peak clustering algorithm to cluster cloud workloads. It then utilizes fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for Virtual Machine (VM) deployment, which contributes to the provisioning of optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation of the proposed CBBM-WARMS confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
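The workload clustering step mentioned above builds on density peak clustering. The sketch below shows the basic Rodriguez-Laio procedure (local density, distance to the nearest denser point, center selection by their product); the adaptive cutoff and automatic center selection of the paper's variant are not reproduced, and the cutoff, cluster count, and sample data are assumptions.

```python
import numpy as np

def density_peak_clustering(points, d_c, n_clusters):
    """Minimal density-peak clustering: rho_i counts neighbors within cutoff
    d_c, delta_i is the distance to the nearest point of higher density, and
    points scoring high on rho*delta are taken as cluster centers; remaining
    points follow their nearest denser neighbor."""
    x = np.asarray(points, dtype=float)
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    rho = (dist < d_c).sum(axis=1) - 1            # local density (excluding self)
    order = np.argsort(-rho)                      # indices from densest to sparsest
    delta = np.full(len(x), dist.max())
    nearest_denser = np.full(len(x), -1)
    for rank in range(1, len(x)):
        i, denser = order[rank], order[:rank]
        j = denser[np.argmin(dist[i, denser])]
        delta[i], nearest_denser[i] = dist[i, j], j
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = np.full(len(x), -1)
    labels[centers] = np.arange(n_clusters)
    for i in order:                               # assign labels in decreasing density
        if labels[i] < 0 and nearest_denser[i] >= 0:
            labels[i] = labels[nearest_denser[i]]
    return labels

# Two well-separated groups of workload feature vectors.
print(density_peak_clustering([[0, 0], [0.1, 0.2], [5, 5], [5.1, 4.9]], d_c=1.0, n_clusters=2))
```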
Recently, one of the main challenges facing the smart grid has been insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy-harvesting-based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sample average approximation for edge computing servers to derive the optimal charging/discharging strategies, number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of our proposed framework.
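The knapsack structure mentioned above can be solved with the classic 0/1 dynamic program. The sketch below shows it with an illustrative (assumed) mapping: values as the energy saved by offloading each task, weights as the resources each offloaded task consumes, and capacity as the edge server budget; the paper's exact formulation and its two solution algorithms are not reproduced.

```python
def knapsack_max_value(values, weights, capacity):
    """Classic 0/1 knapsack dynamic program over an integer capacity."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # iterate backwards so each item is used once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Example: choose which tasks to offload under a resource budget of 10 units.
print(knapsack_max_value(values=[6, 10, 12], weights=[3, 4, 6], capacity=10))
```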
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
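As one example of the kind of "nontrivial decision" mentioned above, the input function space for neural operator learning is often taken to be a Gaussian random field with an RBF covariance kernel. The sketch below samples such functions; the kernel, length scale, and grid are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def sample_grf_inputs(n_points=128, n_samples=5, length_scale=0.2, seed=0):
    """Draw input functions from a mean-zero Gaussian random field on [0, 1]
    with an RBF covariance kernel, a common choice of input function space
    for neural-operator benchmarks."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)
    cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale**2)
    # A small diagonal jitter keeps the Cholesky factorization numerically stable.
    chol = np.linalg.cholesky(cov + 1e-6 * np.eye(n_points))
    return x, (chol @ rng.standard_normal((n_points, n_samples))).T

grid, input_functions = sample_grf_inputs()   # input_functions[k] is one sampled function on `grid`
```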
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software-Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and as essential guidance for SDN-enabled computing environments in addressing upcoming management opportunities.
The number of satellites, especially those operating in Low-Earth Orbit (LEO), has been exploding in recent years. Additionally, the burgeoning development of Artificial Intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared to terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous survey papers have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges they face, potential directions for future OEC research are proposed.
The emergence of different computing methods, such as cloud-, fog-, and edge-based Internet of Things (IoT) systems, has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with large volumes of data compared to shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the different factors that require the attention of researchers to develop better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning using hybrid algorithms.
Recurrent neural networks (RNNs) have proven to be indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expense and slow convergence, which impinge upon their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review offers a comprehensive explanation of RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC functionality. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge materials science developments, providing readers with a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
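For readers unfamiliar with RC's underlying principle, the sketch below shows a minimal software echo-state network: a fixed random reservoir whose states are read out with ridge regression. The memristive systems surveyed here realize the reservoir physically in devices; the network size, spectral radius, and toy task below are illustrative assumptions.

```python
import numpy as np

def run_reservoir(inputs, n_res=100, spectral_radius=0.9, seed=0):
    """Minimal echo-state reservoir: a fixed random recurrent network whose
    states are collected for a later linear readout."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, (n_res, inputs.shape[1]))
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))   # enforce the echo-state property
    states = np.zeros((len(inputs), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(inputs):
        x = np.tanh(w_in @ u + w @ x)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained (ridge regression), which is the
    cheap step that makes RC attractive for edge hardware."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

# Toy task: predict the next value of a sine wave from its current value.
u = np.sin(np.linspace(0, 20, 400))[:, None]
S = run_reservoir(u[:-1])
w_out = train_readout(S, u[1:])
prediction = S @ w_out
```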