AI development has brought great success in advancing the information age. At the same time, the large-scale artificial neural networks used to build AI systems are hungry for computing power, which conventional computing hardware can barely supply. In the post-Moore era, the growth in computing power delivered by shrinking CMOS feature sizes in very-large-scale integrated circuits (VLSI) struggles to keep up with the rising demand from AI workloads. To address this issue, technical approaches such as neuromorphic computing are attracting great attention because they break with the von Neumann architecture and execute AI algorithms with far greater parallelism and energy efficiency. Inspired by the architecture of the human neural network, neuromorphic computing hardware is built from novel artificial neurons constructed with new materials or devices. Although deploying a training process in a neuromorphic architecture such as a spiking neural network (SNN) remains relatively difficult, development in this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices that support neuromorphic computing, and then surveys recent progress on in-sensor computing vision chips, all of which will promote the development of AI.
The utilization of processing capabilities within the detector holds significant promise for addressing energy-consumption and latency challenges, especially in dynamic motion recognition tasks, where the generation of extensive information and the need for frame-by-frame analysis necessitate substantial data transfers. Herein, we present a novel approach to dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multi-frame integration by a photodetector. Our approach introduces a retinomorphic MoS₂ photodetector device for motion detection and analysis. The device generates informative final states that nonlinearly embed both past and present frames. Subsequent multiply-accumulate (MAC) calculations are then efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved an impressive recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also enables energy-efficient in-sensor computing.
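A minimal sketch of the readout stage described in this abstract, assuming the device array has already produced per-pixel final states that nonlinearly encode the motion history; classification then reduces to a single multiply-accumulate pass. All names, shapes, and values here are illustrative, not the authors' implementation.

```python
import numpy as np

# Hypothetical final states read out from an N-pixel retinomorphic array
# after multi-frame integration (values are illustrative placeholders).
final_states = np.random.rand(64)          # N = 64 sensor states

# Trained readout weights for C motion classes; in hardware these would
# be the programmed values used for the MAC operation.
num_classes = 4
W = np.random.randn(num_classes, 64)
b = np.zeros(num_classes)

# The classifier is one multiply-accumulate pass: y = W @ s + b.
scores = W @ final_states + b
predicted_class = int(np.argmax(scores))
print(f"predicted motion class: {predicted_class}")
```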
Recurrent neural networks (RNNs) have proven indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expense and slow convergence, which impede their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review explains RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge developments in materials science, giving readers a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
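As a minimal software illustration of the RC principle this review builds on: a fixed random reservoir is driven by the input sequence and only a linear readout is trained (here by ridge regression), which is why RC avoids the expensive backpropagation-through-time of a full RNN. The reservoir size, spectral radius, and toy task below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Fixed random reservoir: only the readout below is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # scale spectral radius below 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Ridge-regression readout: W_out = (X^T X + lam*I)^(-1) X^T y.
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```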
The increasing popularity of quantum computing has driven a considerable rise in demand for cloud quantum computing in recent years. This rapid surge in demand has led to a scarcity of cloud-based quantum computing resources. To meet the needs of a growing number of researchers, it is imperative to provide efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovative concepts: (1) the integration of virtualization technology into quantum computing to enhance quantum cloud throughput; (2) an asynchronous circuit-execution methodology to improve quantum computing flexibility; and (3) a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. The concepts have been validated on a self-built simulated quantum cloud platform.
Organic electrochemical transistor (OECT) devices show great promise for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with tunable relaxation time (τ), achieved by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behavior can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in an RC system for intelligent driving-trajectory prediction, improving the prediction accuracy from below 69% to 99%. These results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
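A relaxation time τ like the one reported above is typically extracted by fitting the post-stimulus current decay to an exponential. A hedged sketch of such a fit follows, assuming a simple single-exponential decay, which real OECT relaxation only approximates; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, i0, tau, i_inf):
    """Single-exponential relaxation: I(t) = I_inf + (I0 - I_inf) * exp(-t/tau)."""
    return i_inf + (i0 - i_inf) * np.exp(-t / tau)

# Synthetic post-pulse current transient (time in ms, arbitrary current units).
t = np.linspace(0, 2000, 400)
i_meas = decay(t, 1.0, 541.0, 0.1) + np.random.normal(0, 0.01, t.size)

popt, _ = curve_fit(decay, t, i_meas, p0=(1.0, 100.0, 0.0))
print(f"extracted tau ≈ {popt[1]:.0f} ms")   # should recover roughly 541 ms
```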
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects system efficiency: executing lower-priority tasks before higher-priority ones can undermine the reliability and stability of the system. An efficient strategy for joint computation offloading and task scheduling is therefore required. In this paper, we introduce a multi-objective, enhanced version of the Cheetah Optimizer (CO), named MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. Because the mathematical model of CO needs improvement in computation time and convergence speed, MoECO increases the search capability of agents by steering the search strategy with a leader's location. The adaptive step-length operator is adjusted to diversify solutions, improving the exploration phase (the global search) and preventing the algorithm from getting trapped in local optima. Moreover, the interaction factor in the exploitation phase is adjusted based on the location of the prey rather than an adjacent cheetah, increasing the agents' exploitation (local search) capability. Furthermore, MoECO employs a Pareto-optimal front to minimize the designated objectives simultaneously. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions along a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared with baseline methods.
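MoECO's multi-objective selection rests on Pareto dominance between (energy, delay) pairs. Below is a minimal, library-free sketch of extracting the non-dominated front from a set of candidate schedules; the objective values are invented for illustration and this is not the authors' code.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of (energy, delay) tuples."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Candidate offloading schedules as (energy consumption, communication delay).
candidates = [(3.2, 1.1), (2.8, 1.5), (3.0, 1.0), (2.8, 1.4), (3.5, 0.9)]
print(pareto_front(candidates))   # the mutually non-dominated schedules
```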
In recent years, fog computing has become an important environment for supporting the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks in concert with cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes. Given the large volume of data and user requests, reaching the optimal solution to the task scheduling problem is challenging, particularly with respect to cost and energy efficiency. In this paper, we develop strategies to save energy across nodes in fog computing while users execute tasks through least-cost paths. Task scheduling uses a modified Artificial Ecosystem Optimization (AEO) combined with swarm operators from the Salp Swarm Algorithm (SSA), so that the two competitively strengthen the exploitation phase of the optimal search process. The proposed strategy, the Enhanced Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), seeks the most suitable solution for the multi-objective task scheduling problem that combines cost and energy. A knapsack formulation is also added to improve both cost and energy in the iFogSim implementation. The proposed strategy was compared with other strategies in terms of time, cost, energy, and productivity. Experimental results showed that it improves energy consumption, cost, and time over other algorithms, reducing average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can form solid solutions or ordered structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of research progress on MXenes in the neuromorphic sensing domain and discusses the critical challenges that remain. Key bottlenecks, such as insufficient long-term stability under environmental exposure, high cost, scalability limits in large-scale production, and mechanical mismatch in wearable integration, hinder practical deployment. Unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion also demand urgent attention. The review offers insights into future research directions to deepen the fundamental understanding of MXene properties and to promote further integration into neuromorphic computing applications through convergence with other emerging technologies.
The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors show great potential for emulating artificial synapses in high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structures, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, covering single devices, device arrays, and integration. The challenges and future prospects of flexible memristors for neuromorphic computing are discussed in depth, paving the way for wearable smart electronics and for applications in large-scale neuromorphic computing and high-order intelligent robotics.
High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these challenges demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient, adaptable brain-inspired electronics.
Neuromorphic devices have garnered significant attention as potential building blocks for energy-efficient hardware owing to their capacity to emulate the computational efficiency of the brain. In this regard, the reservoir computing (RC) framework, which leverages straightforward training and efficient temporal signal processing, has emerged as a promising scheme. While various physical reservoir devices, including ferroelectric, optoelectronic, and memristor-based systems, have been demonstrated, many still face challenges with compatibility with mainstream complementary metal-oxide-semiconductor (CMOS) integration processes. This study introduces a silicon-based Schottky-barrier metal-oxide-semiconductor field-effect transistor (SB-MOSFET) fabricated with a low thermal budget and compatible with back-end-of-line (BEOL) processing. The device exhibits short-term memory characteristics, enabled by the modulation of Schottky barriers and charge trapping. Utilizing these characteristics, an RC system for temporal data processing was constructed, and its performance was validated on a 5×4 digit classification task, achieving an accuracy exceeding 98% after 50 training epochs. Furthermore, the system successfully processed temporal signals in waveform classification and prediction tasks using time-division multiplexing. Overall, the SB-MOSFET's high compatibility with CMOS technology provides substantial advantages for large-scale integration, enabling energy-efficient reservoir computing hardware.
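Time-division multiplexing of the kind mentioned above is commonly realized in single-device physical reservoirs by masking: each input sample is stretched into several shorter sub-pulses with random weights, so one device yields many "virtual" reservoir nodes. Below is a schematic software analogue; the mask length and the leaky device model are assumptions for illustration, not this paper's measured device.

```python
import numpy as np

rng = np.random.default_rng(1)
n_virtual = 20                      # virtual nodes per input sample
mask = rng.choice([-1.0, 1.0], n_virtual)

def device_response(stimulus, state, decay=0.8):
    """Toy short-term-memory device: leaky, nonlinear accumulation."""
    return np.tanh(stimulus + decay * state)

def reservoir_states(u_seq):
    """Time-multiplex each sample through the mask; collect virtual-node states."""
    state, rows = 0.0, []
    for u in u_seq:
        row = []
        for m in mask:              # n_virtual sub-pulses per sample
            state = device_response(m * u, state)
            row.append(state)
        rows.append(row)
    return np.array(rows)           # shape: (len(u_seq), n_virtual)

X = reservoir_states(np.sin(np.linspace(0, 8 * np.pi, 200)))
print(X.shape)                      # (200, 20) states, ready for a linear readout
```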
The rapid development of the Internet of Things (IoT) urgently needs miniaturized edge computing devices with high efficiency and low power consumption. In-sensor computing has emerged as a promising technology for in-situ data processing within the sensor array. Here, we report an optoelectronic array for in-sensor computing that integrates photodiodes (PDs) with resistive random-access memories (RRAMs). The PD-RRAM unit cell exhibits reconfigurable optoelectronic output and photoresponsivity when its RRAM is programmed into different resistance states. Furthermore, a 3×3 PD-RRAM array is fabricated to demonstrate optical image recognition, achieving a universal architecture with ultralow latency and low power consumption. This study highlights the great potential of the PD-RRAM optoelectronic array as an energy-efficient in-sensor computing primitive for future IoT applications.
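In-sensor computing primitives of this kind usually follow the crossbar reading of Ohm's and Kirchhoff's laws: each unit cell contributes a photocurrent proportional to incident power times its programmed responsivity, and summing column currents performs a multiply-accumulate. A numerical sketch for a 3×3 array follows; the values and the linear-response assumption are illustrative, not the measured device.

```python
import numpy as np

# Programmed effective responsivities of the 3x3 PD-RRAM array (A/W);
# each value is set by writing its RRAM into a chosen resistance state.
R = np.array([[0.8, 0.1, 0.5],
              [0.2, 0.9, 0.4],
              [0.7, 0.3, 0.6]])

# Incident optical power on each input row (W), i.e. the image vector.
P = np.array([1e-6, 2e-6, 0.5e-6])

# Kirchhoff summation along each column performs the MAC: I_j = sum_i R[i,j]*P[i].
I_out = R.T @ P
print(I_out)   # three column currents = one analog matrix-vector product
```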
Advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a large volume of computation-intensive workloads and require corresponding computation and communication resources. Multi-access edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, alleviating the pressure on devices. Ultra-dense networks (UDN) can provide ample spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support diverse applications across communication scenarios. This paper therefore integrates ultra-dense network slicing with MEC and introduces a hybrid computation offloading strategy to satisfy the varied quality-of-service (QoS) requirements of edge devices. To dynamically allocate limited resources, the problem is formulated as multi-agent distributed deep reinforcement learning (DRL), yielding a low-overhead computation offloading strategy and real-time resource allocation decisions. Federated learning is added to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
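The federated step implied above aggregates only model parameters, so raw observations never leave the devices: each edge agent trains its DRL policy locally, and a coordinator averages the updates. The FedAvg-style weighting by local sample counts in this sketch is an assumption; the abstract does not specify the aggregation rule.

```python
import numpy as np

def fed_avg(agent_params, sample_counts):
    """FedAvg: weighted average of per-agent parameter vectors."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, agent_params))

# Three agents' flattened policy-network parameters after a local DRL round.
params = [np.random.randn(1000) for _ in range(3)]
counts = [120, 80, 200]                    # local experience-buffer sizes

global_params = fed_avg(params, counts)    # broadcast back to all agents
print(global_params.shape)
```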
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to balance real-time diagnostic performance against computational cost. To this end, a total of 3234 high-resolution images (2400×1080) were collected of three major rice diseases, Rice Blast, Bacterial Blight, and Brown Spot, frequently found in actual rice cultivation fields. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, yielding both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experiments showed that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P had a smaller model size of 4.03 MB but lower performance, with an mAP@0.5 of 70.3%, mAP@0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Rapid developments in the Internet of Things and artificial intelligence are raising the requirements for perceiving and learning from external environments through visual systems. However, limited by the von Neumann bottleneck, the physical separation of sensing, memory, and processing units in a conventional personal-computer-based vision system consumes significant energy, incurs time latency, and adds hardware cost. By integrating computational tasks of multiple functionalities into the sensors themselves, emerging bio-inspired neuromorphic visual systems provide an opportunity to overcome these limitations. Given the demands for high speed, ultralow power, and strong adaptability, it is highly desirable to develop a neuromorphic vision system based on highly precise in-sensor computing devices, namely retinomorphic devices. We present a timely review of retinomorphic devices for visual in-sensor computing. We begin with several types of physical mechanisms of photoelectric sensors from which artificial vision can be constructed. The potential applications of retinomorphic hardware are then thoroughly summarized. We also highlight possible strategies for existing challenges and give a brief perspective on retinomorphic architectures for in-sensor computing.
With the development of artificial intelligence and the Internet of Things, the number of sensory nodes is growing rapidly, leading to the exchange of large quantities of redundant data between sensors and computing units. In-sensor computing schemes, which integrate sensing and processing, offer a promising route past this sensing/processing bottleneck by reducing power consumption, time delay, and hardware redundancy. In this study, an in-sensor computing architecture based on a photoelectronic cell built from a wafer-scale two-dimensional MoS₂ thin film was demonstrated. The MoS₂ photodetector cell used a top-gate device structure with indium tin oxide (ITO) as the transparent gate electrode, exhibiting high air stability and a high photoresponsivity (R) of up to 555.8 A W⁻¹ at an illumination power density (P_in) of 16.0 μW cm⁻² (λ = 500 nm). A MoS₂ photodetector array with uniform photoresponsive characteristics was also achieved. Furthermore, logic gates, including an inverter, NAND, and NOR, were realized from MoS₂ photodetector cells. Such multifunctional and robust in-sensor computing is ascribed to the uniform wafer-scale MoS₂ film grown by atomic layer deposition (ALD) and to the unique device structure. Because both optical-signal detection and logic operations are achieved through area-efficient MoS₂ photodetector cells, the proposed in-sensor computing device paves the way for high-performance integrated sensing and processing systems.
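Optoelectronic logic of this kind typically reduces to summing photocurrents from the input-illuminated cells and comparing against a threshold; the sketch below shows how NAND and NOR can emerge from the same summation with different thresholds. The current levels and thresholds here are invented for illustration and are not the measured values of this device.

```python
I_ON, I_OFF = 1.0, 0.05   # illustrative photocurrents for light on/off (a.u.)

def summed_current(a, b):
    """Two detector cells illuminated by logic inputs a, b; currents add."""
    return (I_ON if a else I_OFF) + (I_ON if b else I_OFF)

def nor(a, b):   return summed_current(a, b) < 0.5   # any light flips the output
def nand(a, b):  return summed_current(a, b) < 1.5   # only both-on flips it
def inverter(a): return (I_ON if a else I_OFF) < 0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", int(nand(a, b)), "NOR:", int(nor(a, b)))
```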
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks from their priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we use the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation and reduce sub-task processing latency. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
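Prioritized experience replay, as used to enhance the Dueling-DDQN here, samples transitions with probability proportional to a TD-error-based priority and corrects the induced bias with importance-sampling weights. A compact sketch follows; the hyperparameters α and β are the usual ones from the PER literature, assumed rather than taken from this paper.

```python
import numpy as np

class PrioritizedReplay:
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:        # drop oldest when full
            self.buffer.pop(0); self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.priorities); p = p / p.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=p)
        # Importance-sampling weights correct the non-uniform sampling bias.
        w = (len(self.buffer) * p[idx]) ** (-beta)
        w /= w.max()
        return [self.buffer[i] for i in idx], idx, w

buf = PrioritizedReplay(capacity=10_000)
for _ in range(100):
    buf.add(("s", "a", 0.0, "s_next"), td_error=np.random.randn())
batch, idx, weights = buf.sample(16)
print(len(batch), weights.shape)
```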
1. Introduction
The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails deploying computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Optoelectronic memristors are attracting growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents to 4-bit input electrical/optical signals. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, achieving an accuracy of 98.7%. The optoelectronic memristor thus offers potential for developing multi-mode RC systems.
Low-earth-orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for ground users worldwide. In this paper, the joint computation offloading and resource allocation problem is formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation. In addition, an expression for the suboptimal user local CPU cycles is derived by relaxation. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility values, at a considerable time cost, compared with other algorithms.
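The convex resource-allocation subproblem mentioned above is typically solved by attaching a Lagrange multiplier to the total-CPU constraint; for an assumed delay-like objective Σ_k w_k·c_k/f_k with workloads c_k, stationarity gives f_k ∝ √(w_k·c_k), and the constraint fixes the constant. This specific objective is an illustration and not necessarily the paper's exact utility function.

```python
import numpy as np

def allocate_cpu(workloads, weights, F_total):
    """
    Minimize sum_k w_k * c_k / f_k  s.t.  sum_k f_k = F_total, f_k > 0.
    Stationarity of the Lagrangian gives f_k = sqrt(w_k * c_k / lam);
    substituting into the constraint yields the closed form below.
    """
    s = np.sqrt(np.asarray(weights) * np.asarray(workloads))
    return F_total * s / s.sum()

c = np.array([2.0, 5.0, 1.0])     # task workloads (CPU cycles, illustrative)
w = np.array([1.0, 1.0, 2.0])     # per-user delay weights
f = allocate_cpu(c, w, F_total=10.0)
print(f, f.sum())                 # allocation proportional to sqrt(w * c)
```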