Journal Articles
256,392 articles found
1. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 3, pp. 1-23.
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional hardware can barely supply. In the post-Moore era, the increase in computing power brought about by shrinking CMOS feature sizes in very-large-scale integrated circuits (VLSI) struggles to meet the growing demand of AI workloads. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break with the von Neumann architecture and execute AI algorithms with far greater parallelism and energy efficiency. Inspired by the architecture of biological neural networks, neuromorphic computing hardware is built from novel artificial neurons constructed with new materials and devices. Although deploying a training process in a neuromorphic architecture such as a spiking neural network (SNN) remains relatively difficult, development in this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, then surveys recent progress in in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
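The spiking neurons surveyed in this review are commonly introduced through leaky integrate-and-fire (LIF) dynamics. The sketch below is a minimal pure-Python illustration; the time constant, threshold, and input current are illustrative assumptions, not values from the paper:

```python
def lif_step(v, i_in, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire (LIF) neuron.

    Returns the updated membrane potential and a 0/1 spike flag.
    """
    v = v + (dt / tau) * (-v + i_in)  # leak toward rest, integrate the input
    if v >= v_th:                     # threshold crossing emits a spike
        return v_reset, 1             # hard reset after firing
    return v, 0

# Drive the neuron with a constant supra-threshold current and count spikes.
v, spikes = 0.0, 0
for _ in range(200):
    v, s = lif_step(v, i_in=1.5)
    spikes += s
```

Because the input (1.5) exceeds the threshold's fixed point, the neuron fires periodically; information is carried in spike timing rather than continuous activations, which is what makes SNN hardware so energy-frugal.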
2. Multiframe-integrated, in-sensor computing using persistent photoconductivity (cited 1 time)
Authors: Xiaoyong Jiang, Minrui Ye, Yunhai Li, Xiao Fu, Tangxin Li, Qixiao Zhao, Jinjin Wang, Tao Zhang, Jinshui Miao, Zengguang Cheng. Journal of Semiconductors (EI, CAS, CSCD), 2024, No. 9, pp. 36-41.
The utilization of processing capabilities within the detector holds significant promise for addressing energy-consumption and latency challenges, especially in dynamic motion recognition tasks, where the generation of extensive information and the need for frame-by-frame analysis necessitate substantial data transfers. Herein, we present a novel approach for dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multiframe integration by a photodetector. Our approach introduces a retinomorphic MoS_(2) photodetector device for motion detection and analysis. The device generates informative final states that nonlinearly embed both past and present frames. Subsequent multiply-accumulate (MAC) calculations are then efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved a recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also facilitates energy-efficient in-sensor computing.
Keywords: in-sensor computing; MoS_(2); photodetector; persistent photoconductivity; reservoir computing
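The MAC-based classifier stage described in the abstract follows the standard reservoir-computing recipe: the sensor/reservoir states stay fixed and only a linear readout is trained. A toy sketch with synthetic two-class states; the dimensions, class shift, and ridge regularizer are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "reservoir states": final device states for 100 samples in 2 motion
# classes, mimicking how a detector nonlinearly embeds past and present
# frames into one readable state. Dimensions and class shift are made up.
n, d = 100, 16
labels = rng.integers(0, 2, size=n)
states = rng.normal(size=(n, d)) + 1.5 * labels[:, None]

# Only the linear readout is trained (closed-form ridge regression);
# the reservoir itself stays fixed -- the key training shortcut of RC.
X = np.hstack([states, np.ones((n, 1))])   # append a bias column
y = 2.0 * labels - 1.0                     # targets in {-1, +1}
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(d + 1), X.T @ y)

pred = (X @ w > 0).astype(int)             # one MAC pass + threshold
accuracy = (pred == labels).mean()
```

The single matrix-vector product at the end is exactly the MAC step the paper implements in hardware; everything expensive (the temporal embedding) happens in the device physics.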
3. Nano device fabrication for in-memory and in-sensor reservoir computing
Authors: Yinan Lin, Xi Chen, Qianyu Zhang, Junqi You, Renjing Xu, Zhongrui Wang, Linfeng Sun. International Journal of Extreme Manufacturing, 2025, No. 1, pp. 46-71.
Recurrent neural networks (RNNs) have proven indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expense and slow convergence, which impede their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review explains RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge materials-science developments, providing readers with a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
Keywords: reservoir computing; memristive device fabrication; compute-in-memory; in-sensor computing
4. Virtual QPU: A Novel Implementation of Quantum Computing
Authors: Danyang Zheng, Jinchen Xv, Xin Zhou, Zheng Shan. Computers, Materials & Continua, 2026, No. 4, pp. 1008-1029.
The increasing popularity of quantum computing has driven a considerable rise in demand for cloud quantum computing in recent years. This rapid surge in demand for cloud-based quantum computing resources has led to scarcity. To meet the needs of a growing number of researchers, it is imperative to provide efficient and flexible access to computing resources in a cloud environment. In this paper, we propose a novel quantum computing paradigm, the Virtual QPU (VQPU), which addresses this issue and enhances quantum cloud throughput with guaranteed circuit fidelity. The proposal introduces three innovations: (1) integrating virtualization technology into quantum computing to enhance quantum cloud throughput; (2) introducing asynchronous circuit execution to improve quantum computing flexibility; and (3) developing a virtual QPU allocation scheme for quantum tasks in a cloud environment to improve circuit fidelity. These concepts have been validated on a self-built simulated quantum cloud platform.
Keywords: quantum computing; scheduling; parallel computing; computational paradigm
5. Back-gate-tuned organic electrochemical transistor with temporal dynamic modulation for reservoir computing
Authors: Qian Xu, Jie Qiu, Mengyang Liu, Dongzi Yang, Tingpan Lan, Jie Cao, Yingfen Wei, Hao Jiang, Ming Wang. Journal of Semiconductors, 2026, No. 1, pp. 118-123.
Organic electrochemical transistor (OECT) devices show great promise for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with a tunable relaxation time (τ), achieved by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ to be reconfigured from 93 to 541 ms. The tunable relaxation behavior can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in an RC system for intelligent driving-trajectory prediction, improving prediction accuracy from below 69% to 99%. These results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
Keywords: neuromorphic computing; reservoir computing; OECT; tunable dynamics; trajectory prediction
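The relaxation time τ quoted above characterizes how long a past input lingers in the device state; under a simple single-exponential assumption, tuning τ from 93 to 541 ms dramatically changes the memory window. A sketch of the two reported extremes (the unit amplitude is an illustrative assumption):

```python
import numpy as np

def relaxation(t_ms, tau_ms, i0=1.0):
    """Single-exponential current decay I(t) = I0 * exp(-t / tau)."""
    return i0 * np.exp(-t_ms / tau_ms)

t = np.linspace(0.0, 500.0, 6)           # ms after a gate pulse
fast = relaxation(t, tau_ms=93.0)        # planar-gate-dominated state
slow = relaxation(t, tau_ms=541.0)       # back-gate-tuned state

# The slower state retains far more of the stimulus after 500 ms,
# which is what makes multi-temporal-scale tasks addressable.
retention_fast, retention_slow = fast[-1], slow[-1]
```

A reservoir whose τ can be set per task effectively gets a knob for fading-memory depth, which fixed-dynamics OECTs lack.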
6. Multi-Objective Enhanced Cheetah Optimizer for Joint Optimization of Computation Offloading and Task Scheduling in Fog Computing
Authors: Ahmad Zia, Nazia Azim, Bekarystankyzy Akbayan, Khalid J. Alzahrani, Ateeq Ur Rehman, Faheem Ullah Khan, Nouf Al-Kahtani, Hend Khalid Alkahtani. Computers, Materials & Continua, 2026, No. 3, pp. 1559-1588.
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is critical to achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at a fog node reduces transmission delay but increases energy consumption, while routing the task to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency; for instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective, enhanced version of the Cheetah Optimizer (CO), named MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify solutions, improving the exploration phase (global search) and preventing the algorithm from getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, increasing the exploitation (local search) capability of agents. Furthermore, MoECO employs a multi-objective Pareto-optimal front to minimize the designated objectives simultaneously. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
Keywords: computation offloading; task scheduling; cheetah optimizer; fog computing; optimization; resource allocation; Internet of Things
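The Pareto-optimal front that MoECO returns rests on the dominance test: one (energy, delay) pair dominates another if it is no worse in both objectives and strictly better in at least one. A minimal sketch; the candidate values are made up for illustration:

```python
def dominates(a, b):
    """True if a Pareto-dominates b, minimizing every objective."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (energy, delay) pairs."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical (energy, delay) outcomes of five candidate offloading plans.
candidates = [(5, 9), (3, 7), (6, 4), (4, 6), (8, 8)]
front = pareto_front(candidates)
```

The surviving points are exactly the trade-off curve the paper reports: no front member can improve one objective without worsening the other.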
7. Energy Aware Task Scheduling of IoT Application Using a Hybrid Metaheuristic Algorithm in Cloud Computing
Authors: Ahmed Awad Mohamed, Eslam Abdelhakim Seyam, Ahmed R. Elsaeed, Laith Abualigah, Aseel Smerat, Ahmed M. AbdelMouty, Hosam E. Refaat. Computers, Materials & Continua, 2026, No. 3, pp. 1786-1803.
In recent years, fog computing has become an important environment for handling the Internet of Things. Fog computing was developed to handle large-scale big data by scheduling tasks in conjunction with cloud computing. Task scheduling is crucial for efficiently handling IoT user requests, thereby improving system performance, cost, and energy consumption across nodes. With a large volume of data and user requests, finding the optimal solution to the task-scheduling problem is challenging, particularly in terms of cost and energy efficiency. In this paper, we develop novel strategies to save energy across nodes in fog computing while users execute tasks through least-cost paths. Task scheduling is built on a modified artificial ecosystem optimization (AEO) combined with operators from the Salp Swarm Algorithm (SSA), which competitively strengthen the exploitation phase of the optimal search process. The proposed strategy, Enhanced Artificial Ecosystem Optimization Salp Swarm Algorithm (EAEOSSA), seeks the most suitable solution for a multi-objective task-scheduling problem that combines cost and energy. A knapsack formulation is also incorporated to improve both cost and energy in the iFogSim implementation. The proposed strategy was compared with other strategies in terms of time, cost, energy, and productivity. Simulation results demonstrate that the proposed algorithm reduces average cost, average energy consumption, and mean service time in most scenarios, with average reductions of up to 21.15% in cost and 25.8% in energy consumption.
Keywords: energy-efficient tasks; Internet of Things (IoT); cloud-fog computing; artificial ecosystem-based optimization; salp swarm algorithm; cloud computing
8. Two-Dimensional MXene-Based Advanced Sensors for Neuromorphic Computing Intelligent Application
Authors: Lin Lu, Bo Sun, Zheng Wang, Jialin Meng, Tianyu Wang. Nano-Micro Letters, 2026, No. 2, pp. 664-691.
As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can form solid solutions or organized structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of research progress on MXenes in the neuromorphic sensing domain and discusses the critical challenges that remain. Key bottlenecks, such as insufficient long-term stability under environmental exposure, high costs, scalability limits in large-scale production, and mechanical mismatch in wearable integration, hinder practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and to promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
Keywords: two-dimensional MXenes; sensor; neuromorphic computing; multimodal intelligent system; wearable electronics
9. Mechanical Properties Analysis of Flexible Memristors for Neuromorphic Computing
Authors: Zhenqian Zhu, Jiheng Shui, Tianyu Wang, Jialin Meng. Nano-Micro Letters, 2026, No. 1, pp. 53-79.
The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors show great potential for emulating artificial synapses in high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structures, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, covering single devices, device arrays, and integration. The challenges and future perspectives of flexible memristors for neuromorphic computing are discussed in depth, paving the way for wearable smart electronics and applications in large-scale neuromorphic computing and high-order intelligent robotics.
Keywords: flexible memristor; neuromorphic computing; mechanical property; wearable electronics
10. High-Entropy Oxide Memristors for Neuromorphic Computing: From Material Engineering to Functional Integration
Authors: Jia-Li Yang, Xin-Gui Tang, Xuan Gu, Qi-Jun Sun, Zhen-Hua Tang, Wen-Hua Li, Yan-Ping Jiang. Nano-Micro Letters, 2026, No. 2, pp. 138-169.
High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
Keywords: high-entropy oxides; memristors; neuromorphic computing; configurational entropy; resistive switching
11. A low-thermal-budget MOSFET-based reservoir computing for temporal data classification
Authors: Yanqing Li, Feixiong Wang, Heyi Huang, Yadong Zhang, Xiangpeng Liang, Shuang Liu, Jianshi Tang, Huaxiang Yin. Journal of Semiconductors, 2026, No. 1, pp. 42-48.
Neuromorphic devices have garnered significant attention as potential building blocks for energy-efficient hardware systems owing to their capacity to emulate the computational efficiency of the brain. In this regard, the reservoir computing (RC) framework, which leverages straightforward training methods and efficient temporal-signal processing, has emerged as a promising scheme. While various physical reservoir devices, including ferroelectric, optoelectronic, and memristor-based systems, have been demonstrated, many still face challenges related to compatibility with mainstream complementary metal-oxide-semiconductor (CMOS) integration processes. This study introduces a silicon-based Schottky-barrier metal-oxide-semiconductor field-effect transistor (SB-MOSFET) fabricated under a low thermal budget and compatible with back-end-of-line (BEOL) processing. The device demonstrates short-term memory characteristics, facilitated by the modulation of Schottky barriers and charge trapping. Utilizing these characteristics, an RC system for temporal data processing was constructed, and its performance was validated on a 5×4 digit classification task, achieving an accuracy exceeding 98% after 50 training epochs. Furthermore, the system successfully processed temporal signals in waveform classification and prediction tasks using time-division multiplexing. Overall, the SB-MOSFET's high compatibility with CMOS technology provides substantial advantages for large-scale integration, enabling the development of energy-efficient reservoir computing hardware.
Keywords: Schottky barrier; MOSFET; back-end-of-line integration; reservoir computing
12. Optoelectronic array of photodiodes integrated with RRAMs for energy-efficient in-sensor computing (cited 1 time)
Authors: Wen Pan, Lai Wang, Jianshi Tang, Heyi Huang, Zhibiao Hao, Changzheng Sun, Bing Xiong, Jian Wang, Yanjun Han, Hongtao Li, Lin Gan, Yi Luo. Light: Science & Applications, 2025, No. 2, pp. 430-440.
The rapid development of the Internet of Things (IoT) urgently needs miniaturized edge computing devices with high efficiency and low power consumption. In-sensor computing has emerged as a promising technology to enable in-situ data processing within the sensor array. Here, we report an optoelectronic array for in-sensor computing created by integrating photodiodes (PDs) with resistive random-access memories (RRAMs). The PD-RRAM unit cell exhibits reconfigurable optoelectronic output and photoresponsivity when the RRAMs are programmed into different resistance states. Furthermore, a 3×3 PD-RRAM array is fabricated to demonstrate optical image recognition, achieving a universal architecture with ultralow latency and low power consumption. This study highlights the great potential of the PD-RRAM optoelectronic array as an energy-efficient in-sensor computing primitive for future IoT applications.
Keywords: optoelectronic array; Internet of Things (IoT); RRAM; in-sensor computing; photodiodes
13. Intelligent Resource Allocation for Multiaccess Edge Computing in 5G Ultra-Dense Slicing Network Using Federated Multiagent DDPG Algorithm
Authors: Gong Yu, Gong Pengwei, Jiang He, Xie Wen, Wang Chenxi, Xu Peijun. China Communications, 2026, No. 1, pp. 273-289.
Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great number of computing-intensive workloads and require corresponding computation and communication resources. Multiaccess edge computing (MEC) can offload computing-intensive tasks to nearby edge servers, alleviating the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations, and network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation-offloading strategy to satisfy the varied quality-of-service (QoS) requirements of edge devices. To dynamically allocate limited resources, the problem is formulated as multiagent distributed deep reinforcement learning (DRL), which yields a low-overhead computation-offloading strategy and real-time resource-allocation decisions. In this context, federated learning is added to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
Keywords: federated learning; multiaccess edge computing; multiagent deep reinforcement learning; resource allocation; ultra-dense slicing network
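The federated training step described above typically aggregates the agents' local parameters with a data-weighted average (FedAvg-style). A minimal sketch; the two-parameter actor vectors and sample counts are hypothetical, not from the paper:

```python
import numpy as np

def fed_avg(agent_weights, sample_counts):
    """FedAvg-style aggregation: average local parameter vectors,
    weighted by each agent's local data size."""
    counts = np.asarray(sample_counts, dtype=float)
    stacked = np.stack(agent_weights)
    return (counts[:, None] * stacked).sum(axis=0) / counts.sum()

# Three edge agents with (hypothetical) 2-parameter local actors.
w1, w2, w3 = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
global_w = fed_avg([w1, w2, w3], sample_counts=[10, 10, 20])
# global_w is broadcast back to all agents for the next training round.
```

Only parameter vectors cross the network, never raw experience, which is what keeps the distributed DDPG training communication-light.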
14. Lightweight YOLOv5 with ShuffleNetV2 for Rice Disease Detection in Edge Computing
Authors: Qingtao Meng, Sang-Hyun Lee. Computers, Materials & Continua, 2026, No. 1, pp. 1395-1409.
This study proposes a lightweight rice-disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to balance real-time diagnostic performance against computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected for three major rice diseases frequently found in actual rice cultivation fields: rice blast, bacterial blight, and brown spot. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, yielding both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5-0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P had a smaller model size of 4.03 MB but lower performance, with an mAP@0.5 of 70.3%, mAP@0.5-0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for smart agriculture and real-time disease-diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Keywords: lightweight object detection; YOLOv5-V2; ShuffleNet V2; edge computing; rice disease detection
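The precision and recall figures quoted above follow the standard detection definitions at a fixed IoU threshold. A quick sketch with hypothetical per-class counts (not the paper's raw data) chosen to land near the reported 91.3% precision:

```python
def precision_recall(tp, fp, fn):
    """Standard detection metrics at a fixed IoU threshold:
    precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical counts for one disease class.
p, r = precision_recall(tp=913, fp=87, fn=154)
```

mAP then averages the area under this precision-recall curve over confidence thresholds and classes (and, for mAP@0.5-0.95, over IoU thresholds as well).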
15. Retinomorphic hardware for in-sensor computing (cited 4 times)
Authors: Guangdi Feng, Xiaoxu Zhang, Bobo Tian, Chungang Duan. InfoMat (SCIE, CSCD), 2023, No. 9, pp. 1-25.
Rapid developments in the Internet of Things and artificial intelligence impose higher requirements on image perception and learning of external environments through visual systems. However, limited by the von Neumann bottleneck, the physical separation of sensing, memory, and processing units in a conventional personal-computer-based vision system tends to consume significant energy and time and incurs additional hardware cost. By integrating computational tasks of multiple functionalities into the sensors themselves, emerging bio-inspired neuromorphic visual systems provide an opportunity to overcome these limitations. With high speed, ultralow power, and strong adaptability, it is highly desirable to develop a neuromorphic vision system based on highly precise in-sensor computing devices, namely retinomorphic devices. We present a timely review of retinomorphic devices for visual in-sensor computing. We begin with several types of physical mechanisms of photoelectric sensors from which artificial vision can be constructed. The potential applications of retinomorphic hardware are thereafter thoroughly summarized. We also highlight possible strategies for existing challenges and give a brief perspective on retinomorphic architectures for in-sensor computing.
Keywords: ferroelectric; in-sensor computing; photogating; retinomorphic device
16. Sensitive MoS_(2) photodetector cell with high air-stability for multifunctional in-sensor computing
Authors: Dong-Hui Zhao, Zheng-Hao Gu, Tian-Yu Wang, Xiao-Jiao Guo, Xi-Xi Jiang, Min Zhang, Hao Zhu, Lin Chen, Qing-Qing Sun, David Wei Zhang. Chip, 2022, No. 3, pp. 61-68.
With the development of artificial intelligence and the Internet of Things, the number of sensory nodes is growing rapidly, leading to the exchange of large quantities of redundant data between sensors and computing units. In-sensor computing schemes, which integrate sensing and processing, provide a promising route to addressing the sensing/processing bottleneck by reducing power consumption, time delay, and hardware redundancy. In this study, an in-sensor computing architecture was demonstrated using a photoelectronic cell based on a wafer-scale two-dimensional MoS_(2) thin film. The MoS_(2) photodetector cell uses a top-gate device structure with indium tin oxide (ITO) as the transparent gate electrode, exhibiting high air-stability and a high photoresponsivity (R) of up to 555.8 A W^(-1) at an illumination power density (P_(in)) of 16.0 μW cm^(-2) (λ = 500 nm). Additionally, a MoS_(2) photodetector array with uniform photoresponsive characteristics was achieved. Furthermore, logic gates, including inverter, NAND, and NOR, were realized with MoS_(2) photodetector cells. Such multifunctional and robust in-sensor computing is ascribed to the uniform wafer-scale MoS_(2) film grown by atomic layer deposition (ALD) and the unique device structure. Because the detection of optical signals and logic operations are achieved through area-efficient MoS_(2) photodetector cells, the proposed in-sensor computing device paves the way for applications in high-performance integrated sensing and processing systems.
Keywords: in-sensor computing; photodetectors; MoS_(2); atomic layer deposition; transparent electrode
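Photoresponsivity is conventionally computed as R = I_ph / (P_in · A). The sketch below back-solves a photocurrent that reproduces the reported order of magnitude; only the 16.0 μW cm^(-2) power density comes from the abstract, while the device area and photocurrent are assumptions:

```python
def responsivity_a_per_w(photocurrent_a, power_density_w_cm2, area_cm2):
    """Photoresponsivity R = I_ph / (P_in * A), in A/W."""
    return photocurrent_a / (power_density_w_cm2 * area_cm2)

# 16.0 uW/cm^2 is from the abstract; the 1e-4 cm^2 area and ~0.89 uA
# photocurrent are assumed, chosen to land near the reported ~555.8 A/W.
r_aw = responsivity_a_per_w(photocurrent_a=8.89e-7,
                            power_density_w_cm2=16.0e-6,
                            area_cm2=1.0e-4)
```

Responsivities of hundreds of A/W at microwatt-level illumination are what make such cells usable directly as analog compute elements rather than mere light meters.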
17. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (cited 1 time)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 863-879.
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task-offloading scheduling necessary to enhance the user experience. In this paper, we propose a priority-based task-scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks by their priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation-offloading strategy, improving the experience-replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource-allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task-processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computation offloading
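The prioritized experience replay that the abstract describes inside the Dueling-DDQN agent can be sketched as a buffer that samples transitions in proportion to their TD error. This is a generic sketch of the mechanism, not the paper's implementation; the class name, `alpha` exponent, and capacity are assumptions:

```python
# Sketch of proportional prioritized experience replay, as commonly
# paired with Dueling-DDQN. Hyperparameters here are illustrative.
import random

class PrioritizedReplayBuffer:
    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity = capacity
        self.alpha = alpha     # how strongly TD error shapes sampling
        self.buffer = []       # list of (priority, transition) pairs
        self.pos = 0           # ring-buffer write position

    def add(self, transition, td_error: float):
        # Small epsilon keeps zero-error transitions sampleable.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append((priority, transition))
        else:
            self.buffer[self.pos] = (priority, transition)
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size: int):
        # Proportional sampling: high-TD-error transitions replay more often.
        total = sum(p for p, _ in self.buffer)
        weights = [p / total for p, _ in self.buffer]
        idx = random.choices(range(len(self.buffer)), weights=weights, k=batch_size)
        return [self.buffer[i][1] for i in idx]
```

The effect is that rare, surprising offloading outcomes (large TD error) are revisited more often during training, which is what speeds up convergence relative to uniform replay.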
Computing over Space: Status, Challenges, and Opportunities (Cited by 1)
18
Authors: Yaoqi Liu, Yinhe Han, Hongxin Li, Shuhao Gu, Jibing Qiu, Ting Li. Engineering, 2025, No. 11, pp. 20-25 (6 pages)
1. Introduction. The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails the deployment of computational resources on platforms such as satellites to process large-scale data under constraints such as high radiation exposure, restricted power consumption, and minimized weight.
Keywords: satellite constellations; deployment; computational resources; data processing; space computing; radiation exposure; space; high-performance computing; power consumption
Optoelectronic memristor based on a-C:Te film for multi-mode reservoir computing (Cited by 2)
19
Authors: Qiaoling Tian, Kuo Xun, Zhuangzhuang Li, Xiaoning Zhao, Ya Lin, Ye Tao, Zhongqiang Wang, Daniele Ielmini, Haiyang Xu, Yichun Liu. Journal of Semiconductors, 2025, No. 2, pp. 144-149 (6 pages)
Optoelectronic memristors are generating growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents when 4-bit input electrical/optical signals are applied. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, and an accuracy of 98.7% is achieved. The optoelectronic memristor thus shows potential for developing multi-mode RC systems.
Keywords: optoelectronic memristor; volatile switching; multi-mode reservoir computing
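The "distinguishable response currents for 4-bit inputs" claim is the core of the reservoir step: a volatile device integrates a pulse train while its state decays between pulses, so the final read current encodes both which bits were 1 and their order. A minimal numerical sketch of that idea follows; the pulse amplitude and decay constant are illustrative, not measured device parameters:

```python
# Sketch of the reservoir step of a memristive RC system: a volatile
# state integrates a 4-bit pulse train with decay, so every distinct
# input pattern maps to a distinct read value. Parameters are made up.

def reservoir_state(bits, pulse=1.0, decay=0.5):
    """Each '1' pulse adds charge; the volatile state decays between
    pulses, so the final value depends on bit values AND their order."""
    state = 0.0
    for b in bits:
        state = state * decay + pulse * b
    return state

# With decay = 0.5 the final state is a scaled binary encoding of the
# pattern, so all 16 4-bit inputs land on distinct states -- the
# "distinguishable response currents" the abstract describes.
```

In a full RC system these 16 states form the feature vector that a simple trained readout layer (e.g., a linear classifier) maps to class labels; only the readout is trained, which is what makes RC hardware-friendly.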
DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network (Cited by 1)
20
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, No. 3, pp. 1-15 (15 pages)
Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading problem and the resource allocation problem are formulated as a mixed-integer nonlinear program (MINLP). This paper proposes a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility values, at a considerable time cost, compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low Earth orbit satellite; mobile edge computing; resource allocation
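For the Lagrange-multiplier resource allocation step the abstract mentions, a standard closed form exists for one common formulation: splitting a server's total CPU frequency F across offloaded tasks to minimize total processing latency Σ C_i/f_i subject to Σ f_i = F. The KKT conditions give f_i = F·√C_i / Σ_j √C_j. This is a sketch of that textbook formulation, not necessarily the exact objective in the paper; the workloads and capacity below are hypothetical:

```python
# Sketch of Lagrange-multiplier CPU allocation for MEC offloading.
# Minimize sum(C_i / f_i) subject to sum(f_i) = F. Setting the gradient
# of the Lagrangian to zero gives f_i proportional to sqrt(C_i).
import math

def allocate(cycles, total_freq):
    """Closed-form optimal split: f_i = F * sqrt(C_i) / sum_j sqrt(C_j)."""
    roots = [math.sqrt(c) for c in cycles]
    s = sum(roots)
    return [total_freq * r / s for r in roots]

cycles = [2e9, 8e9, 18e9]  # hypothetical task workloads (CPU cycles)
F = 10e9                   # hypothetical server capacity (cycles/s)
f = allocate(cycles, F)    # allocations in ratio 1 : 2 : 3
```

Because the closed form is exact, the DDPG agent only has to learn the discrete offloading decisions and transmit powers; the continuous allocation sub-problem is solved analytically at each step.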