Journal Articles
256,972 articles found
1. Back-gate-tuned organic electrochemical transistor with temporal dynamic modulation for reservoir computing
Authors: Qian Xu, Jie Qiu, Mengyang Liu, Dongzi Yang, Tingpan Lan, Jie Cao, Yingfen Wei, Hao Jiang, Ming Wang. Journal of Semiconductors, 2026, Issue 1, pp. 118-123.
Abstract: Organic electrochemical transistor (OECT) devices show great promise for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application to multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with a tunable relaxation time (τ), achieved by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behavior can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in an RC system for intelligent driving-trajectory prediction, improving prediction accuracy from below 69% to 99%. These results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
Keywords: neuromorphic computing; reservoir computing; OECT; tunable dynamics; trajectory prediction
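The abstract does not give the device model, but a first-order leaky (exponentially relaxing) node is the standard abstraction for such a reservoir element. The sketch below is that generic assumption, not the paper's model; it shows why reconfiguring τ between the reported extremes of 93 and 541 ms changes how long an input pulse remains visible in the node state.

```python
import math

def node_state(pulse_times_ms, read_time_ms, tau_ms, amplitude=1.0):
    """First-order leaky node: each input pulse adds `amplitude` to the state,
    which then relaxes as exp(-dt/tau). Longer tau -> longer fading memory."""
    return sum(
        amplitude * math.exp(-(read_time_ms - t) / tau_ms)
        for t in pulse_times_ms
        if t <= read_time_ms
    )

# A pulse applied 200 ms before readout is nearly forgotten at tau = 93 ms
# but still clearly present at tau = 541 ms.
fast = node_state([0.0], 200.0, tau_ms=93.0)
slow = node_state([0.0], 200.0, tau_ms=541.0)
```

A reservoir built from nodes with different (or reconfigurable) τ can thus match its memory horizon to the time scale of the task, which is the property the trajectory-prediction demonstration exploits.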
2. Two-Dimensional MXene-Based Advanced Sensors for Neuromorphic Computing Intelligent Application
Authors: Lin Lu, Bo Sun, Zheng Wang, Jialin Meng, Tianyu Wang. Nano-Micro Letters, 2026, Issue 2, pp. 664-691.
Abstract: As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can form solid solutions or ordered structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of research progress on MXenes in the neuromorphic sensing domain and discusses the critical challenges that remain. Key bottlenecks, such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration, hinder practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and to promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
Keywords: two-dimensional MXenes; sensor; neuromorphic computing; multimodal intelligent system; wearable electronics
3. Mechanical Properties Analysis of Flexible Memristors for Neuromorphic Computing
Authors: Zhenqian Zhu, Jiheng Shui, Tianyu Wang, Jialin Meng. Nano-Micro Letters, 2026, Issue 1, pp. 53-79.
Abstract: The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors show great potential for emulating artificial synapses in high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structures, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, covering single devices, device arrays, and integration. The challenges and future prospects of flexible memristors for neuromorphic computing are discussed in depth, paving the way for wearable smart electronics and for applications in large-scale neuromorphic computing and high-order intelligent robotics.
Keywords: flexible memristor; neuromorphic computing; mechanical property; wearable electronics
4. High-Entropy Oxide Memristors for Neuromorphic Computing: From Material Engineering to Functional Integration
Authors: Jia-Li Yang, Xin-Gui Tang, Xuan Gu, Qi-Jun Sun, Zhen-Hua Tang, Wen-Hua Li, Yan-Ping Jiang. Nano-Micro Letters, 2026, Issue 2, pp. 138-169.
Abstract: High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
Keywords: high-entropy oxides; memristors; neuromorphic computing; configurational entropy; resistive switching
5. A low-thermal-budget MOSFET-based reservoir computing for temporal data classification
Authors: Yanqing Li, Feixiong Wang, Heyi Huang, Yadong Zhang, Xiangpeng Liang, Shuang Liu, Jianshi Tang, Huaxiang Yin. Journal of Semiconductors, 2026, Issue 1, pp. 42-48.
Abstract: Neuromorphic devices have garnered significant attention as potential building blocks for energy-efficient hardware systems owing to their capacity to emulate the computational efficiency of the brain. In this regard, the reservoir computing (RC) framework, which leverages straightforward training methods and efficient temporal signal processing, has emerged as a promising scheme. While various physical reservoir devices, including ferroelectric, optoelectronic, and memristor-based systems, have been demonstrated, many still face challenges related to compatibility with mainstream complementary metal-oxide-semiconductor (CMOS) integration processes. This study introduces a silicon-based Schottky-barrier metal-oxide-semiconductor field-effect transistor (SB-MOSFET), fabricated under a low thermal budget and compatible with back-end-of-line (BEOL) processing. The device exhibits short-term memory characteristics enabled by modulation of the Schottky barriers and by charge trapping. Utilizing these characteristics, an RC system for temporal data processing was constructed and validated on a 5×4 digit classification task, achieving an accuracy exceeding 98% after 50 training epochs. The system also processed temporal signals in waveform classification and prediction tasks using time-division multiplexing. Overall, the SB-MOSFET's high compatibility with CMOS technology provides substantial advantages for large-scale integration, enabling the development of energy-efficient reservoir computing hardware.
Keywords: Schottky barrier; MOSFET; back-end-of-line integration; reservoir computing
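The "straightforward training" the abstract refers to is the hallmark of RC: only a linear readout is trained, typically by ridge regression, while the physical device supplies the temporal features. The sketch below illustrates that scheme with a simulated fading-memory trace and synthetic stand-ins for the 5×4 digit patterns; the device physics and dataset are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def reservoir_trace(bits, tau=2.0):
    """Time-division multiplexing: feed one row of pixels into a single
    fading-memory element and read the state after every bit (virtual nodes)."""
    state, feats = 0.0, []
    for b in bits:
        state = state * np.exp(-1.0 / tau) + float(b)
        feats.append(state)
    return np.array(feats)

# Synthetic stand-ins for 5x4 binary digit images, one per class.
digits = rng.integers(0, 2, size=(10, 5, 4))
X = np.stack([np.concatenate([reservoir_trace(r) for r in d]) for d in digits])
Y = np.eye(10)  # one-hot labels

# Only the linear readout is trained (ridge regression) -- the cheap-training
# property that makes RC attractive for device-based hardware.
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ Y)
accuracy = float(((X @ W).argmax(axis=1) == np.arange(10)).mean())
```

Because the reservoir itself is fixed hardware, swapping tasks only requires refitting `W`, which is a single linear solve rather than gradient training.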
6. Intelligent Resource Allocation for Multiaccess Edge Computing in 5G Ultra-Dense Slicing Network Using Federated Multiagent DDPG Algorithm
Authors: Gong Yu, Gong Pengwei, Jiang He, Xie Wen, Wang Chenxi, Xu Peijun. China Communications, 2026, Issue 1, pp. 273-289.
Abstract: Advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great number of computation-intensive tasks and require corresponding computation and communication resources. Multiaccess edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, alleviating the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations, and network slicing can support various applications across different communication scenarios. This paper therefore integrates ultra-dense network slicing with MEC and introduces a hybrid computation-offloading strategy to satisfy the varied quality-of-service (QoS) requirements of edge devices. To dynamically allocate limited resources, the problem is formulated as multiagent distributed deep reinforcement learning (DRL), yielding a low-overhead computation-offloading strategy and real-time resource allocation decisions. Federated learning is added to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
Keywords: federated learning; multiaccess edge computing; multiagent deep reinforcement learning; resource allocation; ultra-dense slicing network
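The abstract combines federated learning with multiagent DDPG but does not spell out the aggregation rule; the usual choice is federated averaging (FedAvg) of the agents' network parameters. The sketch below is that generic step under the assumption of parameter-vector averaging weighted by local data size, not the paper's exact procedure.

```python
import numpy as np

def federated_average(agent_params, sample_counts=None):
    """One FedAvg round: each edge agent trains its DDPG networks locally,
    then a coordinator averages the parameter vectors, weighted by the
    number of local samples each agent trained on."""
    agent_params = [np.asarray(p, dtype=float) for p in agent_params]
    if sample_counts is None:
        sample_counts = [1.0] * len(agent_params)
    total = float(sum(sample_counts))
    return sum((c / total) * p for c, p in zip(sample_counts, agent_params))

# Three agents' (toy) actor parameters after a round of local updates:
avg = federated_average([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
# Weighted case: the agent with 3x the data pulls the average toward itself.
weighted = federated_average([[0.0, 0.0], [4.0, 4.0]], [1.0, 3.0])
```

The appeal in an edge setting is that raw task traces never leave the base stations; only parameter vectors are exchanged each round.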
7. Lightweight YOLOv5 with ShuffleNetV2 for Rice Disease Detection in Edge Computing
Authors: Qingtao Meng, Sang-Hyun Lee. Computers, Materials & Continua, 2026, Issue 1, pp. 1395-1409.
Abstract: This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to balance real-time diagnostic performance with computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) of three major rice diseases frequently found in actual rice cultivation fields, Rice Blast, Bacterial Blight, and Brown Spot, were collected as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, achieving both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model for quantitative evaluation. Experimental results show that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5-0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P had a smaller model size of 4.03 MB but lower performance, with an mAP@0.5 of 70.3%, mAP@0.5-0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Keywords: lightweight object detection; YOLOv5-V2; ShuffleNet V2; edge computing; rice disease detection
8. High-Throughput and Energy-Saving Blockchain for Untrusted IIoT Device Participation in Edge-to-End Collaborative Computing
Authors: Zhang Zhen, Huang Xiaowei, Li Chengjie, Li Aihua, Xiao Liqun. China Communications, 2025, Issue 11, pp. 132-143.
Abstract: The integration of blockchain with edge-to-end collaborative computing offers a solution to the trust issues arising from untrusted IIoT devices. However, ensuring efficiency and energy saving when applying blockchain to edge-to-end collaborative computing remains a significant challenge. To tackle this, this paper proposes an innovative task-oriented blockchain architecture comprising trusted edge computing (EC) servers and untrusted Industrial Internet of Things (IIoT) devices. Untrusted IIoT devices are organized into clusters, each executing a task in the form of smart contracts, and the work logs of a task are packaged into a block. Executing a task with smart contracts within a cluster ensures the reliability of the task result, and reducing the scope of nodes involved in block consensus increases the overall throughput of the blockchain. Packaging task logs into blocks and storing and propagating blocks through the corresponding EC servers reduces network load and avoids competition for computing power. The paper also presents the architecture's theoretical transactions-per-second (TPS) and failure-probability calculations. Experimental results demonstrate that the architecture ensures computational security, improves TPS, and reduces resource consumption.
Keywords: blockchain technology; consensus mechanism; edge-to-end collaborative computing; untrusted IIoT devices
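The paper derives its own TPS and failure-probability expressions, which the abstract does not reproduce. The following is a generic back-of-the-envelope version under two simple assumptions: per-cluster consensus lets clusters commit blocks in parallel, and device faults are independent with a fixed per-device rate.

```python
from math import comb

def aggregate_tps(n_clusters, tx_per_block, block_interval_s):
    """With consensus scoped to individual clusters, clusters commit blocks
    in parallel, so aggregate throughput scales with the cluster count."""
    return n_clusters * tx_per_block / block_interval_s

def cluster_failure_prob(n_devices, max_faulty, p_fault):
    """Probability that a cluster of n untrusted devices exceeds its fault
    tolerance, under an independent binomial fault model."""
    return sum(
        comb(n_devices, k) * p_fault**k * (1 - p_fault) ** (n_devices - k)
        for k in range(max_faulty + 1, n_devices + 1)
    )

tps = aggregate_tps(4, 500, 2.0)            # four clusters, 500 tx per 2 s block
p_fail = cluster_failure_prob(3, 0, 0.1)    # any fault breaks a 0-tolerant cluster
```

The tension the architecture navigates is visible here: smaller clusters raise parallel TPS but, for a fixed fault tolerance, raise the per-cluster failure probability.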
9. The use of high-performance and high-throughput computing for the fertilization of digital earth and global change studies (Cited by: 2)
Authors: Yong Xue, Dominic Palmer-Brown, Huadong Guo. International Journal of Digital Earth, 2011, Issue 3, pp. 185-210.
Abstract: The study of global climate change seeks to understand: (1) the components of the Earth's varying environmental system, with a particular focus on climate; (2) how these components interact to determine present conditions; (3) the factors driving these components; (4) the history of global change and the projection of future change; and (5) how knowledge about global environmental variability and change can be applied to present-day and future decision-making. This paper addresses the use of high-performance computing (HPC) and high-throughput computing (HTC) for global change studies on the Digital Earth (DE) platform. Two aspects of HPC/HTC use on the DE platform are the processing of data from all sources, especially Earth observation data, and the simulation of global change models. HPC/HTC is an essential and efficient tool for processing vast amounts of global data, especially Earth observation data. The current trend involves running complex global climate models on potentially millions of personal computers to achieve better climate change predictions than would ever be possible using the supercomputers currently available to scientists.
Keywords: high-performance computing (HPC); high-throughput computing (HTC); digital earth; global change; climate change; Earth observation; grid computing
10. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited by: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua, 2025, Issue 1, pp. 863-879.
Abstract: Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task-offloading scheduling necessary to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which determines the execution order of tasks by priority. We then apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation-offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we use the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
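A minimal sketch of the prioritized experience replay the abstract adds to Dueling-DDQN: transitions are sampled in proportion to |TD error|^α rather than uniformly, so surprising transitions are replayed more often. The sum-tree machinery and importance-sampling corrections of full implementations are omitted, and α = 0.6 is an illustrative choice, not the paper's setting.

```python
import random

class PrioritizedReplay:
    """Proportional prioritized replay: P(i) proportional to (|td_i| + eps)^alpha."""

    def __init__(self, alpha=0.6, eps=1e-6):
        self.alpha, self.eps = alpha, eps
        self.items, self.prios = [], []

    def add(self, transition, td_error):
        self.items.append(transition)
        self.prios.append((abs(td_error) + self.eps) ** self.alpha)

    def sample(self, k):
        # High-TD-error ("surprising") transitions dominate the batch.
        return random.choices(self.items, weights=self.prios, k=k)

random.seed(0)
buf = PrioritizedReplay()
buf.add("routine_step", td_error=0.01)
buf.add("surprising_step", td_error=10.0)
batch = buf.sample(1000)
```

Against uniform replay, this focuses updates on the transitions the Q-network currently predicts worst, which typically speeds convergence of the offloading policy.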
11. The rise of high-throughput computing (Cited by: 1)
Authors: Ning-hui Sun, Yun-gang Bao, Dong-rui Fan. Frontiers of Information Technology & Electronic Engineering, 2018, Issue 10, pp. 1245-1250.
Abstract: In recent years, the advent of emerging computing applications, such as cloud computing, artificial intelligence, and the Internet of Things, has led to three common requirements in computer system design: high utilization, high throughput, and low latency. Herein, these are referred to as the requirements of "high-throughput computing" (HTC). We further propose a new indicator called "sysentropy" for measuring the degree of chaos and uncertainty within a computer system. We argue that unlike traditional computing systems, which pursue high performance and low power consumption, HTC should aim to achieve low sysentropy. From the perspective of computer architecture, however, HTC faces two major challenges: (1) fully exploiting an application's data parallelism and execution concurrency to achieve high throughput, and (2) achieving low latency even when severe contention occurs in highly utilized data paths. To overcome these two challenges, we introduce two techniques: on-chip dataflow architecture and labeled von Neumann architecture. We build two prototypes that achieve high throughput and low latency, thereby significantly reducing sysentropy.
Keywords: high-throughput computing; sysentropy; information superbahn
12. Computing over Space: Status, Challenges, and Opportunities (Cited by: 1)
Authors: Yaoqi Liu, Yinhe Han, Hongxin Li, Shuhao Gu, Jibing Qiu, Ting Li. Engineering, 2025, Issue 11, pp. 20-25.
Abstract: The rapid expansion of satellite constellations in recent years has resulted in the generation of massive amounts of data. This surge in data, coupled with diverse application scenarios, underscores the escalating demand for high-performance computing over space. Computing over space entails deploying computational resources on platforms such as satellites to process large-scale data under constraints including high radiation exposure, restricted power consumption, and minimized weight.
Keywords: satellite constellations; deployment; computational resources; data processing; space computing; radiation exposure; space; high-performance computing; power consumption
13. Design strategies for fast-charging multiphase Na-ion layered cathodes: Dopant selection via computational high-throughput screening
Authors: Taehyun Park, Juo Kim, Yerim Jung, Jiwon Sun, Kyoungmin Min. Journal of Energy Chemistry, 2025, Issue 8, pp. 103-113.
Abstract: Advancing fast-charging sodium-ion batteries (SIBs) requires cathode materials with superior structural stability and enhanced Na+ diffusion kinetics. Multiphase layered transition metal oxides (LTMOs), which leverage the synergistic properties of two distinct monophasic LTMOs, have garnered significant attention; however, their efficacy under fast-charging conditions remains underexplored. In this study, we developed a high-throughput computational screening framework to identify optimal dopants that maximize the electrochemical performance of LTMOs. Specifically, we evaluated 32 dopants in the P2/O3-type Mn/Fe-based Na_(x)Mn_(0.5)Fe_(0.5)O_(2) (NMFO) cathode material. Multiphase LTMOs satisfying criteria for thermodynamic and structural stability, minimized phase transitions, and enhanced Na+ diffusion were systematically screened for suitability in fast-charging applications. The analysis identified two dopants, Ti and Zr, that met all predefined screening criteria. Furthermore, we ranked and scored dopants by their alignment with these criteria, establishing a comprehensive dopant performance database. These findings provide a robust foundation for experimental exploration and offer detailed guidelines for tailoring dopants to optimize fast-charging SIBs.
Keywords: sodium-ion battery cathode; multiphase layered transition metal oxide; fast charging; high-throughput computational screening; doping strategy
14. Optoelectronic memristor based on a-C:Te film for multi-mode reservoir computing (Cited by: 2)
Authors: Qiaoling Tian, Kuo Xun, Zhuangzhuang Li, Xiaoning Zhao, Ya Lin, Ye Tao, Zhongqiang Wang, Daniele Ielmini, Haiyang Xu, Yichun Liu. Journal of Semiconductors, 2025, Issue 2, pp. 144-149.
Abstract: Optoelectronic memristors are generating growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents when 4-bit input electrical/optical signals are adjusted. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, achieving an accuracy of 98.7%. The optoelectronic memristor thus shows potential for developing multi-mode RC systems.
Keywords: optoelectronic memristor; volatile switching; multi-mode reservoir computing
15. DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network (Cited by: 1)
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, Issue 3, pp. 1-15.
Abstract: Low Earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities, forming an LEO satellite edge computing system that provides computing services to ground users worldwide. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user-local CPU cycles is derived by relaxation. Simulation results show that the proposed algorithm converges well and significantly reduces the system utility values at a reasonable time cost compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low Earth orbit satellite; mobile edge computing; resource allocation
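The abstract states that the MEC server resource allocation is solved in closed form via Lagrange multipliers. A standard instance of that derivation, which may differ from the paper's exact objective, is minimizing total processing delay Σ c_i/f_i subject to Σ f_i = F; stationarity of the Lagrangian gives f_i ∝ √c_i.

```python
from math import sqrt

def allocate_cycles(task_cycles, total_capacity):
    """Minimize sum(c_i / f_i) subject to sum(f_i) = F.
    Setting d/df_i [c_i/f_i + lam*f_i] = 0 gives f_i = sqrt(c_i/lam),
    so the capacity is split in proportion to sqrt(c_i)."""
    roots = [sqrt(c) for c in task_cycles]
    s = sum(roots)
    return [total_capacity * r / s for r in roots]

alloc = allocate_cycles([1.0, 4.0], 3.0)
# Sanity check: the sqrt rule beats an equal split on total delay.
sqrt_delay = sum(c / f for c, f in zip([1.0, 4.0], alloc))
equal_delay = sum(c / 1.5 for c in [1.0, 4.0])
```

The square-root rule is the intuition worth keeping: heavier tasks get more CPU, but only sublinearly, because the marginal delay reduction of extra cycles falls off as 1/f².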
16. Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning (Cited by: 1)
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu. Computers, Materials & Continua, 2025, Issue 2, pp. 2609-2635.
Abstract: As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities involved. In real application scenarios, however, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic settings. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption, modeling the task offloading strategy as a directed acyclic graph (DAG). We further propose a distributed edge computing adaptive task offloading algorithm rooted in MRL, which integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies across diverse network environments. To jointly optimize delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into the framework. Simulation results demonstrate the superiority of the proposed solution, achieving a 21% reduction in delay and a 19% decrease in energy consumption compared with alternative task offloading schemes, while adapting swiftly to changes in various network environments.
Keywords: edge computing; adaptive; meta; task offloading; joint optimization
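NSGA-II's core step, relevant to the joint delay/energy objective above, is ranking candidate solutions by non-domination. A minimal first-front filter for minimizing (delay, energy), with made-up candidate numbers, looks like this; crowding distance and the genetic operators of full NSGA-II are omitted.

```python
def pareto_front(points):
    """Return the non-dominated (first-front) points, minimizing both axes."""
    def dominates(a, b):
        # a dominates b: no worse on every objective, strictly better on one.
        return all(x <= y for x, y in zip(a, b)) and any(
            x < y for x, y in zip(a, b)
        )
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (delay_ms, energy_mJ) of candidate offloading plans (illustrative numbers):
front = pareto_front([(2, 9), (3, 3), (5, 4), (8, 1)])
# (5, 4) is dominated by (3, 3); the surviving plans trade delay for energy.
```

Rather than collapsing delay and energy into one weighted score, the front preserves the whole trade-off curve, letting the scheduler pick a point per scenario.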
17. Model-free prediction of chaotic dynamics with parameter-aware reservoir computing (Cited by: 1)
Authors: Jianmin Guo, Yao Du, Haibo Luo, Xuan Wang, Yizhen Yu, Xingang Wang. Chinese Physics B, 2025, Issue 4, pp. 143-152.
Abstract: Model-free, data-driven prediction of chaotic motion is a long-standing challenge in nonlinear science. Stimulated by recent progress in machine learning, considerable attention has been given to the inference of chaos by the technique of reservoir computing (RC). In particular, by incorporating a parameter-control channel into the standard RC, it has been demonstrated that the machine is able not only to replicate the dynamics of the training states but also to infer new dynamics not included in the training set. This new machine-learning scheme, termed parameter-aware RC, opens up new avenues for data-based analysis of chaotic systems and holds promise for predicting and controlling many real-world complex systems. Here, using typical chaotic systems as examples, we give a comprehensive introduction to this powerful machine-learning technique, covering the algorithm, its implementation, its performance, and the open questions calling for further study.
Keywords: chaos prediction; time-series analysis; bifurcation diagram; parameter-aware reservoir computing
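A structural sketch of the parameter-control channel: the bifurcation parameter p enters the reservoir as an extra input alongside the signal u, so one trained readout can later be queried at parameter values outside the training set. Reservoir size and spectral radius below are illustrative assumptions, not the paper's settings, and the trained readout is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

class ParamAwareReservoir:
    """Echo state network whose input layer carries [signal, parameter]."""

    def __init__(self, n=100, spectral_radius=0.9):
        W = rng.normal(size=(n, n))
        # Rescale so the largest eigenvalue magnitude is spectral_radius < 1.
        self.W = spectral_radius * W / np.max(np.abs(np.linalg.eigvals(W)))
        self.W_in = rng.normal(size=(n, 2))  # columns: signal u, parameter p
        self.x = np.zeros(n)

    def step(self, u, p):
        self.x = np.tanh(self.W @ self.x + self.W_in @ np.array([u, p]))
        return self.x

res = ParamAwareReservoir()
state_a = res.step(0.5, p=3.6).copy()  # same drive, different parameter ->
res.x[:] = 0.0
state_b = res.step(0.5, p=3.9).copy()  # different reservoir response
```

Training at a few sampled p values and then sweeping p at inference is what lets the scheme trace out bifurcation diagrams the machine never saw.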
18. Near-Sensor Edge Computing System Enabled by a CMOS-Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited by: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. Nano-Micro Letters, 2025, Issue 11, pp. 1-20.
Abstract: The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computation, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). The system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
Exploitation of temporal dynamics and synaptic plasticity in multilayered ITO/ZnO/IGZO/ZnO/ITO memristor for energy-efficient reservoir computing (Cited by: 1)
19
Authors: Muhammad Ismail, Seungjun Lee, Maria Rasheed, Chandreswar Mahata, Sungjun Kim. Journal of Materials Science & Technology, 2025, No. 32, pp. 37-52.
As the demand for advanced computational systems capable of handling large data volumes rises, nano-electronic devices, such as memristors, are being developed for efficient data processing, especially in reservoir computing (RC). RC enables the processing of temporal information with minimal training costs, making it a promising approach for neuromorphic computing. However, current memristor devices often suffer from limitations in dynamic conductance and temporal behavior, which affect their performance in these applications. In this study, we present a multilayered indium-tin-oxide (ITO)/ZnO/indium-gallium-zinc oxide (IGZO)/ZnO/ITO memristor fabricated via radio-frequency sputtering to explore its filamentary and nonfilamentary resistive switching (RS) characteristics. High-resolution transmission electron microscopy confirmed the polycrystalline structure of the ZnO/IGZO/ZnO active layer. Dual switching modes were demonstrated by controlling the compliance current (I_CC). In the filamentary mode, the memristor exhibited a large memory window (10^3), low operating voltages (±2 V), excellent cycle-to-cycle stability, and multilevel switching with controlled reset-stop voltages, making it suitable for high-density memory applications. Nonfilamentary switching demonstrated stable on/off ratios above 10, endurance up to 10^2 cycles, and retention suited for short-term memory. Key synaptic behaviors, such as paired-pulse facilitation (PPF), post-tetanic potentiation (PTP), and spike-rate-dependent plasticity (SRDP), were successfully emulated by modulating pulse amplitude, width, and interval. Experience-dependent plasticity (EDP) was also demonstrated, further replicating biological synaptic functions. These temporal properties were utilized to develop a 4-bit reservoir computing system with 16 distinct conductance states, enabling efficient information encoding. For image recognition tasks, convolutional neural network (CNN) simulations achieved a high accuracy of 98.45% after 25 training epochs, outperforming artificial neural network (ANN) simulations (87.79%). These findings demonstrate that the multilayered memristor performs well in neuromorphic systems, particularly for complex pattern recognition tasks, such as digit and letter classification.
Keywords: Memristors; Temporal dynamics; Synaptic plasticity; Reservoir computing; Neuromorphic systems; Image recognition
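The 4-bit reservoir encoding described in this abstract relies on fading memory: each voltage pulse perturbs the device conductance, which partially relaxes before the next pulse, so the final read current reflects the entire pulse history. A toy leaky-integrator sketch of this idea (the decay constant and unit pulse increment are illustrative assumptions, not the device's measured dynamics):

```python
def reservoir_state(bits, decay=0.6):
    """Toy fading-memory reservoir: conductance g relaxes by a factor
    `decay` between pulses and is incremented by each '1' pulse."""
    g = 0.0
    for b in bits:
        g = decay * g + b
    return g

# All 16 four-bit pulse patterns map to distinct final states,
# mirroring the 16 distinguishable conductance levels in the paper.
patterns = [tuple((n >> i) & 1 for i in (3, 2, 1, 0)) for n in range(16)]
states = {p: reservoir_state(p) for p in patterns}
assert len(set(round(s, 9) for s in states.values())) == 16
```

In an RC system these 16 reservoir states would feed a small trained readout layer (e.g. the CNN/ANN classifiers the abstract compares), while the reservoir itself needs no training.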
Synaptic devices based on silicon carbide for neuromorphic computing (Cited by: 1)
20
Authors: Boyu Ye, Xiao Liu, Chao Wu, Wensheng Yan, Xiaodong Pi. Journal of Semiconductors, 2025, No. 2, pp. 38-51.
To address the increasing demand for massive data storage and processing, brain-inspired neuromorphic computing systems based on artificial synaptic devices have been actively developed in recent years. Among the various materials investigated for the fabrication of synaptic devices, silicon carbide (SiC) has emerged as a preferred choice due to its high electron mobility, superior thermal conductivity, and excellent thermal stability, which make it a promising candidate for neuromorphic applications in harsh environments. In this review, recent progress in SiC-based synaptic devices is summarized. First, an in-depth discussion is conducted regarding the categories, working mechanisms, and structural designs of these devices. Subsequently, several application scenarios for SiC-based synaptic devices are presented. Finally, a few perspectives and directions for their future development are outlined.
Keywords: Silicon carbide; Wide bandgap semiconductors; Synaptic devices; Neuromorphic computing; High temperature
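Paired-pulse facilitation, one of the core synaptic behaviors such devices emulate, is commonly fitted by an exponential decay of the facilitation ratio with pulse interval. A sketch of that standard empirical model (the magnitude c and time constant tau_ms are illustrative placeholders, not measured SiC-device values):

```python
import math

def ppf_ratio(dt_ms, c=0.9, tau_ms=50.0):
    """PPF ratio A2/A1 = 1 + c*exp(-dt/tau): the response to the second
    pulse is enhanced most at short intervals and decays toward 1."""
    return 1.0 + c * math.exp(-dt_ms / tau_ms)

# Facilitation fades as the pulse interval grows, as in biological synapses.
short, long_ = ppf_ratio(10.0), ppf_ratio(200.0)
assert short > long_ > 1.0
```

Fitting this curve to measured pulse-pair responses is how a device's short-term plasticity time constant is typically extracted.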