Journal Literature
3 articles found
1. Nano device fabrication for in-memory and in-sensor reservoir computing
Authors: Yinan Lin, Xi Chen, Qianyu Zhang, Junqi You, Renjing Xu, Zhongrui Wang, Linfeng Sun. International Journal of Extreme Manufacturing, 2025, No. 1, pp. 46-71 (26 pages).
Recurrent neural networks (RNNs) have proven indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expenses and slow convergence, which impede their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review explains RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge materials-science developments, providing readers with a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, the authors identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
Keywords: reservoir computing; memristive device; fabrication; compute-in-memory; in-sensor computing
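The defining property of reservoir computing mentioned in the abstract is that the recurrent reservoir stays fixed and untrained, and only a linear readout is learned, which is what makes RC cheap enough for edge and memristive hardware. As a software illustration of that principle only (not code from the paper; all sizes and hyperparameters below are assumed), here is a minimal echo state network sketch:

```python
import numpy as np

# Minimal echo state network (ESN) sketch illustrating the RC principle:
# the recurrent reservoir is fixed and random, and only the linear readout
# W_out is trained (here via ridge regression). All sizes and hyperparameters
# are illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))     # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))      # scale spectral radius below 1

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ u + W @ x)          # fixed, untrained state update
        states.append(x.copy())
    return np.array(states)                    # shape (T, n_res)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u_seq = np.sin(t).reshape(-1, 1)
X = run_reservoir(u_seq[:-1])
Y = u_seq[1:]                                  # next-step targets

# Train only the readout, in closed form (ridge regression).
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ Y)
print("train MSE:", float(np.mean((X @ W_out - Y) ** 2)))
```

Because only W_out is trained, the expensive backpropagation-through-time of ordinary RNN training disappears, which is the cost advantage the abstract highlights and the reason a fixed physical (memristive) reservoir can stand in for the recurrent core.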
2. Research on high energy efficiency and low bit-width floating-point type data for abnormal object detection of transmission lines (Cited by: 1)
Authors: Chen Wang, Guozheng Peng, Rui Song, Jun Zhang, Li Yan. Global Energy Interconnection (EI, CSCD), 2024, No. 3, pp. 324-335 (12 pages).
Achieving a balance between accuracy and efficiency in target detection is an important research topic. To detect abnormal targets on power transmission lines at the power edge, this paper proposes an effective floating-point quantization method for reducing the network's data bit width. By performing exponent prealignment and mantissa shifting, the method avoids the frequent alignment operations of standard floating-point arithmetic, thereby further reducing the exponent and mantissa bit widths fed into training. This enables training low-bit-width models with low hardware-resource consumption while maintaining accuracy. Experiments were conducted on a dataset of real-world images of abnormal targets on transmission lines. The results indicate that, while essentially maintaining accuracy, the proposed method significantly reduces the data bit width compared with single-precision data, suggesting a marked ability to improve real-time detection of abnormal targets in transmission circuits. Furthermore, a qualitative analysis indicates that the proposed quantization method is particularly suitable for hardware architectures that integrate storage and computation, and that it exhibits good transferability.
Keywords: power edge; data format; quantization; compute-in-memory
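The abstract's key step is exponent prealignment with mantissa shifting, so that operands already share an exponent and the per-addition alignment of standard floating point is avoided. The sketch below illustrates the general idea in a block-floating-point style: a group of values is aligned to the group's maximum exponent, and the mantissas are right-shifted and truncated to a low bit width. This is a hypothetical illustration of the concept, not the paper's exact scheme; the mantissa width is an assumed value.

```python
import numpy as np

# Block-floating-point style sketch of exponent prealignment and mantissa
# shifting, in the spirit of the method the abstract describes (NOT the
# paper's exact scheme). Values in a group are aligned to the group's maximum
# exponent; mantissas are right-shifted accordingly and kept at low bit width.

MANT_BITS = 5  # low-bit-width mantissa kept after alignment (assumed value)

def prealign_quantize(x):
    """Quantize a 1-D float array to a shared exponent and low-bit mantissas."""
    m, e = np.frexp(x)                         # x = m * 2**e, 0.5 <= |m| < 1
    e_max = int(e.max())                       # shared (prealigned) exponent
    shift = e_max - e                          # per-element right shift
    m_aligned = m / (2.0 ** shift)             # all mantissas now share e_max
    scale = 2 ** MANT_BITS
    m_q = np.round(m_aligned * scale) / scale  # keep only MANT_BITS of mantissa
    return m_q, e_max

def dequantize(m_q, e_max):
    """Restore approximate values: m_q * 2**e_max."""
    return np.ldexp(m_q, e_max)

x = np.array([0.71, -3.2, 0.004, 12.5])
m_q, e_max = prealign_quantize(x)
print("original:", x)
print("restored:", dequantize(m_q, e_max))
```

Once a group shares one exponent, additions reduce to fixed-point sums of the short mantissas, with no per-operation alignment; the price, visible in the example output, is that small values in a group dominated by a large one lose precision.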
3. Linearity Performance of Charge Domain In-Memory Computing: Analysis and Calibration
Authors: Heng Zhang, Yichuan Bai, Junjie Shen, Yuan Du, Li Du. Integrated Circuits and Systems, 2024, No. 1, pp. 43-52 (10 pages).
Deep learning has recently gained significant prominence in real-world applications such as image recognition, natural language processing, and autonomous vehicles. While deep neural networks have diverse architectures, the main operations within these models are matrix-vector multiplications (MVMs). Compute-in-memory (CIM) architectures are promising solutions for accelerating massive MVM operations by alleviating the frequent data movement of traditional processors. Analog CIM macros leverage current-accumulating or charge-sharing mechanisms to perform multiply-and-accumulate (MAC) computations. Although they can achieve high throughput and efficiency, computing accuracy is sacrificed to analog nonidealities. To ensure precise MAC calculations, it is crucial to analyze the sources of these nonidealities and identify their impacts, along with corresponding solutions. In this paper, a comprehensive linearity analysis and dedicated calibration methods for charge-domain static random-access memory (SRAM) based in-memory computing circuits are proposed. We analyze nonidealities in three areas based on the mechanism of charge-domain computing: charge injection effects, temperature variations, and ADC reference-voltage mismatch. By designing a 256×256 CIM macro and investigating it via post-layout simulation, we conclude that these nonidealities do not deteriorate computing linearity but only cause scaling and bias drift. To mitigate the identified scaling and bias drift, we propose three calibration methods ranging from the circuit level to the algorithm level, all of which exhibit promising results. The comprehensive analysis and calibration methods can assist in designing CIM macros with more accurate MAC computations, thereby supporting more robust deep learning inference.
Keywords: deep learning accelerator; compute-in-memory; matrix-vector multiplication; charge domain computing; non-ideality analysis; linearity calibration
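The paper's central finding is that the analyzed nonidealities leave the MAC transfer function linear, only scaling and shifting it: y_meas ≈ a·y_ideal + b. That makes algorithm-level calibration straightforward in principle: estimate a and b from known calibration inputs, then invert them. The sketch below is an illustrative stand-in for that idea (the macro model, noise level, and least-squares fit are assumptions, not the paper's circuit-level methods):

```python
import numpy as np

# Illustrative algorithm-level calibration for a CIM macro whose nonidealities
# preserve linearity but introduce scaling and bias drift: y_meas = a*y_ideal + b.
# We fit (a, b) by least squares on calibration vectors, then invert the model.
# cim_mac() is a simulated stand-in for measured hardware outputs.

rng = np.random.default_rng(1)
a_true, b_true = 0.93, 0.07                    # unknown drift (simulated)

def cim_mac(weights, x):
    """Stand-in for a measured CIM MAC with scaling/bias drift plus noise."""
    ideal = weights @ x
    return a_true * ideal + b_true + rng.normal(0, 1e-3, ideal.shape)

W = rng.uniform(-1, 1, (8, 16))                # weights stored in the macro

# Calibration: apply known inputs, compare measured vs. ideal MAC outputs.
X_cal = rng.uniform(-1, 1, (16, 32))           # 32 known calibration vectors
y_ideal = (W @ X_cal).ravel()
y_meas = cim_mac(W, X_cal).ravel()

# Fit y_meas = a*y_ideal + b by linear least squares.
A = np.stack([y_ideal, np.ones_like(y_ideal)], axis=1)
(a_est, b_est), *_ = np.linalg.lstsq(A, y_meas, rcond=None)

# Inference: undo the fitted scaling and bias on new measurements.
x_new = rng.uniform(-1, 1, 16)
y_corrected = (cim_mac(W, x_new) - b_est) / a_est
print("max |error| after calibration:",
      float(np.max(np.abs(y_corrected - W @ x_new))))
```

Because the distortion is affine rather than genuinely nonlinear, two fitted constants suffice to recover accurate MACs; this is why the paper's conclusion that linearity is preserved matters so much for calibration cost.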