Journal Articles
7 articles found
1. A 28 nm 576K RRAM-based computing-in-memory macro featuring hybrid programming with area efficiency of 2.82 TOPS/mm^2
Authors: Siqi Liu, Songtao Wei, Peng Yao, Dong Wu, Lu Jie, Sining Pan, Jianshi Tang, Bin Gao, He Qian, Huaqiang Wu. Journal of Semiconductors, 2025, Issue 6, pp. 112-119 (8 pages)
Computing-in-memory (CIM) has been a promising candidate for artificial-intelligence applications thanks to the absence of data transfer between computation and storage blocks. Resistive random access memory (RRAM)-based CIM has the advantages of high computing density, non-volatility, and high energy efficiency. However, previous CIM research has predominantly focused on realizing high energy efficiency and high area efficiency for inference, while little attention has been devoted to the challenges of on-chip programming speed, power consumption, and accuracy. In this paper, a fabricated 28 nm 576K RRAM-based CIM macro featuring optimized on-chip programming schemes is proposed to address these issues. Different strategies for mapping weights to RRAM arrays are compared, and a novel direct-current ADC is designed for both the programming and inference stages. Utilizing the optimized hybrid programming scheme, 4.67× programming speed, 0.15× power consumption, and a 4.31× more compact weight distribution are realized. Besides, this macro achieves a normalized area efficiency of 2.82 TOPS/mm^2 and a normalized energy efficiency of 35.6 TOPS/W.
Keywords: computing-in-memory, on-chip programming scheme, hybrid programming, resistive random access memory, matrix-vector multiplication acceleration
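The matrix-vector multiplication such a macro accelerates can be sketched as an idealized crossbar read: weights are stored as cell conductances, inputs are applied as word-line voltages, and each bit line sums currents. The values below are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

# Idealized RRAM crossbar MVM: weights stored as conductances (siemens),
# inputs applied as word-line voltages, outputs read as bit-line currents.
# All values here are illustrative assumptions, not from the paper.
G = np.array([[1e-6, 5e-6],
              [2e-6, 1e-6],
              [4e-6, 3e-6]])   # 3x2 conductance matrix (one weight per cell)
V = np.array([0.2, 0.1])       # input voltages on the two word lines

I = G @ V                      # Kirchhoff current summation per bit line
print(I)                       # one output current per row of weights
```

In a real macro these analog currents are then digitized by the on-chip ADCs; the hybrid programming scheme in the paper targets how accurately G can be written.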
2. Analog-to-Digital Converter Design for Diverse-Performance Computing-in-Memory Systems: A Comprehensive Review
Authors: Shuai Xiao, Fuyi Li, Ting Hao, Lanxiang Xiao, Manlin Xiao, Wei Mao, Genquan Han. Integrated Circuits and Systems, 2025, Issue 2, pp. 81-92 (12 pages)
Computing-in-Memory (CIM) architectures have emerged as a pivotal technology for next-generation artificial intelligence (AI) and edge computing applications. By enabling computations directly within memory cells, CIM architectures effectively minimize data movement and significantly enhance energy efficiency. In a CIM system, the analog-to-digital converter (ADC) bridges the gap between efficient analog computation and general digital processing, while influencing the overall accuracy, speed, and energy efficiency of the system. This review presents theoretical analyses and practical case studies on the performance requirements of ADCs and their optimization methods in CIM systems, aiming to provide ideas and references for the design and optimization of CIM systems. The review comprehensively explores the relationship between CIM architecture design and ADC optimization, and raises the issue of design trade-offs among low power consumption, high-speed operation, and compact integration. On this basis, novel customized ADC optimization methods are discussed in depth, and a large number of current CIM systems and their ADC optimization examples are reviewed, with optimization methods summarized and classified in terms of power consumption, speed, and area. In the final part, this review analyzes energy efficiency, ENOB, and frequency scaling trends, demonstrating how advanced processes enable ADCs to balance speed, power, and area trade-offs, guiding ADC optimization for next-generation CIM systems.
Keywords: analog-to-digital converter, area optimization, computing-in-memory, edge computing, energy efficiency, high speed, low power, neural networks
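The ENOB metric the review tracks is conventionally derived from the measured signal-to-noise-and-distortion ratio via the standard quantization-noise relation; this sketch shows only that textbook formula, not any design from the review.

```python
# Standard ADC effective-number-of-bits relation used when comparing ADCs:
# ENOB = (SNDR_dB - 1.76) / 6.02, where 1.76 dB and 6.02 dB/bit come from
# the quantization-noise model of an ideal N-bit converter.
def enob(sndr_db: float) -> float:
    return (sndr_db - 1.76) / 6.02

# An ideal 8-bit ADC has SNDR = 6.02 * 8 + 1.76 = 49.92 dB:
print(round(enob(49.92), 2))  # -> 8.0
```

Real CIM ADCs trade some ENOB for the power and area savings the review classifies, which is why the effective resolution rather than the nominal bit-width is the figure of merit.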
3. A High-Resistance SOT Device Based Computing-in-Memory Macro With High Sensing Margin and Multi-Bit MAC Operations for AI Edge Inference
Authors: Junzhan Liu, Jinyao Mi, Yang Liu, Liang Zhang, He Zhang, Wang Kang. Integrated Circuits and Systems, 2025, Issue 3, pp. 102-109 (8 pages)
Computing-in-memory (CIM) offers a promising solution to the memory-wall issue. Magnetoresistive random-access memory (MRAM) is a favored medium for CIM due to its non-volatility, high speed, low power, and technology maturity. However, MRAM has continuously faced the challenge of an insufficient high-resistance-state (HRS) to low-resistance-state (LRS) ratio, which affects the result accuracy of CIM. In this paper, based on SOT devices, we propose a 5T2M bit-cell structure that increases the high-to-low current ratio by modulating the sub-threshold operation region. Besides, by jointly using high-resistance devices (MΩ level), the power consumption of the bit-cell array can be significantly reduced. Simultaneously, we have designed a compatible multi-bit implementation and macro architecture to support AI edge inference acceleration. This work was simulated under a 40 nm foundry process with a physically verified SOT-MTJ model. The results show that, under the same high-to-low resistance ratio, a 52.6× high-to-low current ratio can be achieved, along with a 38.6%-98% bit-cell array power reduction.
Keywords: computing-in-memory, SOT-MRAM, HRS/LRS ratio, multi-bit, artificial intelligence
4. Experimental Realization of Physical Unclonable Function Chip Utilizing Spintronic Memories
Authors: Xiuye Zhang, Chuanpeng Jiang, Jialiang Yin, Daoqian Zhu, Shiqi Wang, Sai Li, Zhongxiang Zhang, Ao Du, Wenlong Cai, Hongxi Liu, Kewen Shi, Kaihua Cao, Zhaohao Wang, Weisheng Zhao. Engineering, 2025, Issue 6, pp. 141-148 (8 pages)
In recent years, the physical unclonable function (PUF) has emerged as a lightweight solution for Internet of Things security. However, conventional PUFs based on complementary metal-oxide-semiconductor (CMOS) technology present challenges such as insufficient randomness, significant power and area overhead, and vulnerability to environmental factors, leading to reduced reliability. In this study, we realize a strong, highly reliable, and reconfigurable PUF with resistance against machine-learning attacks in a 1 kb spin-orbit torque magnetic random access memory fabricated using a 180 nm CMOS process. This strong PUF achieves a challenge-response pair capacity of 10^9 through a computing-in-memory approach. The results demonstrate that the proposed PUF exhibits near-ideal performance metrics: 50.07% uniformity, 50% diffuseness, 49.89% uniqueness, and a bit error rate of 0%, even in a 375 K environment. The reconfigurability of the PUF is demonstrated by a reconfigurable Hamming distance of 49.31% and a correlation coefficient of less than 0.2, making it difficult to extract output keys through side-channel analysis. Furthermore, resistance to machine-learning modeling attacks is confirmed by a prediction accuracy of approximately 50% on the test set.
Keywords: physical unclonable function, spin-orbit torque magnetic random access memory, computing-in-memory, reconfigurability, machine-learning attack
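The uniformity, uniqueness, and bit-error-rate figures quoted in the abstract follow standard PUF definitions; the toy computation below applies those definitions to random stand-in responses, not to measured data from the chip.

```python
import numpy as np

# Toy illustration of the standard PUF metrics quoted in the abstract.
# The responses below are random stand-ins, not measured data.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(8, 1024))  # 8 chips x 1024-bit responses

# Uniformity: fraction of 1s in a single chip's response (ideal: 50%).
uniformity = responses.mean(axis=1) * 100

# Uniqueness: mean inter-chip Hamming distance over all chip pairs (ideal: 50%).
hds = [np.mean(responses[i] != responses[j]) * 100
       for i in range(len(responses)) for j in range(i + 1, len(responses))]
uniqueness = np.mean(hds)

# Bit error rate: intra-chip Hamming distance between repeated reads (ideal: 0%).
reread = responses[0].copy()          # a perfectly stable re-read of chip 0
ber = np.mean(responses[0] != reread) * 100

print(uniqueness, ber)
```

A hardware evaluation replaces the random arrays with responses read from multiple dies and repeated reads of the same die under temperature and voltage stress.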
5. Optimized operation scheme of flash-memory-based neural network online training with ultra-high endurance (Cited by: 1)
Authors: Yang Feng, Zhaohui Sun, Yueran Qi, Xuepeng Zhan, Junyu Zhang, Jing Liu, Masaharu Kobayashi, Jixuan Wu, Jiezhi Chen. Journal of Semiconductors (EI, CAS, CSCD indexed), 2024, Issue 1, pp. 33-37 (5 pages)
With the rapid development of machine learning, the demand for highly efficient computing becomes more and more urgent. To break the bottleneck of the traditional von Neumann architecture, computing-in-memory (CIM) has attracted increasing attention in recent years. In this work, to provide a feasible CIM solution for large-scale neural networks (NNs) requiring continuous weight updating in online training, a flash-based computing-in-memory with high endurance (10^9 cycles) and ultrafast programming speed is investigated. On the one hand, the proposed programming scheme of channel hot electron injection (CHEI) and hot hole injection (HHI) demonstrates high linearity and symmetric potentiation and depression processes, which help to improve the training speed and accuracy. On the other hand, the low-damage programming scheme and memory window (MW) optimizations can suppress cell degradation effectively with improved computing accuracy. Even after 10^9 cycles, the leakage current (I_off) of cells remains below 10 pA, ensuring the large-scale computing ability of the memory. Further characterizations of read disturb demonstrate its robust reliability. By processing CIFAR-10 tasks, it is evident that ~90% accuracy can be achieved after 10^9 cycles in both ResNet50 and VGG16 NNs. Our results suggest that flash-based CIM has great potential to overcome the limitations of traditional von Neumann architectures and enable high-performance NN online training, which paves the way for further development of artificial intelligence (AI) accelerators.
Keywords: NOR flash memory, computing-in-memory, endurance, neural network online training
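The update linearity the CHEI/HHI scheme targets is often studied with a behavioral conductance-update model; the exponential form and nonlinearity parameter A below are a common modeling convention and an assumption, not taken from the paper.

```python
import math

# A common behavioral model for conductance potentiation, used to study how
# update linearity affects training accuracy. The nonlinearity parameter A
# is an assumed value; it does not come from the paper.
def conductance(pulse, n_pulses=64, g_min=0.0, g_max=1.0, A=20.0):
    """Conductance after `pulse` identical potentiation pulses out of n_pulses."""
    num = 1.0 - math.exp(-pulse / A)
    den = 1.0 - math.exp(-n_pulses / A)
    return g_min + (g_max - g_min) * num / den

# Perfectly linear updates would give conductance(32) == 0.5 for 64 pulses;
# the deviation from 0.5 quantifies the nonlinearity that a well-tuned
# potentiation/depression scheme tries to minimize.
midpoint = conductance(32)
print(round(midpoint, 3))
```

With a symmetric depression curve of the same form, the weight-update asymmetry (and hence the training-accuracy penalty) shrinks as A grows toward the linear limit.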
6. Hierarchical processing module design based on two-dimensional ferroelectric semiconductors and its application in low-power, high-performance artificial vision systems (Cited by: 1)
Authors: 吴广成, 向立, 王文强, 姚程栋, 颜泽毅, 张成, 吴家鑫, 刘勇, 郑弼元, 刘华伟, 胡城伟, 孙兴霞, 朱晨光, 王一喆, 熊雄, 吴燕庆, 高亮, 李东, 潘安练, 李晟曼. Science Bulletin (SCIE, EI, CAS, CSCD indexed), 2024, Issue 4, pp. 473-482 (10 pages)
The growth of data and the Internet of Things challenges traditional hardware, which encounters efficiency and power issues owing to separate functional units for sensors, memory, and computation. In this study, we designed an α-phase indium selenide (α-In₂Se₃) transistor, with this two-dimensional ferroelectric semiconductor as the channel material, to create artificial optic-neural and electro-neural synapses, enabling cutting-edge processing-in-sensor (PIS) and computing-in-memory (CIM) functionalities. As an optic-neural synapse for low-level sensory processing, the α-In₂Se₃ transistor exhibits a high photoresponsivity (2855 A/W) and detectivity (2.91×10^14 Jones), facilitating efficient feature extraction. For high-level processing tasks as an electro-neural synapse, it offers a fast program/erase speed of 40 ns/50 μs and ultralow energy consumption of 0.37 aJ/spike. An AI vision system using α-In₂Se₃ transistors has been demonstrated. It achieved an impressive recognition accuracy of 92.63% within 12 epochs owing to the synergistic combination of the PIS and CIM functionalities. This study demonstrates the potential of the α-In₂Se₃ transistor in future vision hardware, enhancing processing, power efficiency, and AI applications.
Keywords: two-dimensional ferroelectric semiconductor, processing-in-sensor, computing-in-memory, synaptic device, artificial-intelligence vision system
7. BASER: Bit-Wise Approximate Compressor Configurable In-SRAM-Computing for Energy-Efficient Neural Network Acceleration With Data-Aware Weight Remapping Method
Authors: Shunqin Cai, Liukai Xu, Dengfeng Wang, Zhi Li, Weikang Qian, Liang Chang, Yanan Sun. Integrated Circuits and Systems, 2024, Issue 2, pp. 80-91 (12 pages)
SRAM-based computing-in-memory (SRAM-CIM) is expected to solve the "memory wall" problem. For digital-domain SRAM-CIM, full-precision digital logic has been utilized to achieve high computational accuracy. However, the energy- and area-efficiency advantages of CIM cannot be fully exploited under error-resilient neural networks (NNs) with a given quantization bit-width. Therefore, an all-digital Bit-wise Approximate compressor configurable In-SRAM-computing macro for Energy-efficient NN acceleration, with a data-aware weight Remapping method (BASER), is proposed in this paper. Leveraging the error-resilience property of NNs, six energy-efficient bit-wise compressor configurations are presented under 4b/4b and 3b/3b NN quantization, respectively. Concurrently, a data-aware weight remapping approach is proposed to further enhance NN accuracy without supplementary retraining. Evaluations of VGG-9 and ResNet-18 on the CIFAR-10 and CIFAR-100 datasets show that the proposed BASER achieves 1.35× and 1.29× improvements in energy efficiency, with limited accuracy loss and improved NN accuracy, compared to previous full-precision and approximate SRAM-CIM designs, respectively.
Keywords: approximate computing, bit-wise configuration, computing-in-memory, static random-access memory, weight remapping
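What "approximate compressor" means here can be illustrated at the bit level: a 4:2-style compressor reduces four partial-product bits to a sum bit and a carry bit, accepting occasional arithmetic error in exchange for simpler logic. The gate equations below are a generic textbook-style approximation, not BASER's actual six configurations.

```python
from itertools import product

# Generic illustration of an approximate 4:2 compressor: it compresses four
# partial-product bits into a sum bit and a carry bit (weight 2), accepting
# occasional arithmetic error. These gate equations are a common textbook-style
# approximation, NOT the specific configurations used in BASER.
def approx_compress(x1, x2, x3, x4):
    s = (x1 ^ x2) | (x3 ^ x4)        # approximate sum bit
    c = (x1 & x2) | (x3 & x4)        # approximate carry bit
    return s + 2 * c                 # value the compressor reports

# Compare against the exact bit count over all 16 input patterns.
errors = sum(approx_compress(*bits) != sum(bits)
             for bits in product((0, 1), repeat=4))
print(f"{errors}/16 input patterns are inexact")
```

The design question BASER addresses is which such configurations keep these per-pattern errors rare or small enough that NN accuracy survives, while the simplified logic cuts energy.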