Computing-in-memory (CIM) has been a promising candidate for artificial intelligence applications thanks to the absence of data transfer between computation and storage blocks. Resistive random access memory (RRAM)-based CIM has the advantages of high computing density, non-volatility, and high energy efficiency. However, previous CIM research has predominantly focused on realizing high energy efficiency and high area efficiency for inference, while little attention has been devoted to the challenges of on-chip programming speed, power consumption, and accuracy. In this paper, a fabricated 28 nm 576K RRAM-based CIM macro featuring optimized on-chip programming schemes is proposed to address these issues. Different strategies for mapping weights to RRAM arrays are compared, and a novel direct-current ADC is designed for both the programming and inference stages. With the optimized hybrid programming scheme, a 4.67× programming speedup, a reduction of programming power to 0.15×, and a 4.31× more compact weight distribution are realized. In addition, this macro achieves a normalized area efficiency of 2.82 TOPS/mm² and a normalized energy efficiency of 35.6 TOPS/W.
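The abstract's core operation — mapping signed weights onto RRAM arrays and computing matrix-vector products in the analog domain — can be illustrated with a minimal NumPy sketch. This uses a generic differential-pair mapping (positive and negative conductance columns), which is one common strategy but not necessarily the specific scheme compared in the paper; the conductance range and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conductance window of one RRAM cell (arbitrary units).
G_MIN, G_MAX = 1.0, 10.0

def map_differential(W):
    """Map a signed weight matrix onto two conductance arrays (G+, G-).

    Positive weights go to the G+ device, negative weights to G-; the
    effective weight is proportional to (G+ - G-).  Generic differential
    mapping for illustration, not the macro's actual scheme.
    """
    scale = (G_MAX - G_MIN) / np.abs(W).max()
    G_pos = G_MIN + scale * np.clip(W, 0, None)
    G_neg = G_MIN + scale * np.clip(-W, 0, None)
    return G_pos, G_neg, scale

def analog_mvm(G_pos, G_neg, v, scale):
    """Matrix-vector multiply as a difference of summed bit-line currents.

    Word-line voltages v drive every row; each bit line sums I = G * V
    by Kirchhoff's current law, and the ADC digitizes the difference.
    """
    i_pos = G_pos.T @ v  # current summed on each positive bit line
    i_neg = G_neg.T @ v  # current summed on each negative bit line
    return (i_pos - i_neg) / scale

W = rng.standard_normal((4, 3))
v = rng.standard_normal(4)
G_pos, G_neg, scale = map_differential(W)
out = analog_mvm(G_pos, G_neg, v, scale)
assert np.allclose(out, W.T @ v)  # analog result matches the digital MVM
```

Note that the fixed G_MIN offset cancels in the bit-line current subtraction, which is why the differential pair recovers the signed weight exactly in this idealized (noise-free) model.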
Funding: Supported in part by the National Natural Science Foundation of China (62422405, 62025111, 62495100, 92464302), the STI 2030-Major Projects (2021ZD0201200), the Shanghai Municipal Science and Technology Major Project, and the Beijing Advanced Innovation Center for Integrated Circuits.
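On-chip programming of RRAM is typically done with a closed-loop write-verify procedure: read the cell, compare to the target conductance, apply a SET or RESET pulse, and repeat. The paper's hybrid scheme optimizes this loop's speed and power, but its internal policy is not detailed in the abstract, so the sketch below shows only the generic write-verify skeleton; the cell model, tolerance, and pulse policy are all assumptions for illustration.

```python
import numpy as np

def write_verify(target_g, read_cell, pulse_cell, tol=0.15, max_pulses=20):
    """Iteratively program one RRAM cell toward target_g.

    read_cell() returns the current conductance; pulse_cell(sign) applies
    one SET (+1) or RESET (-1) pulse.  The loop stops when the read value
    is within tol of the target or the pulse budget is exhausted.  A
    fixed-amplitude pulse policy is used here purely for illustration.
    """
    for n in range(max_pulses):
        g = read_cell()
        err = target_g - g
        if abs(err) <= tol:
            return n, g  # converged: pulses used, final conductance
        pulse_cell(1 if err > 0 else -1)
    return max_pulses, read_cell()

# Toy cell model: each pulse moves conductance one noisy step.
class ToyCell:
    def __init__(self, g0=2.0, step=0.2, seed=1):
        self.g = g0
        self.step = step
        self.rng = np.random.default_rng(seed)

    def read(self):
        return self.g

    def pulse(self, sign):
        # Cycle-to-cycle variation modeled as 10% Gaussian noise per pulse.
        self.g += sign * self.step * (1 + 0.1 * self.rng.standard_normal())

cell = ToyCell()
pulses, final_g = write_verify(5.0, cell.read, cell.pulse)
assert abs(final_g - 5.0) <= 0.15  # verified within tolerance
```

Fewer verify iterations per cell directly translate into the programming-speed and power gains the abstract reports, since each iteration costs a read plus a high-voltage pulse.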