Journal Articles (2 results)
Cambricon-QR: a sparse and bitwise reproducible quantized training accelerator
Authors: LI Nan, ZHAO Yongwei, ZHI Tian, LIU Chang, DU Zidong, HU Xing, LI Wei, ZHANG Xishan, LI Ling, SUN Guangzhong. High Technology Letters (EI, CAS), 2024, No. 1, pp. 52-60 (9 pages)
Quantized training has been proven to be a prominent method for achieving deep neural network training under limited computational resources. It uses low bit-width arithmetic with a proper scaling factor to achieve negligible accuracy loss. Cambricon-Q is the ASIC design proposed to efficiently support quantized training, and it achieves significant performance improvement. However, there are still two caveats in the design. First, Cambricon-Q with different hardware specifications may produce different numerical errors, resulting in non-reproducible behaviors, which may become a major concern in critical applications. Second, Cambricon-Q cannot leverage data sparsity, so considerable cycles could still be squeezed out. To address these caveats, the acceleration core of Cambricon-Q is redesigned to support fine-grained irregular data processing. The new design not only enables acceleration on sparse data, but also enables performing local dynamic quantization by contiguous value ranges (which is hardware independent) instead of contiguous addresses (which depends on hardware factors). Experimental results show that the accuracy loss of the method remains negligible, while the accelerator achieves a 1.61× performance improvement over Cambricon-Q, with about a 10% energy increase.
Keywords: quantized training; sparse accelerator; Cambricon-QR
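To illustrate the idea of local dynamic quantization by value ranges rather than by contiguous addresses, here is a minimal NumPy sketch. This is not the paper's hardware design; the group count, bit width, and symmetric per-group scaling are illustrative assumptions.

```python
import numpy as np

def quantize_group(x, bits=8):
    # Symmetric dynamic quantization: the scale is derived from the
    # group's maximum magnitude (the "proper scaling factor").
    qmax = 2 ** (bits - 1) - 1
    peak = np.max(np.abs(x))
    scale = peak / qmax if peak > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def local_dynamic_quantize(x, bits=8, n_groups=4):
    # Illustrative sketch: partition values into groups by sorted value
    # range (hardware independent) instead of by contiguous addresses,
    # then quantize and dequantize each group with its own scale.
    order = np.argsort(x)
    out = np.empty_like(x)
    for idx in np.array_split(order, n_groups):
        q, scale = quantize_group(x[idx], bits)
        out[idx] = q.astype(np.float64) * scale
    return out
```

Grouping by value range keeps each group's dynamic range tight, so a single scale per group loses less precision than one scale over an address-contiguous block with mixed magnitudes.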
An accelerated augmented Lagrangian method for linearly constrained convex programming with the rate of convergence O(1/k^2) (Cited: 1)
Authors: KE Yi-fen, MA Chang-feng. Applied Mathematics (A Journal of Chinese Universities) (SCIE, CSCD), 2017, No. 1, pp. 117-126 (10 pages)
In this paper, we propose and analyze an accelerated augmented Lagrangian method (denoted by AALM) for solving linearly constrained convex programming. We show that the convergence rate of AALM is O(1/k^2), while the convergence rate of the classical augmented Lagrangian method (ALM) is O(1/k). Numerical experiments on the linearly constrained ℓ1-ℓ2 minimization problem are presented to demonstrate the effectiveness of AALM.
Keywords: convex; augmented; constrained; minimization; accelerated; Lagrangian; linearly; iteration; sparse; stopping
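A minimal NumPy sketch of an accelerated augmented Lagrangian scheme for min f(x) s.t. Ax = b. This is not the paper's AALM; it is a generic variant with Nesterov-type extrapolation on the multipliers and an inexact gradient-descent inner solve, with the step sizes, penalty `rho`, and iteration counts chosen for illustration.

```python
import numpy as np

def aalm(grad_f, A, b, x0, rho=1.0, lr=0.1, outer=200, inner=50):
    # Accelerated augmented Lagrangian sketch: each outer step minimizes
    # the augmented Lagrangian in x (inexactly, by gradient descent),
    # updates the multipliers, and applies momentum-style extrapolation.
    x = x0.astype(float).copy()
    lam = np.zeros(A.shape[0])      # extrapolated multipliers
    lam_prev = lam.copy()
    t = 1.0
    for _ in range(outer):
        # Inexact primal step on L_rho(x, lam) = f(x) + lam'(Ax-b) + rho/2 ||Ax-b||^2
        for _ in range(inner):
            g = grad_f(x) + A.T @ (lam + rho * (A @ x - b))
            x -= lr * g
        lam_new = lam + rho * (A @ x - b)           # dual ascent step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        lam = lam_new + ((t - 1.0) / t_new) * (lam_new - lam_prev)
        lam_prev, t = lam_new, t_new
    return x

# Example: min ||x||^2 subject to sum(x) = 1; the minimizer is x_i = 1/n.
A = np.ones((1, 4))
b = np.array([1.0])
x_star = aalm(lambda v: 2.0 * v, A, b, np.zeros(4))
```

The extrapolation coefficient (t_k - 1)/t_{k+1} is the standard Nesterov/FISTA sequence; applying it to the multipliers is what distinguishes this from the classical O(1/k) ALM update.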