
An Improved Attention Mechanism Algorithm Model and Hardware Acceleration Design Method

Cited by: 4
Abstract: Aiming at the unavoidable increase in computation and latency when attention mechanisms are applied in convolutional neural networks, this paper proposes an optimized CBAM (Convolutional Block Attention Module) algorithm model and implements it in hardware. Based on the traditional CBAM model structure, we analyze the latent problems hidden inside the algorithm and design an algorithm model that better fits the original intention of extracting attention importance parameters. At the same time, by optimizing the calculation process, the amount of data computation is reduced and operators are combined for maximum parallelism. Taking advantage of the FPGA's (Field Programmable Gate Array) ability to implement efficient and flexible parallel arrays, we design a hardware acceleration engine structure for the improved CBAM algorithm. Experimental results show that, compared with the traditional CBAM mechanism, the improved attention mechanism maintains almost the same accuracy as the original algorithm model. The hardware acceleration engine deployed on the FPGA performs inference experiments at a working frequency of 180 MHz. Analysis shows that, with the same hardware resources, the proposed design achieves a 10.2% speedup for the attention mechanism circuit and a 4.5% inference speedup for the VGG16 network model.
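The CBAM structure the abstract refers to (a channel-attention stage followed by a spatial-attention stage, each driven by paired average- and max-pooling) can be sketched in NumPy. This is a minimal illustration of the standard CBAM computation only, not the paper's optimized variant or its FPGA mapping; the 7×7 convolution of the original spatial branch is replaced here by a simple element-wise mean, and the MLP weight shapes are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f, w0, w1):
    # f: feature map (C, H, W); w0: (C//r, C), w1: (C, C//r) are the
    # shared-MLP weights (reduction ratio r is an illustrative choice).
    avg = f.mean(axis=(1, 2))                    # global average pool -> (C,)
    mx = f.max(axis=(1, 2))                      # global max pool     -> (C,)
    mlp = lambda v: w1 @ np.maximum(w0 @ v, 0.0)  # two-layer MLP with ReLU
    return sigmoid(mlp(avg) + mlp(mx))           # per-channel weights in (0, 1)

def spatial_attention(f):
    # Pool across the channel axis; real CBAM applies a 7x7 convolution
    # to the stacked [avg; max] maps -- a plain mean stands in for it here.
    avg = f.mean(axis=0)                         # (H, W)
    mx = f.max(axis=0)                           # (H, W)
    return sigmoid((avg + mx) / 2.0)             # per-pixel weights in (0, 1)

def cbam(f, w0, w1):
    # Channel attention first, then spatial attention, as in standard CBAM.
    f = f * channel_attention(f, w0, w1)[:, None, None]
    return f * spatial_attention(f)[None, :, :]
```

Because both attention maps are sigmoid outputs in (0, 1), the module only re-weights the input feature map; its output shape matches the input, which is what lets it be inserted after any convolutional block.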
Authors: WANG Ying; WANG Jing; GAO Lan; LYU Xu; ZHANG Wei-gong (College of Information Engineering, Capital Normal University, Beijing 100048, China; School of Mathematical Science, Capital Normal University, Beijing 100048, China)
Source: Acta Electronica Sinica (电子学报), indexed in EI / CAS / CSCD (Peking University core list), 2023, No. 4, pp. 1021-1029 (9 pages)
Funding: National Natural Science Foundation of China, General Program (No. 62076168)
Keywords: attention mechanism; CBAM; convolutional neural network; FPGA; hardware accelerator