Funding: supported by the Project of the State Grid Corporation of China in 2022 (No. 5700-201941501A-0-0-00) and the National Natural Science Foundation of China (No. U21B2031).
Abstract: With the rapid development and popularization of artificial intelligence technology, convolutional neural networks (CNNs) have been applied in many fields, where they are replacing many traditional algorithms and are gradually being deployed on terminal devices. However, the massive data movement and computational complexity of CNNs pose severe power-consumption and performance challenges to the hardware, which hinders the application of CNNs in embedded devices such as smartphones and smart cars. This paper implements a CNN accelerator based on the Winograd convolution algorithm on a field-programmable gate array (FPGA). First, a convolution kernel decomposition method for Winograd convolution is proposed: kernels larger than 3×3 are divided into multiple 3×3 kernels for the convolution operation, and the resulting sub-convolutions of different lengths are processed asynchronously. Then, we design a Winograd convolution array and use a configurable multiplier to flexibly perform multiplication on data of different precisions. Experimental results on the VGG16 and AlexNet networks show that our accelerator achieves the highest energy efficiency: 101 times that of the CPU and 5.8 times that of the GPU. It also achieves higher energy efficiency than other CNN accelerators.
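The two ideas in this abstract, Winograd fast convolution and decomposing a large kernel into 3×3 pieces, can be illustrated in a minimal 1-D sketch. The code below is an illustrative assumption, not the paper's implementation: it uses the standard Winograd F(2,3) transform (two outputs of a 3-tap filter from four multiplies instead of six) and splits a 5-tap kernel, zero-padded to length 6, into two 3-tap sub-kernels whose shifted partial results are summed. All function names are hypothetical.

```python
def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap filter g over a 4-sample
    input tile d, using 4 multiplications instead of the direct method's 6."""
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    return [m1 + m2 + m3, m2 - m3 - m4]

def conv3_winograd(x, g):
    """Valid 1-D convolution with a 3-tap filter, two outputs per tile."""
    n_out = len(x) - 2
    xp = list(x) + [0.0]                # slack so the last tile is always full
    y = []
    for i in range(0, n_out, 2):
        y.extend(winograd_f23(xp[i:i + 4], g))
    return y[:n_out]

def conv5_decomposed(x, k):
    """5-tap convolution via kernel decomposition: pad the kernel to length 6,
    split it into two 3-tap sub-kernels, run each on a correspondingly shifted
    input, and accumulate the partial results."""
    assert len(k) == 5
    k6 = list(k) + [0.0]                # zero-pad kernel to a multiple of 3
    xp = list(x) + [0.0, 0.0]           # pad input so shifted tiles stay in range
    n_out = len(x) - 4
    y = [0.0] * n_out
    for s in range(2):                  # two 3-tap sub-kernels
        g = k6[3 * s: 3 * s + 3]
        part = conv3_winograd(xp[3 * s:], g)
        for i in range(n_out):
            y[i] += part[i]
    return y
```

The same decomposition carries over to 2-D: a kernel larger than 3×3 is zero-padded to a multiple of 3 in each dimension, tiled into 3×3 sub-kernels, and each sub-kernel's partial output (computed from a shifted input window) is accumulated into the final feature map.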
Funding: supported by the National Natural Science Foundation of China (61871132, 62171135).
Abstract: To tackle the challenge of deploying convolutional neural networks (CNNs) on field-programmable gate arrays (FPGAs) given their computational complexity, a high-performance CNN hardware accelerator was designed in the Verilog hardware description language. It uses a pipelined architecture with three parallel dimensions: input channels, output channels, and convolution kernels. First, two multiply-accumulate (MAC) operations were packed into one digital signal processing (DSP) block of the FPGA to double the computation rate of the accelerator. Second, feature-map block partitioning and a special memory arrangement were proposed to optimize the total amount of off-chip memory access and reduce the pressure on FPGA bandwidth. Finally, an efficient computational array combining a multiply-add tree with the Winograd fast convolution algorithm was designed to balance hardware resource consumption and computational performance. The highly parallel CNN accelerator was deployed on an Alinx ZU3EG board, with the YOLOv3-tiny algorithm as the test workload. The average computing performance of the accelerator is 127.5 giga operations per second (GOPS). The experimental results show that the hardware architecture effectively improves the computational power of CNNs and outperforms other existing schemes in power consumption and in the efficiency of DSPs and block random access memories (BRAMs).
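The DSP-packing step described above relies on a well-known bit-field trick: a wide hardware multiplier (e.g. a 27×18 DSP slice) can compute two narrow products at once if the two operands are placed in disjoint bit fields of one input port. The sketch below is a hedged software emulation under assumed parameters (unsigned 8-bit operands sharing one weight, an 18-bit field shift); the paper's exact operand widths and signedness handling are not specified here, and signed inputs would additionally require a correction term for the borrow between fields.

```python
def packed_dual_multiply(a0, a1, w, shift=18):
    """Emulate packing two unsigned 8-bit multiplications into one wide
    multiply, the trick used to double a DSP block's throughput. Because an
    8x8 product fits in 16 bits (< shift), the two products land in disjoint
    bit fields of the single wide result and can be sliced back out."""
    assert 0 <= a0 < 256 and 0 <= a1 < 256 and 0 <= w < 256
    packed = (a1 << shift) + a0        # one multiplier port carries both operands
    product = packed * w               # single hardware multiplication
    p0 = product & ((1 << shift) - 1)  # low field:  a0 * w
    p1 = product >> shift              # high field: a1 * w
    return p0, p1
```

In an accelerator, the two packed activations typically belong to adjacent output pixels or channels that share the same weight, so one DSP issue produces two MAC partial products per cycle.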