Abstract: Sparse matrix-vector multiplication (SpMV) is a core operation in numerical computing, widely used in scientific computing, engineering simulation, and machine learning. SpMV performance optimization is limited mainly by irregular sparsity patterns, and traditional optimization typically relies on manually designed storage formats, computation strategies, and memory access patterns. Existing tensor compilers such as TACO and TVM can generate high-performance operators through a domain-specific language (DSL), relieving developers of tedious manual optimization, but their support for optimizing sparse computation is still insufficient, and they struggle to adapt performance to different sparsity patterns. To address these problems, a sparse compilation framework named SparseMode is proposed, which generates efficient vectorized code for SpMV according to the sparsity pattern of the matrix and adaptively adjusts its optimization strategy to the characteristics of the hardware platform. The framework first provides a domain-specific language, SpMV-DSL, which expresses sparse matrices and SpMV operations concisely and efficiently. It then introduces a sparsity-pattern-aware method that dynamically selects computation strategies according to the matrix storage format and non-zero distribution defined in SpMV-DSL. Finally, it generates efficient parallel SpMV operator code through sparsity pattern analysis and schedule optimization, making full use of SIMD instructions to improve performance. SpMV experiments on different hardware platforms show that the operator code generated by SparseMode achieves speedups of up to 2.44x over the existing TACO and TVM tensor compilers.
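The abstract does not give SpMV-DSL syntax, so the sketch below only shows the underlying computation such a generated operator must implement: a plain CSR traversal of y = Ax, which a pattern-aware compiler would then specialize and vectorize per format and platform. All identifiers here are illustrative, not part of SparseMode.

```cuda
#include <cstdio>

// Reference CSR SpMV: the computation SpMV-DSL expresses and that a
// SparseMode-like compiler would specialize per sparsity pattern.
void spmv_csr_reference(int n_rows, const int *row_ptr, const int *col_idx,
                        const double *vals, const double *x, double *y) {
    for (int i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        // Inner loop over the non-zeros of row i; this is the loop a
        // vectorizing compiler maps onto SIMD lanes.
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
            sum += vals[k] * x[col_idx[k]];
        y[i] = sum;
    }
}

int main() {
    // 3x3 example matrix [[4,0,1],[0,2,0],[3,0,5]] in CSR form.
    int row_ptr[] = {0, 2, 3, 5};
    int col_idx[] = {0, 2, 1, 0, 2};
    double vals[] = {4, 1, 2, 3, 5};
    double x[] = {1, 1, 1}, y[3];
    spmv_csr_reference(3, row_ptr, col_idx, vals, x, y);
    printf("%g %g %g\n", y[0], y[1], y[2]);  // expected: 5 2 8
    return 0;
}
```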
Abstract: Sparse matrix-vector multiplication (SpMV) is unavoidable in almost every kind of scientific computation, such as iterative methods for solving linear systems and eigenvalue problems. With the emergence and development of Graphics Processing Units (GPUs), highly efficient storage formats for SpMV need to be constructed, since SpMV performance is determined mainly by the storage format of the sparse matrix. Based on the idea of the JAD format, this paper improves the ELLPACK-R format to reduce the waiting time between different threads in a warp, achieving a speedup of about 1.5 in our experiments. Compared with other formats such as CSR, ELL, and BiELL, our format gives the best SpMV performance on over 70 percent of the test matrices. We also propose a parameter-based method to analyze how different formats affect performance, and construct a formula to count the amount of computation and the number of iterations.
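For reference, the standard ELLPACK-R kernel from the literature (the baseline this paper improves, not the improved variant itself) looks roughly like the sketch below; identifiers are ours. Sorting rows by decreasing length, as in the JAD format, keeps row lengths nearly uniform inside a warp, which is what shortens the time short rows spend waiting for long ones.

```cuda
// Minimal ELLPACK-R SpMV kernel, one thread per row. ell_val/ell_col are
// stored column-major (n_rows entries per ELL column) so loads coalesce;
// row_len[i] holds the true non-zero count of row i, letting a thread
// stop early instead of scanning padded zeros.
__global__ void spmv_ellpack_r(int n_rows,
                               const int *__restrict__ ell_col,
                               const double *__restrict__ ell_val,
                               const int *__restrict__ row_len,
                               const double *__restrict__ x,
                               double *__restrict__ y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    double sum = 0.0;
    for (int k = 0; k < row_len[row]; ++k) {
        // Column-major indexing: element k of this row sits at k*n_rows + row.
        sum += ell_val[k * n_rows + row] * x[ell_col[k * n_rows + row]];
    }
    y[row] = sum;
}
```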
Abstract: Sparse matrix-vector multiplication (SpMV) is an important kernel in scientific and engineering computing, but on current computing platforms with hierarchical memory, SpMV with the traditional CSR (Compressed Sparse Row) format performs poorly, often running at well below 10% of the hardware's peak floating-point rate. Most current processor architectures provide SIMD vectorization for acceleration, but CSR-based SpMV cannot be vectorized directly because of its irregular memory accesses. To exploit SIMD, a new sparse matrix storage format, CSRL (Compressed Sparse Row with Local information), is proposed for sparse matrices with locality: it reduces the number of memory accesses during SpMV and lets the hardware's SIMD vector units be fully used for both loads and computation, improving SpMV performance. Experiments show that, compared with the well-known commercial library Intel MKL 10.3, this method improves performance by 29.5% on average and by up to 89%.
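The precise CSRL layout is defined in the paper; the sketch below is only a hypothetical illustration of the idea the abstract describes: runs of non-zeros with consecutive column indices are stored as segments, so both the values and the matching slice of x can be read contiguously and vectorized. All field names are our assumptions.

```cuda
// Hypothetical segment-based layout in the spirit of CSRL: each segment
// is a run of non-zeros whose column indices are consecutive, so x can
// be read with one contiguous (vectorizable) load per segment.
struct SegmentedRows {
    const int *seg_ptr;   // per row: index of its first segment (n_rows+1 entries)
    const int *seg_col;   // per segment: first column of the run
    const int *seg_len;   // per segment: run length
    const int *val_ptr;   // per segment: offset of its first value in vals
    const double *vals;   // all non-zeros, segment runs stored back to back
};

void spmv_segmented(int n_rows, const SegmentedRows m,
                    const double *x, double *y) {
    for (int i = 0; i < n_rows; ++i) {
        double sum = 0.0;
        for (int s = m.seg_ptr[i]; s < m.seg_ptr[i + 1]; ++s) {
            const double *v  = m.vals + m.val_ptr[s];
            const double *xs = x + m.seg_col[s];
            // Contiguous inner loop: a SIMD compiler can vectorize this
            // directly, unlike the gather x[col_idx[k]] required by CSR.
            for (int k = 0; k < m.seg_len[s]; ++k)
                sum += v[k] * xs[k];
        }
        y[i] = sum;
    }
}
```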
Abstract: It is an important task to improve the performance of sparse matrix-vector multiplication (SpMV), and a difficult one because of its irregular memory access. General-purpose GPUs (GPGPUs) provide high computing ability and substantial bandwidth that SpMV cannot fully exploit due to its irregularity. In this paper, we propose two novel methods to optimize memory bandwidth for SpMV on GPGPU. First, a new storage format is proposed that exploits the memory bandwidth of the GPU architecture more efficiently by packing as many non-zeros as possible into a layout suited to the GPU's memory system. Second, we propose a cache-blocking method to improve the performance of SpMV on the GPU architecture. The sparse matrix is partitioned into sub-blocks that are stored in CSR format. With the blocking method, the corresponding part of the vector x can be reused in the GPU cache, so the time spent accessing global memory for x is greatly reduced. Experiments are carried out on three GPU platforms: GeForce 9800 GX2, GeForce GTX 480, and Tesla K40. Experimental results show that both methods efficiently improve the utilization of GPU memory bandwidth and the performance of the GPU.
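A minimal sketch of the cache-blocking idea (our illustration, not the paper's code): the matrix is split into column blocks, each stored in CSR, and the slice of x a block touches is staged in on-chip shared memory before the block is processed, so every reuse of x hits fast memory instead of DRAM.

```cuda
// One launch processes one column block, whose local CSR arrays are
// row_ptr/col_idx/vals with columns in [block_col0, block_col0+block_ncols).
// Launch with shared memory size block_ncols * sizeof(double); y must be
// zeroed first, and partial sums from the column blocks are combined with
// atomicAdd (double atomics need compute capability 6.0+). All names and
// the exact tiling are illustrative.
__global__ void spmv_csr_colblocked(int n_rows, int block_col0, int block_ncols,
                                    const int *__restrict__ row_ptr,
                                    const int *__restrict__ col_idx,
                                    const double *__restrict__ vals,
                                    const double *__restrict__ x,
                                    double *y) {
    extern __shared__ double x_tile[];  // the slice of x this block touches
    for (int c = threadIdx.x; c < block_ncols; c += blockDim.x)
        x_tile[c] = x[block_col0 + c];
    __syncthreads();

    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    double sum = 0.0;
    for (int k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
        sum += vals[k] * x_tile[col_idx[k] - block_col0];  // on-chip read of x
    atomicAdd(&y[row], sum);  // several column blocks contribute to each row
}
```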
Abstract: Implementing diagonal sparse matrix-vector multiplication (SpMV) on graphics processing units (GPUs) can fully exploit the GPU's parallel computing power and accelerate matrix-vector multiplication; however, the mainstream algorithms suffer from heavy zero-fill and low computational efficiency. To address these problems, a diagonal SpMV algorithm, DIA-Dynamic (DIAgonal-Dynamic), is proposed. First, a new dynamic partitioning strategy is designed that blocks the matrix according to its characteristics, greatly reducing zero-fill and removing redundant computation while keeping GPU efficiency high. Second, a diagonal sparse matrix storage format, BDIA (Block DIAgonal), is proposed to store the blocked data, and the data layout is adjusted to improve memory access performance on the GPU. Finally, conditional branches are optimized at the GPU's low level to reduce branching, and dynamic shared memory is used to handle irregular accesses to the vector. Compared with the state-of-the-art Tile SpMV algorithm, DIA-Dynamic achieves an average speedup of 1.88; compared with the state-of-the-art BRCSD (Diagonal Compressed Storage based on Row-Blocks)-II algorithm, it reduces zero-fill by 43% on average and achieves an average speedup of 1.70. Experimental results show that DIA-Dynamic can effectively improve the efficiency of diagonal SpMV on GPUs, shorten computation time, and improve program performance.
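For context, the standard DIA kernel that algorithms like DIA-Dynamic start from is sketched below (the plain textbook formulation, not DIA-Dynamic itself; identifiers are ours). Its per-diagonal padding is exactly the zero-fill cost that dynamic blocking targets, and its bounds check is the branch the paper's low-level optimization trims.

```cuda
// Baseline DIA SpMV kernel. 'offsets' holds the n_diags diagonal offsets
// and 'dia_val' stores each padded diagonal contiguously (column-major,
// n_rows entries per diagonal), which is why plain DIA pays heavy
// zero-fill when the diagonals are scattered.
__global__ void spmv_dia(int n_rows, int n_cols, int n_diags,
                         const int *__restrict__ offsets,
                         const double *__restrict__ dia_val,
                         const double *__restrict__ x,
                         double *__restrict__ y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    double sum = 0.0;
    for (int d = 0; d < n_diags; ++d) {
        int col = row + offsets[d];
        if (col >= 0 && col < n_cols)  // the per-element branch DIA-Dynamic reduces
            sum += dia_val[d * n_rows + row] * x[col];
    }
    y[row] = sum;
}
```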