Given that the concurrent L1-minimization (L1-min) problem is often required in real applications, we investigate how to solve it in parallel on GPUs in this paper. First, we propose a novel self-adaptive warp implementation of the matrix-vector multiplication (Ax) and a novel self-adaptive thread implementation of the matrix-vector multiplication (A^T x), respectively, on the GPU. Vector-operation and inner-product decision trees are adopted to choose the optimal vector-operation and inner-product kernels for vectors of any size. Second, based on the proposed kernels, the iterative shrinkage-thresholding algorithm (ISTA) is used to present two concurrent L1-min solvers, built around CUDA streams and around thread blocks on a GPU, respectively, and their performance is optimized using newer GPU features such as the shuffle instruction and the read-only data cache. Finally, we design a concurrent L1-min solver on multiple GPUs. The experimental results validate the effectiveness and performance of the proposed methods.
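As a concrete illustration of the ISTA update these solvers are built around, the sketch below shows the elementwise soft-thresholding (shrinkage) step as a CUDA kernel; the kernel name, launch configuration, and the assumption that the gradient-step vector v = x_k - t*A^T(A x_k - b) has already been produced by the Ax and A^T x kernels are ours, not the paper's.

```cpp
// Elementwise soft-thresholding: x[i] = sign(v[i]) * max(|v[i]| - thresh, 0),
// with thresh = lambda * t. v is the gradient-step vector assumed to have been
// computed by the Ax / A^T x kernels described above.
__global__ void soft_threshold(const float* __restrict__ v,
                               float* __restrict__ x,
                               float thresh, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float z   = v[i];
        float mag = fmaxf(fabsf(z) - thresh, 0.0f);
        x[i] = copysignf(mag, z);
    }
}

// Illustrative launch, one L1-min instance per CUDA stream:
// soft_threshold<<<(n + 255) / 256, 256, 0, stream>>>(d_v, d_x, lambda * t, n);
```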
We have successfully ported an arbitrary high-order discontinuous Galerkin method for solving the three-dimensional isotropic elastic wave equation on unstructured tetrahedral meshes to multiple Graphics Processing Units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI), and obtained a speedup factor of about 28.3 for the single-precision version of our code and about 14.9 for the double-precision version. The GPU used in the comparisons is an NVIDIA Tesla C2070 (Fermi), and the CPU is an Intel Xeon W5660. To effectively overlap inter-process communication with computation, we separate the elements on each subdomain into inner and outer elements, and complete the computation on outer elements and fill the MPI buffer first. While the MPI messages travel across the network, the GPU performs the computation on inner elements and all other calculations that do not use information of outer elements from neighboring subdomains. A significant portion of the speedup also comes from a customized matrix-matrix multiplication kernel, which is used extensively throughout our program. Preliminary performance analysis of our parallel GPU code shows favorable strong and weak scalability.
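The overlap scheme can be sketched with two CUDA streams and non-blocking MPI, as below; the kernel names, buffer layout, and pairwise exchange are illustrative assumptions rather than the paper's actual code, and h_send/h_recv are assumed to be pinned host buffers.

```cpp
// Hedged sketch of overlapping halo exchange with inner-element computation.
#include <mpi.h>
#include <cuda_runtime.h>

// Stand-in kernels for the outer- and inner-element updates (illustrative only).
__global__ void update_outer(float* u, int n) { int i = blockIdx.x * blockDim.x + threadIdx.x; if (i < n) u[i] += 1.0f; }
__global__ void update_inner(float* u, int n) { int i = blockIdx.x * blockDim.x + threadIdx.x; if (i < n) u[i] += 1.0f; }

void step_with_overlap(float* d_outer, float* d_inner, float* h_send, float* h_recv,
                       int n_outer, int n_inner, int neighbor)
{
    cudaStream_t s_outer, s_inner;
    cudaStreamCreate(&s_outer);
    cudaStreamCreate(&s_inner);

    // 1) Outer elements first; their results fill the MPI send buffer.
    update_outer<<<(n_outer + 255) / 256, 256, 0, s_outer>>>(d_outer, n_outer);
    cudaMemcpyAsync(h_send, d_outer, n_outer * sizeof(float),
                    cudaMemcpyDeviceToHost, s_outer);

    // 2) Inner elements proceed concurrently on a second stream.
    update_inner<<<(n_inner + 255) / 256, 256, 0, s_inner>>>(d_inner, n_inner);

    // 3) While the GPU works on inner elements, the halo travels over MPI.
    cudaStreamSynchronize(s_outer);
    MPI_Request reqs[2];
    MPI_Isend(h_send, n_outer, MPI_FLOAT, neighbor, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(h_recv, n_outer, MPI_FLOAT, neighbor, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    cudaStreamSynchronize(s_inner);
    cudaStreamDestroy(s_outer);
    cudaStreamDestroy(s_inner);
}
```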
The Scale Invariant Feature Transform (SIFT) algorithm is a widely used computer vision algorithm that detects and extracts local feature descriptors from images. SIFT is computationally intensive, making it infeasible for a single-threaded implementation to extract local feature descriptors from high-resolution images in real time. In this paper, an approach to parallelizing the SIFT algorithm on NVIDIA's Graphics Processing Units (GPUs) is demonstrated. The parallelization design for SIFT on GPUs is divided into two stages: a) algorithm design, i.e., generic design strategies that focus on the data, and b) implementation design, i.e., architecture-specific strategies that focus on using GPU resources optimally for maximum occupancy. Increasing memory latency hiding, eliminating branches, and data blocking achieve a significant decrease in average computational time. Furthermore, it is observed via the Paraver tools that our approach to parallelization, while optimizing for maximum occupancy, allows the GPU to execute the memory-bound SIFT algorithm at optimal levels.
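To make the data-blocking idea concrete, the sketch below stages one image row of a Gaussian-blur pass (the kind used when building the SIFT scale space) through shared memory so that neighboring threads reuse loaded pixels; the tile size, filter radius, and kernel structure are illustrative choices, not the paper's implementation.

```cpp
// Illustrative horizontal Gaussian pass with shared-memory data blocking.
// Launch with blockDim.x == TILE and gridDim.y == image height.
#define RADIUS 4
#define TILE   256   // one thread block processes TILE output pixels of a row

__constant__ float c_weights[2 * RADIUS + 1];   // Gaussian weights uploaded by the host

__global__ void blur_rows(const float* __restrict__ in, float* __restrict__ out,
                          int width, int height)
{
    __shared__ float tile[TILE + 2 * RADIUS];

    int row = blockIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    if (row >= height) return;

    // Cooperative load of the tile plus its left/right halos, clamped at the borders,
    // so each global-memory pixel is read once per block.
    for (int i = threadIdx.x; i < TILE + 2 * RADIUS; i += blockDim.x) {
        int src = blockIdx.x * TILE + i - RADIUS;
        src = min(max(src, 0), width - 1);
        tile[i] = in[row * width + src];
    }
    __syncthreads();

    if (col < width) {
        float acc = 0.0f;
        for (int k = -RADIUS; k <= RADIUS; ++k)
            acc += c_weights[k + RADIUS] * tile[threadIdx.x + RADIUS + k];
        out[row * width + col] = acc;
    }
}
```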
As one of the most essential and important operations in linear algebra, sparse matrix-vector multiplication (SpMV) and the prediction of its performance on GPUs have attracted increasing attention in recent years. In 2012, Guo and Wang put forward a new idea for predicting the performance of SpMV on GPUs. However, their model does not fully take the matrix structure into account, so the execution time it predicts tends to be inaccurate for general sparse matrices. To address this problem, we propose two new models in a similar spirit that take the structure of the matrices into account and make the performance prediction more accurate. In addition, we predict the execution time of SpMV for the CSR-V, CSR-S, ELL, and JAD sparse matrix storage formats with the new models on the CUDA platform. Our experimental results show that the prediction accuracy of our models is on average 1.69 times better than that of Guo and Wang's model for most general matrices.
The parallel computation capabilities of modern graphics processing units (GPUs) have attracted increasing attention from researchers and engineers conducting high-computational-throughput studies. However, current single-GPU engineering solutions often struggle to fulfill their real-time requirements, so the multi-GPU approach has become a popular and cost-effective choice for tackling these demands. In those cases, balancing the computational load over multiple GPU "nodes" is often the key bottleneck that affects the quality and performance of the real-time system. Existing load balancing approaches are mainly based on the assumption that all GPU nodes in the same computing framework have equal computational performance, which is often not the case due to cluster design and other legacy issues. This paper presents a novel dynamic load balancing (DLB) model for rapid data division and allocation on heterogeneous GPU nodes based on an innovative fuzzy neural network (FNN). In this research, a 5-state-parameter feedback mechanism describing the overall cluster and node performance is proposed. The corresponding FNN-based DLB model is capable of monitoring and predicting individual node performance under different workload scenarios. A real-time adaptive scheduler has been devised to reorganize the data inputs to each node when necessary to maintain their runtime computational performance. The devised model has been implemented on two-dimensional (2D) discrete wavelet transform (DWT) applications for evaluation. Experimental results show that this DLB model enables high computational throughput while ensuring the real-time and precision requirements of complex computational tasks.
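Stripped of the FNN predictor, the scheduling idea reduces to dividing work in proportion to each node's current effective throughput; the sketch below shows only that proportional split for the rows of a 2D DWT pass and is an illustrative assumption, not the paper's model.

```cpp
// Hedged sketch: split image rows among heterogeneous GPU nodes in proportion
// to each node's most recently measured throughput (rows per second).
#include <numeric>
#include <vector>

struct RowShare { int first_row; int num_rows; };

std::vector<RowShare> split_rows(int total_rows, const std::vector<double>& throughput)
{
    double total = std::accumulate(throughput.begin(), throughput.end(), 0.0);
    std::vector<RowShare> shares(throughput.size());
    int assigned = 0;
    for (size_t i = 0; i < throughput.size(); ++i) {
        int rows = (i + 1 == throughput.size())
                       ? total_rows - assigned                        // last node takes the remainder
                       : static_cast<int>(total_rows * throughput[i] / total);
        shares[i] = { assigned, rows };
        assigned += rows;
    }
    return shares;
}
```

In the paper's design, the throughput estimates would come from the FNN-based predictor fed by the 5-state feedback parameters rather than from raw timing alone.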
Particle accelerators play an important role in a wide range of scientific discoveries and industrial applications. Self-consistent multi-particle simulation based on the particle-in-cell (PIC) method has been used to study charged-particle beam dynamics inside those accelerators. However, PIC simulation is time-consuming and needs modern parallel computers for high-resolution applications. In this paper, we implemented a parallel beam dynamics PIC code on multi-node hybrid-architecture computers with multiple Graphics Processing Units (GPUs). We used two methods to parallelize the PIC code on multiple GPUs and observed that the replication method is a better choice for moderate problem sizes and current computer hardware, while the domain decomposition method might be a better choice for large problem sizes and more advanced hardware that allows direct communication among multiple GPUs. Using the multi-node hybrid architectures at the Oak Ridge Leadership Computing Facility (OLCF), the optimized GPU PIC code achieves reasonable parallel performance and scales up to 64 GPUs with 16 million particles.
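Charge deposition is the part of a PIC cycle where GPU threads contend for the same grid cells, which is also where the replication and domain-decomposition strategies differ; the kernel below is a minimal 1D cloud-in-cell deposition sketch under our own assumptions about data layout, not the paper's code.

```cpp
// Illustrative 1D cloud-in-cell charge deposition: each thread deposits one
// particle's charge onto its two neighboring grid points; atomicAdd resolves
// collisions when several particles fall into the same cell.
__global__ void deposit_charge(const float* __restrict__ pos,   // particle positions in grid units
                               float* __restrict__ rho,         // charge density on an nx-point grid
                               float charge, int num_particles, int nx)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= num_particles) return;

    float x  = pos[p];
    int   i  = (int)floorf(x);
    i = min(max(i, 0), nx - 2);   // keep both neighbors in range
    float w1 = x - (float)i;      // weight for the right grid point
    float w0 = 1.0f - w1;         // weight for the left grid point

    atomicAdd(&rho[i],     charge * w0);
    atomicAdd(&rho[i + 1], charge * w1);
}
```

Under the replication method, each GPU would deposit its own particles into a private copy of rho and the copies would be summed afterward, whereas domain decomposition would restrict each GPU's particles and grid to one subdomain.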
Sparse matrix-vector multiplication (SpMV) is inevitable in almost all kinds of scientific computation, such as iterative methods for solving linear systems and eigenvalue problems. With the emergence and development of Graphics Processing Units (GPUs), highly efficient storage formats for SpMV are needed, since the performance of SpMV is mainly determined by the storage format of the sparse matrix. Based on the idea of the JAD format, this paper improves the ELLPACK-R format and reduces the waiting time between different threads in a warp, achieving a speedup of about 1.5 in our experimental results. Compared with other formats such as CSR, ELL, and BiELL, our format gives the best SpMV performance on over 70 percent of the test matrices. We also propose a parameter-based method to analyze the performance impact of different formats, and construct a formula to count the computation and the number of iterations.
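For reference, a minimal ELLPACK-R SpMV kernel, the baseline these formats refine, looks roughly like the sketch below; the per-row length array is what lets a thread stop at its row's true end instead of iterating over padding, which is the source of the intra-warp waiting the improved format targets. Array names and layout are illustrative.

```cpp
// Minimal ELLPACK-R SpMV sketch: val/col are padded to the maximum row length and
// stored column-major so consecutive threads make coalesced loads; rl holds each
// row's true number of nonzeros.
__global__ void spmv_ellpack_r(const float* __restrict__ val,
                               const int*   __restrict__ col,
                               const int*   __restrict__ rl,
                               const float* __restrict__ x,
                               float* __restrict__ y,
                               int num_rows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= num_rows) return;

    float sum = 0.0f;
    int   len = rl[row];
    for (int k = 0; k < len; ++k) {
        int idx = k * num_rows + row;         // column-major: coalesced across the warp
        sum += val[idx] * x[col[idx]];
    }
    y[row] = sum;
}
```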
In our previous work, a novel algorithm to perform robust pose estimation was presented. The pose was estimated using correspondences between points on the object and regions in the image. The laboratory experiments conducted in that work showed that the accuracy of the estimated pose was over 99% for position and 84% for orientation estimation, respectively. However, for larger objects the algorithm requires a higher number of points to achieve the same accuracy, which makes it computationally intensive and thus infeasible for real-time computer vision applications. In this paper, the algorithm is parallelized to run on NVIDIA GPUs. The results indicate that even for objects having more than 2000 points, the algorithm can estimate the pose in real time for each frame of high-resolution video.
Sparse matrix-vector multiplication (SpMV) is one of the key kernels extensively employed in both industrial and scientific applications, and its computation and random memory accesses incur a lot of overhead. To capitalize on higher compute rates and better data movement efficiency, there have been efforts to use mixed-precision SpMV. However, most existing techniques focus on single-grained precision selection for all matrices. In this work, we concentrate on hierarchical precision selection strategies tailored for irregular matrices, driven by the need to achieve good load balancing among thread groups executing on GPUs. Based on the concept of strong connection, we first introduce a novel adaptive row-grained precision selection strategy that surpasses the existing strategy within multi-precision Jacobi methods. Second, our experiments have uncovered a range within which converting double-precision floating-point numbers to single precision incurs a loss smaller than the machine precision FLT_EPSILON; this range is used for element-grained precision selection. Subsequently, we propose a hierarchical precision selection storage method based on the compressed sparse row (CSR) format and enhance the CSR-Vector kernel, achieving higher relative speedups and better load balancing than existing methods on a benchmark suite of 41 matrices. Finally, we integrate the mixed-precision SpMV into the generalized minimal residual (GMRES) algorithm, achieving faster execution while maintaining convergence accuracy similar to double-precision GMRES.
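The element-grained rule can be illustrated by a simple round-trip test; the exact threshold and range used in the paper may differ, so the check below is a hedged sketch, not the paper's criterion.

```cpp
// Hedged sketch of element-grained precision selection: keep a matrix value in
// single precision only if the double -> float -> double round trip changes it
// by a relative amount smaller than FLT_EPSILON.
#include <cfloat>
#include <cmath>

inline bool representable_as_float(double v)
{
    if (v == 0.0) return true;                                  // zero converts exactly
    double round_trip = static_cast<double>(static_cast<float>(v));
    return std::fabs(round_trip - v) / std::fabs(v) < static_cast<double>(FLT_EPSILON);
}
```

Values passing the test would go into a single-precision array and the rest into a double-precision array, with the enhanced CSR-Vector kernel accumulating both; this split is our reading of the hierarchical CSR storage described above, not a statement of its exact layout.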
Hash functions are essential in cryptographic primitives such as digital signatures, key exchanges, and blockchain technology. SM3, built upon the Merkle-Damgård structure, is a crucial element in Chinese commercial cryptographic schemes. Optimizing hash function performance is crucial given the growth of Internet of Things (IoT) devices and the rapid evolution of blockchain technology. In this paper, we introduce a high-performance implementation framework for accelerating the SM3 cryptographic hash function, HI-SM3 for short, using heterogeneous GPU (graphics processing unit) parallel computing devices. HI-SM3 enhances the implementation of hash functions across four dimensions: parallelism, register utilization, memory access, and instruction efficiency, resulting in significant performance gains across various GPU platforms. Leveraging the NVIDIA RTX 4090 GPU, HI-SM3 achieves a remarkable peak throughput of 454.74 GB/s, surpassing OpenSSL on a high-end 16-core server CPU (E5-2699 v3) by over 150 times. On the Hygon DCU accelerator, a Chinese domestic graphics card, it achieves 113.77 GB/s. Furthermore, compared with the fastest known GPU-based SM3 implementation, HI-SM3 on the same GPU platform exhibits a 3.12x performance improvement. Even on embedded GPUs consuming less than 40 W, HI-SM3 attains a throughput of 5.90 GB/s, twice that of a server-level CPU. In summary, HI-SM3 provides a significant performance advantage, positioning it as a compelling solution for accelerating hash operations.
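At its simplest, the parallelism dimension amounts to hashing many independent messages at once, one per thread; the skeleton below shows only that batching structure, with a trivial placeholder standing in for the real SM3 padding and compression, and none of HI-SM3's register, memory, or instruction-level optimizations.

```cpp
// Batch-hashing skeleton: one thread hashes one independent 64-byte message.
// sm3_hash_64B is a placeholder so the sketch compiles; it is NOT the SM3 algorithm.
__device__ void sm3_hash_64B(const unsigned char* msg, unsigned char* digest)
{
    for (int b = 0; b < 32; ++b)
        digest[b] = msg[b] ^ msg[b + 32];   // stand-in for padding + compression
}

__global__ void sm3_batch(const unsigned char* __restrict__ msgs,   // num_msgs * 64 bytes
                          unsigned char* __restrict__ digests,      // num_msgs * 32 bytes
                          int num_msgs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_msgs)
        sm3_hash_64B(msgs + (size_t)i * 64, digests + (size_t)i * 32);
}
```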
Graph neural networks (GNNs) can be adapted to GPUs with high computing capability due to their massive arithmetic operations. Compared with mini-batch training, full-graph training does not require sampling of the input graph and halo region, avoiding potential accuracy losses. Current deep learning frameworks evenly partition large graphs to scale GNN training to distributed multi-GPU platforms. On the other hand, the rapid evolution of hardware requires technology companies and research institutions to frequently update their equipment to cope with the latest tasks. This results in large-scale clusters containing a mixture of GPUs with various computational capabilities and hardware specifications. However, existing works fail to consider sub-graphs adapted to different GPU generations, leading to inefficient resource utilization and degraded training efficiency. Therefore, we propose νGNN, a non-uniformly partitioned full-graph GNN training framework for heterogeneous distributed platforms. νGNN first models the GNN processing ability of the hardware based on various theoretical parameters. Then, νGNN automatically obtains a reasonable task partitioning scheme by combining hardware, model, and graph dataset information. Finally, νGNN implements an irregular graph partitioning mechanism that allows GNN training tasks to execute efficiently on distributed heterogeneous systems. The experimental results show that in real-world scenarios with a mixture of GPU generations, νGNN can outperform other static partitioning schemes based on hardware specifications.
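The non-uniform partitioning idea can be sketched as sizing each GPU's vertex range in proportion to a capability score built from its theoretical parameters; the scoring formula below (a geometric mean of peak compute and memory bandwidth) is our illustrative assumption, not νGNN's actual hardware model.

```cpp
// Hedged sketch of capability-proportional vertex partitioning for heterogeneous GPUs.
#include <cmath>
#include <vector>

struct GpuSpec     { double tflops; double mem_bw_gbs; };   // theoretical parameters
struct VertexRange { long begin; long end; };                // half-open [begin, end)

std::vector<VertexRange> partition_vertices(long num_vertices, const std::vector<GpuSpec>& gpus)
{
    std::vector<double> score(gpus.size());
    double total = 0.0;
    for (size_t i = 0; i < gpus.size(); ++i) {
        score[i] = std::sqrt(gpus[i].tflops * gpus[i].mem_bw_gbs);   // illustrative capability score
        total += score[i];
    }

    std::vector<VertexRange> parts(gpus.size());
    long begin = 0;
    for (size_t i = 0; i < gpus.size(); ++i) {
        long count = (i + 1 == gpus.size())
                         ? num_vertices - begin                              // remainder to the last GPU
                         : static_cast<long>(num_vertices * score[i] / total);
        parts[i] = { begin, begin + count };
        begin += count;
    }
    return parts;
}
```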
Clustering is one of the key techniques for analyzing large-scale, high-dimensional vector data. In recent years, the density-based clustering algorithm DBSCAN (density-based spatial clustering of applications with noise) has been widely used in data analysis because it does not require the number of clusters to be specified in advance, can discover clusters of complex structure, and effectively identifies noise points. However, existing density-based clustering algorithms incur extremely high time costs and suffer from the curse of dimensionality when processing high-dimensional vector data, which makes them hard to deploy in practical scenarios. Moreover, with the development of information technology, the volume of high-dimensional vector data has grown rapidly, so CPU-based clustering of such data faces even greater challenges in terms of time cost and scalability. To address these problems, we propose a GPU-accelerated clustering algorithm for high-dimensional vectors that accelerates the DBSCAN computation by introducing a K-nearest-neighbor (KNN) graph index. First, we design a GPU-accelerated parallel KNN graph construction algorithm that significantly reduces the cost of building the KNN graph index. Second, we propose a K-means tree partitioning algorithm based on inter-layer parallelism and a parallel clustering algorithm based on breadth-first search and the core-neighbor graph, which improve the computational workflow of DBSCAN and enable highly concurrent vector clustering. Finally, we conduct extensive experiments on real vector datasets and compare the proposed method with existing methods. The results show that, while preserving clustering accuracy, the proposed method improves the efficiency of large-scale vector clustering by 5.7 to 2822.5 times.
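One way a KNN graph index speeds up DBSCAN is by replacing the epsilon-range query when deciding whether a point is a core point; the kernel below sketches that step under our own assumptions about array layout and the counting rule, and is not necessarily the exact procedure used in the paper.

```cpp
// Hedged sketch: mark core points using a precomputed KNN graph. knn_dist stores,
// for each of the n points, the distances to its K nearest neighbors (row-major).
// A point is marked as a core point if at least minPts points (itself included)
// lie within radius eps.
__global__ void mark_core_points(const float* __restrict__ knn_dist,
                                 unsigned char* __restrict__ is_core,
                                 int n, int K, float eps, int minPts)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= n) return;

    int within = 1;                                    // the point itself
    for (int j = 0; j < K; ++j)
        if (knn_dist[(size_t)p * K + j] <= eps) ++within;

    is_core[p] = (within >= minPts) ? 1 : 0;
}
```

Cluster formation would then proceed by breadth-first search over edges between nearby core points, in the spirit of the BFS-based parallel clustering step described above.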
Funding (concurrent L1-minimization paper): This research was supported by the Natural Science Foundation of China under grant number 61872422 and by the Natural Science Foundation of Zhejiang Province, China under grant number LY19F020028.
Funding (discontinuous Galerkin elastic wave paper): Supported by the School of Energy Resources at the University of Wyoming. The GPU hardware used in this study was purchased using NSF Grant EAR-0930040.
Funding (dynamic load balancing paper): Supported by the National Natural Science Foundation of China (No. 61203172), the SSTP of Sichuan (Nos. 2018YYJC0994 and 2017JY0011), and Shenzhen STPP (No. GJHZ20160301164521358).
Funding (mixed-precision SpMV paper): Supported by the National Natural Science Foundation of China (No. 22333003).
Funding (HI-SM3 paper): Supported by the National Natural Science Foundation of China under Grant Nos. U23B2002, 62302238, and 62372245; the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20220388; the Natural Science Research Project of Colleges and Universities in Jiangsu Province of China under Grant No. 22KJB520004; the China Postdoctoral Science Foundation under Grant No. 2022M711689; and the CCF-Tencent Rhino-Bird Open Research Fund under Grant No. CCF-Tencent RAGR20240129.
Funding (νGNN paper): Supported by the National Natural Science Foundation of China (Grant No. 62402525) and the Fundamental Research Funds for the Central Universities (Grant No. 2462023YJRC023).