Journal Articles
3,879 articles found
1. Research on a Cardiac Computer-Aided Diagnosis System Based on GPU Visualization Technology
Authors: 陈宇珂, 吴效明, 杨荣骞, 欧陕兴, 郑理华. 《医疗卫生装备》, CAS, 2011, Issue 10, pp. 16–18 (3 pages)
Objective: To achieve accurate segmentation and 3D visualization of cardiac tomographic images on GPUs and to complete the design of a cardiac computer-aided diagnosis system. Methods: Combining the diagnostic experience of clinical experts, prior features of cardiac CT images, and image segmentation algorithm models, GPU parallel data processing was used to segment cardiac structures and visualize them in 3D. Results: Accurate, fast, and robust segmentation and 3D visualization of cardiac CT image sequences were achieved, and a preliminary GPU-based visualization cardiac computer-aided diagnosis system was implemented. Conclusion: The study makes full use of the powerful parallel computing capability of the graphics processing unit (GPU) to solve problems in medical image processing and segmentation, improving program efficiency and the user experience.
Keywords: expert system, heart, dual-source CT, CUDA, GPUs
2. Efficient Concurrent L1-Minimization Solvers on GPUs (Cited: 1)
Authors: Xinyue Chu, Jiaquan Gao, Bo Sheng. Computer Systems Science & Engineering, SCIE EI, 2021, Issue 9, pp. 305–320 (16 pages)
Given that the concurrent L1-minimization (L1-min) problem is often required in real applications, we investigate how to solve it in parallel on GPUs in this paper. First, we propose a novel self-adaptive warp implementation of the matrix-vector multiplication Ax and a novel self-adaptive thread implementation of the matrix-vector multiplication A^T x on the GPU. Vector-operation and inner-product decision trees are adopted to choose the optimal vector-operation and inner-product kernels for vectors of any size. Second, based on these kernels, the iterative shrinkage-thresholding algorithm is used to build two concurrent L1-min solvers, organized around streams and thread blocks on a single GPU, and their performance is optimized using newer GPU features such as the shuffle instruction and the read-only data cache. Finally, we design a concurrent L1-min solver on multiple GPUs. The experimental results validate the high effectiveness and good performance of our proposed methods.
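The self-adaptive warp kernels the abstract describes are not reproduced here; as a minimal sketch of the shuffle-based reduction it mentions, the CUDA kernel below assigns one 32-thread warp per row of a CSR matrix and combines the partial products with __shfl_down_sync (the kernel name and CSR layout are illustrative assumptions, not the paper's code):

```cuda
// Warp-per-row CSR SpMV sketch: each warp computes y[row] = A[row,:] * x.
__global__ void spmv_csr_warp(int nRows, const int* rowPtr, const int* colIdx,
                              const float* vals, const float* x, float* y) {
    int warpId = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
    int lane   = threadIdx.x & 31;           // lane index within the warp
    if (warpId >= nRows) return;

    float sum = 0.0f;
    for (int j = rowPtr[warpId] + lane; j < rowPtr[warpId + 1]; j += 32)
        sum += vals[j] * x[colIdx[j]];       // strided partial dot product

    for (int off = 16; off > 0; off >>= 1)   // tree reduction via the shuffle
        sum += __shfl_down_sync(0xffffffff, sum, off);   // instruction

    if (lane == 0) y[warpId] = sum;          // lane 0 holds the full row sum
}
```

A self-adaptive variant would additionally switch between this warp-per-row scheme and a thread-per-row scheme depending on the average number of nonzeros per row.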
Keywords: concurrent L1-minimization problem, dense matrix-vector multiplication, fast iterative shrinkage-thresholding algorithm, CUDA, GPUs
3. Accelerating the discontinuous Galerkin method for seismic wave propagation simulations using multiple GPUs with CUDA and MPI (Cited: 3)
Authors: Dawei Mu, Po Chen, Liqiang Wang. Earthquake Science, 2013, Issue 6, pp. 377–393 (17 pages)
We have successfully ported an arbitrary high-order discontinuous Galerkin method for solving the three-dimensional isotropic elastic wave equation on unstructured tetrahedral meshes to multiple Graphics Processing Units (GPUs) using NVIDIA's Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI), obtaining a speedup factor of about 28.3 for the single-precision version of our codes and about 14.9 for the double-precision version. The GPU used in the comparisons is an NVIDIA Tesla C2070 (Fermi), and the CPU is an Intel Xeon W5660. To effectively overlap inter-process communication with computation, we separate the elements on each subdomain into inner and outer elements, and complete the computation on outer elements and fill the MPI buffer first. While the MPI messages travel across the network, the GPU computes on the inner elements and performs all other calculations that do not require information about outer elements from neighboring subdomains. A significant portion of the speedup also comes from a customized matrix-matrix multiplication kernel, which is used extensively throughout our program. Preliminary performance analysis of our parallel GPU codes shows favorable strong and weak scalability.
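The inner/outer overlap described above follows a standard CUDA-plus-MPI pattern. A hedged host-side sketch, assuming a single neighboring subdomain and placeholder kernels (not the authors' code):

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Placeholder kernels: update_outer computes the elements whose results a
// neighbor needs and packs them into the send buffer; update_inner computes
// everything that needs no remote data.
__global__ void update_outer(float* field, float* sendBuf, int nOuter) { /* ... */ }
__global__ void update_inner(float* field, int nInner) { /* ... */ }

void timestep(float* d_field, float* d_sendBuf, float* h_sendBuf,
              float* h_recvBuf, int nOuter, int nInner, int nHalo,
              int nbr, cudaStream_t s) {
    // 1. Outer elements first, then stage their results on the host.
    update_outer<<<(nOuter + 255) / 256, 256, 0, s>>>(d_field, d_sendBuf, nOuter);
    cudaMemcpyAsync(h_sendBuf, d_sendBuf, nHalo * sizeof(float),
                    cudaMemcpyDeviceToHost, s);
    cudaStreamSynchronize(s);                      // send buffer is now ready

    // 2. Start the non-blocking halo exchange.
    MPI_Request reqs[2];
    MPI_Irecv(h_recvBuf, nHalo, MPI_FLOAT, nbr, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(h_sendBuf, nHalo, MPI_FLOAT, nbr, 0, MPI_COMM_WORLD, &reqs[1]);

    // 3. Inner elements run while the messages travel across the network.
    update_inner<<<(nInner + 255) / 256, 256, 0, s>>>(d_field, nInner);

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);     // halo data has arrived
}
```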
Keywords: seismic wave propagation, discontinuous Galerkin method, GPU
4. An Approach to Parallelization of SIFT Algorithm on GPUs for Real-Time Applications (Cited: 4)
Authors: Raghu Raj Prasanna Kumar, Suresh Muknahallipatna, John McInroy. Journal of Computer and Communications, 2016, Issue 17, pp. 18–50 (33 pages)
The Scale Invariant Feature Transform (SIFT) algorithm is a widely used computer vision algorithm that detects and extracts local feature descriptors from images. SIFT is computationally intensive, making a single-threaded implementation infeasible for extracting local feature descriptors from high-resolution images in real time. In this paper, an approach to parallelizing the SIFT algorithm is demonstrated using NVIDIA's Graphics Processing Unit (GPU). The parallelization design for SIFT on GPUs is divided into two stages: a) algorithm design, generic design strategies that focus on the data; and b) implementation design, architecture-specific design strategies that focus on using GPU resources optimally for maximum occupancy. Increasing memory latency hiding, eliminating branches, and data blocking achieve a significant decrease in average computational time. Furthermore, it is observed via the Paraver tools that our approach to parallelization, while optimizing for maximum occupancy, allows the GPU to execute the memory-bound SIFT algorithm at optimal levels.
Keywords: Scale Invariant Feature Transform (SIFT), parallel computing, GPU, GPU occupancy, portable parallel programming, CUDA
5. Performance Prediction Based on Statistics of Sparse Matrix-Vector Multiplication on GPUs (Cited: 1)
Authors: Ruixing Wang, Tongxiang Gu, Ming Li. Journal of Computer and Communications, 2017, Issue 6, pp. 65–83 (19 pages)
Sparse matrix-vector multiplication (SpMV) is one of the most essential operations in linear algebra, and predicting its performance on GPUs has attracted increasing attention in recent years. In 2012, Guo and Wang put forward a new idea for predicting the performance of SpMV on GPUs. However, their model does not fully consider the matrix structure, so its predicted execution times tend to be inaccurate for general sparse matrices. To address this problem, we propose two new, similar models that take the structure of the matrices into account and make the performance prediction more accurate. In addition, we use the new models to predict the execution time of SpMV for the CSR-V, CSR-S, ELL, and JAD sparse matrix storage formats on the CUDA platform. Our experimental results show that the prediction accuracy of our models is, on average, 1.69 times better than that of Guo and Wang's model for most general matrices.
Keywords: sparse matrix-vector multiplication, performance prediction, GPU, normal distribution, uniform distribution
6. A Fuzzy Neural Network Based Dynamic Data Allocation Model on Heterogeneous Multi-GPUs for Large-scale Computations
Authors: Chao-Long Zhang, Yuan-Ping Xu, Zhi-Jie Xu, Jia He, Jing Wang, Jian-Hua Adu. International Journal of Automation and Computing, EI CSCD, 2018, Issue 2, pp. 181–193 (13 pages)
The parallel computation capabilities of modern graphics processing units (GPUs) have attracted increasing attention from researchers and engineers conducting high-computational-throughput studies. However, current single-GPU engineering solutions often struggle to fulfill real-time requirements, so the multi-GPU approach has become a popular and cost-effective choice for meeting these demands. In such cases, balancing the computational load over multiple GPU "nodes" is often the key bottleneck affecting the quality and performance of the real-time system. Existing load-balancing approaches mainly assume that all GPU nodes in the same computing framework have equal computational performance, which is often not the case due to cluster design and other legacy issues. This paper presents a novel dynamic load balancing (DLB) model for rapid data division and allocation on heterogeneous GPU nodes based on an innovative fuzzy neural network (FNN). A 5-state parameter feedback mechanism defining overall cluster and node performance is proposed, and the corresponding FNN-based DLB model is capable of monitoring and predicting individual node performance under different workload scenarios. A real-time adaptive scheduler has been devised to reorganize the data inputs to each node when necessary to maintain its runtime computational performance. The devised model has been implemented on two-dimensional (2D) discrete wavelet transform (DWT) applications for evaluation. Experimental results show that this DLB model enables high computational throughput while meeting the real-time and precision requirements of complex computational tasks.
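The FNN predictor itself is beyond an abstract-level sketch, but the feedback principle it refines can be shown simply: re-divide the next batch in proportion to each node's throughput measured on the last one. A hypothetical host-side routine (not the paper's model):

```cuda
#include <vector>
#include <numeric>

// Throughput-proportional split: node i's next share of totalItems is
// proportional to its measured items/second on the previous batch.
std::vector<size_t> nextSplit(const std::vector<double>& throughput,
                              size_t totalItems) {
    double total = std::accumulate(throughput.begin(), throughput.end(), 0.0);
    std::vector<size_t> share(throughput.size(), 0);
    size_t assigned = 0;
    for (size_t i = 0; i + 1 < throughput.size(); ++i) {
        share[i] = static_cast<size_t>(totalItems * throughput[i] / total);
        assigned += share[i];
    }
    share.back() = totalItems - assigned;  // remainder goes to the last node
    return share;
}
```

The paper's contribution is to replace this purely reactive rule with an FNN that takes the 5-state feedback parameters and predicts each node's performance, so the scheduler can anticipate load changes rather than trail them.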
Keywords: heterogeneous GPU cluster, dynamic load balancing, fuzzy neural network, adaptive scheduler, discrete wavelet transform
7. Implementation of a Particle Accelerator Beam Dynamics Code on Multi-Node GPUs
Authors: Zhicong Liu, Ji Qiang. Journal of Software Engineering and Applications, 2019, Issue 9, pp. 321–338 (18 pages)
Particle accelerators play an important role in a wide range of scientific discoveries and industrial applications. Self-consistent multi-particle simulation based on the particle-in-cell (PIC) method has been used to study charged-particle beam dynamics inside these accelerators. However, PIC simulation is time-consuming and requires modern parallel computers for high-resolution applications. In this paper, we implement a parallel beam dynamics PIC code on multi-node hybrid-architecture computers with multiple Graphics Processing Units (GPUs). We use two methods to parallelize the PIC code on multiple GPUs and observe that the replication method is the better choice for moderate problem sizes and current computer hardware, while the domain decomposition method may be the better choice for large problem sizes and more advanced hardware that allows direct communication among multiple GPUs. Using the multi-node hybrid architectures at the Oak Ridge Leadership Computing Facility (OLCF), the optimized GPU PIC code achieves reasonable parallel performance and scales up to 64 GPUs with 16 million particles.
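As a rough illustration of the replication method the authors favor for moderate problem sizes: each GPU holds a full copy of the field grid but only a slice of the particles, so one grid reduction per step is the only inter-GPU communication. A hedged CUDA-plus-MPI sketch (kernel names and the float4 particle layout are assumptions):

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Placeholder kernels: deposit scatters local particles' charge onto the
// grid; push advances local particles using the (now global) grid.
__global__ void deposit(const float4* parts, int nLocal, float* rho) { /* ... */ }
__global__ void push(float4* parts, int nLocal, const float* rho) { /* ... */ }

void pic_step(float4* d_parts, int nLocal, float* d_rho, float* h_rho, int nGrid) {
    cudaMemset(d_rho, 0, nGrid * sizeof(float));
    deposit<<<(nLocal + 255) / 256, 256>>>(d_parts, nLocal, d_rho);

    // Replication method: sum every rank's partial grid into a global one.
    cudaMemcpy(h_rho, d_rho, nGrid * sizeof(float), cudaMemcpyDeviceToHost);
    MPI_Allreduce(MPI_IN_PLACE, h_rho, nGrid, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    cudaMemcpy(d_rho, h_rho, nGrid * sizeof(float), cudaMemcpyHostToDevice);

    // Field solve omitted; each rank then pushes only its own particles.
    push<<<(nLocal + 255) / 256, 256>>>(d_parts, nLocal, d_rho);
}
```

Domain decomposition instead partitions the grid itself, trading the global reduction for neighbor exchanges and particle migration, which pays off at larger problem sizes, matching the trade-off the authors report.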
Keywords: particle accelerator, particle-in-cell, GPU, parallel beam dynamics simulation
8. Real-Time Scheduling Using GPUs--Advanced and More Accurate Proof of Feasibility
Authors: Peter Fodrek, L'udovit Farkas, Michal Blahol, Martin Foltin, Juraj Hn'it, Tomas Murgas. 《通讯和计算机(中英文版)》, 2012, Issue 8, pp. 863–871 (9 pages)
Keywords: real-time scheduling, GPU, graphics processing unit, DDR memory, proof, evaluation report, scheduling subsystem, Linux
9. PELLR: A Permutated ELLPACK-R Format for SpMV on GPUs
Authors: Zhiqi Wang, Tongxiang Gu. Journal of Computer and Communications, 2020, Issue 4, pp. 44–58 (15 pages)
Sparse matrix-vector multiplication (SpMV) is inevitable in almost all kinds of scientific computation, such as iterative methods for solving linear systems and eigenvalue problems. With the emergence and development of Graphics Processing Units (GPUs), highly efficient formats for SpMV need to be constructed, since SpMV performance is mainly determined by the storage format of the sparse matrix. Based on the idea of the JAD format, this paper improves the ELLPACK-R format by reducing the waiting time between different threads in a warp, achieving a speedup of about 1.5 in our experimental results. Compared with other formats, such as CSR, ELL, and BiELL, our format gives the best SpMV performance on over 70 percent of the test matrices. We also propose a parameter-based method to analyze the performance impact of the different formats, and construct a formula to count the computation and the number of iterations.
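For reference, the ELLPACK-R baseline that PELLR permutes can be sketched in a few lines: values and column indices are stored column-major with zero padding, and a per-row length array lets each thread stop at its row's true length. A minimal sketch (the row permutation itself, which groups rows of similar length so a warp's threads finish together, is omitted):

```cuda
// Minimal ELLPACK-R SpMV: vals/cols are column-major, rows zero-padded to
// maxNzPerRow, and rowLen[] lets each thread exit its loop early.
__global__ void spmv_ellr(int nRows, int maxNzPerRow, const float* vals,
                          const int* cols, const int* rowLen,
                          const float* x, float* y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= nRows) return;
    float sum = 0.0f;
    for (int k = 0; k < rowLen[row]; ++k) {
        int idx = k * nRows + row;          // column-major: coalesced loads
        sum += vals[idx] * x[cols[idx]];
    }
    y[row] = sum;
}
```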
Keywords: SpMV, GPU, storage format, high performance
10. Acceleration of Points to Convex Region Correspondence Pose Estimation Algorithm on GPUs for Real-Time Applications
Authors: Raghu Raj P. Kumar, Suresh S. Muknahallipatna, John E. McInroy. Journal of Computer and Communications, 2016, Issue 17, pp. 1–17 (18 pages)
In our previous work, a novel algorithm for robust pose estimation was presented, in which the pose was estimated from correspondences between points on the object and regions in the image. Laboratory experiments in that work showed that the accuracy of the estimated pose was over 99% for position and 84% for orientation. However, for larger objects the algorithm requires a high number of points to achieve the same accuracy, which makes it computationally intensive and thus infeasible for real-time computer vision applications. In this paper, the algorithm is parallelized to run on NVIDIA GPUs. The results indicate that even for objects having more than 2000 points, the algorithm can estimate the pose in real time for each frame of high-resolution video.
Keywords: pose estimation, parallel computing, GPU, CUDA, real-time image processing
11. Increasing Momentum-Like Factors: A Method for Reducing Training Errors on Multiple GPUs (Cited: 2)
Authors: Yu Tang, Zhigang Kan, Lujia Yin, Zhiquan Lai, Zhaoning Zhang, Linbo Qiao, Dongsheng Li. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2022, Issue 1, pp. 114–126 (13 pages)
In distributed training, increasing the batch size can improve parallelism, but it can also introduce many difficulties into the training process and cause training errors. In this work, we investigate the occurrence of training errors in theory and train ResNet-50 on CIFAR-10 using Stochastic Gradient Descent (SGD) and Adaptive moment estimation (Adam), keeping the total batch size in the parameter server constant while lowering the batch size on each Graphics Processing Unit (GPU). A new method that uses momentum to eliminate training errors in distributed training is proposed. We define a Momentum-like Factor (MF) to represent the influence of former gradients on parameter updates in each iteration. We then modify the MF values and conduct experiments to explore how different MF values influence training performance with SGD, Adam, and Nesterov accelerated gradient. Experimental results reveal that increasing MFs is a reliable method for reducing training errors in distributed training. An analysis of the convergence conditions in distributed training with a large batch size and multiple GPUs is also presented.
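The abstract does not give the paper's exact MF definition; as a reference point, classical momentum SGD already exposes the quantity an MF controls, as in the hedged form below:

```latex
% Classical momentum SGD (a reference form, not the paper's exact MF):
v_t = \mu\, v_{t-1} + g_t = \sum_{i=1}^{t} \mu^{\,t-i} g_i,
\qquad \theta_{t+1} = \theta_t - \eta\, v_t
% Each former gradient g_i enters the current update with weight \mu^{t-i},
% so increasing the momentum-like coefficient \mu increases the influence
% of former gradients on the parameter update.
```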
Keywords: multiple Graphics Processing Units (GPUs), batch size, training error, distributed training, momentum-like factors
12. Toward Cost-Effective Reservoir Simulation Solvers on GPUs (Cited: 2)
Authors: Zheng Li, Shuhong Wu, Jinchao Xu, Chensong Zhang. Advances in Applied Mathematics and Mechanics, SCIE, 2016, Issue 6, pp. 971–991 (21 pages)
In this paper, we focus on the graphics processing unit (GPU) and discuss how its architecture affects the choice of algorithm and implementation for fully implicit petroleum reservoir simulation. To obtain satisfactory performance on new many-core architectures such as GPUs, simulator developers must know a great deal about the specific hardware and spend a lot of time fine-tuning the code, so porting a large petroleum reservoir simulator to emerging hardware architectures is expensive and risky. We analyze the major components of an in-house reservoir simulator and investigate how to port them to GPUs in a cost-effective way. Preliminary numerical experiments show that our GPU-based simulator is robust and effective. More importantly, these numerical results clearly identify the main bottlenecks to obtaining ideal speedup on GPUs and possibly other many-core architectures.
Keywords: GPUs, reservoir simulation, fully implicit method
13. A survey on dynamic graph processing on GPUs: concepts, terminologies and systems
Authors: Hongru GAO, Xiaofei LIAO, Zhiyuan SHAO, Kexin LI, Jiajie CHEN, Hai JIN. Frontiers of Computer Science, SCIE EI CSCD, 2024, Issue 4, pp. 1–23 (23 pages)
Graphs, which model real-world entities as vertices and the relationships among entities as edges, have proven to be a powerful tool for describing real-world problems. In most real-world scenarios, entities and their relationships are subject to constant change; graphs that record such changes are called dynamic graphs. In recent years, the widespread application of dynamic graphs has stimulated extensive research on dynamic graph processing systems that continuously ingest graph updates and produce up-to-date analytics results. As dynamic graphs grow larger, higher performance is demanded of these systems. With their massive parallel processing power and high memory bandwidth, GPUs have become mainstream vehicles for accelerating dynamic graph processing tasks. GPU-based dynamic graph processing systems mainly address two challenges: maintaining the graph data when updates occur (i.e., graph updating) and producing analytics results in time (i.e., graph computing). In this paper, we survey GPU-based dynamic graph processing systems and review their methods for both graph updating and graph computing. To discuss existing systems comprehensively, we first introduce the terminology of dynamic graph processing and then develop a taxonomy describing the methods employed for graph updating and graph computing. In addition, we discuss the challenges and future research directions of dynamic graph processing on GPUs.
Keywords: dynamic graphs, graph processing, graph algorithms, GPUs
14. Kohn–Sham time-dependent density functional theory with Tamm–Dancoff approximation on massively parallel GPUs
Authors: Inkoo Kim, Daun Jeong, Won-Joon Son, Hyung-Jin Kim, Young Min Rhee, Yongsik Jung, Hyeonho Choi, Jinkyu Yim, Inkook Jang, Dae Sin Kim. npj Computational Materials, SCIE EI CSCD, 2023, Issue 1, pp. 1556–1567 (12 pages)
We report a high-performance multi-graphics-processing-unit (GPU) implementation of Kohn–Sham time-dependent density functional theory (TDDFT) within the Tamm–Dancoff approximation. Our algorithm, which uses multiple parallel models in tandem on massively parallel computing systems, scales optimally with material size, considerably reducing the computational wall time. A benchmark TDDFT study was performed on a green fluorescent protein complex composed of 4353 atoms with 40,518 atomic orbitals represented by Gaussian-type functions, demonstrating the effect of distant protein residues on the excitation. For the largest molecule attempted to date, to the best of our knowledge, the proposed strategy demonstrated reasonably high efficiency up to 256 GPUs on a custom-built state-of-the-art GPU computing system with NVIDIA A100 GPUs. We believe that our GPU-oriented algorithms, which enable first-principles simulation of very large-scale applications, may yield a deeper understanding of the molecular basis of material behavior, eventually revealing new possibilities for breakthrough designs of new material systems.
Keywords: GPUs, graphics, massive
15. GPU-accelerated Monte Carlo method for dose calculation of mesh-type computational phantoms
Authors: Shu-Chang Yan, Rui Qiu, Xi-Yu Luo, An-Kang Hu, Zhen Wu, Jun-Li Li. Nuclear Science and Techniques, 2026, Issue 1, pp. 297–308 (12 pages)
Computational phantoms play an essential role in radiation dosimetry and health physics. Although mesh-type phantoms offer high resolution and adjustability, their use in dose calculations is limited by slow computational speed. Progress in heterogeneous computing has allowed substantial acceleration of mesh-type phantom computations by using hardware accelerators. In this study, a GPU-accelerated Monte Carlo method was developed to speed up dose calculation for mesh-type computational phantoms, designing and implementing the entire procedural flow of a GPU-accelerated Monte Carlo program. We employed acceleration structures to process the mesh-type phantom, optimized the traversal methodology, and used a flattened structure to overcome the limitations of GPU stack depths. Particle transport methods were realized within the mesh-type phantom, encompassing particle location and intersection techniques. For typical external irradiation scenarios, we used Geant4 together with the GPU program and its CPU serial code for dose calculations, assessing both computational accuracy and efficiency. Compared with the benchmark simulated using single-threaded Geant4 on the CPU, the relative differences in organ dose calculated by the GPU program predominantly lay within a 5% margin, while the computational time was reduced by a factor of 120 to 2700. To the best of our knowledge, this study is the first to achieve GPU-accelerated dose calculation for mesh-type phantoms, reducing the computational time from hours to seconds per simulation of ten million particles and offering a fast and precise Monte Carlo dose calculation method for mesh-type computational phantoms.
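The "flattened structure" used to sidestep GPU stack-depth limits is typically an iterative traversal over a linearized node array with a small explicit stack in place of recursion. A hedged CUDA sketch (the node layout and helper are illustrative assumptions, not the paper's data structures):

```cuda
// Linearized BVH node over the phantom's mesh; left < 0 marks a leaf.
struct BvhNode {
    float3 bmin, bmax;        // axis-aligned bounding box
    int left, right;          // child indices into the flat node array
    int triStart, triCount;   // triangle range for leaves
};

__device__ bool rayBoxHit(float3 o, float3 inv, float3 bmin, float3 bmax) {
    // Slab test; inv holds the precomputed reciprocal ray direction.
    float t1 = (bmin.x - o.x) * inv.x, t2 = (bmax.x - o.x) * inv.x;
    float tmin = fminf(t1, t2), tmax = fmaxf(t1, t2);
    t1 = (bmin.y - o.y) * inv.y; t2 = (bmax.y - o.y) * inv.y;
    tmin = fmaxf(tmin, fminf(t1, t2)); tmax = fminf(tmax, fmaxf(t1, t2));
    t1 = (bmin.z - o.z) * inv.z; t2 = (bmax.z - o.z) * inv.z;
    tmin = fmaxf(tmin, fminf(t1, t2)); tmax = fminf(tmax, fmaxf(t1, t2));
    return tmax >= fmaxf(tmin, 0.0f);
}

// Iterative traversal: an explicit index stack replaces recursion, so the
// limited per-thread hardware stack depth is never a constraint.
__device__ void traverse(const BvhNode* nodes, float3 org, float3 invDir,
                         int* hitTris, int* nHits) {
    int stack[64];
    int top = 0;
    stack[top++] = 0;                        // start at the root
    while (top > 0) {
        const BvhNode n = nodes[stack[--top]];
        if (!rayBoxHit(org, invDir, n.bmin, n.bmax)) continue;
        if (n.left < 0) {                    // leaf: record candidate triangles
            for (int t = 0; t < n.triCount; ++t)
                hitTris[(*nHits)++] = n.triStart + t;
        } else {                             // interior: push both children
            stack[top++] = n.left;
            stack[top++] = n.right;
        }
    }
}
```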
Keywords: GPU, Monte Carlo, mesh-type phantom, external exposure, heterogeneous computing
16. CUDA-based GPU-only computation for efficient tracking simulation of single and multi-bunch collective effects
Authors: Keon Hee Kim, Eun-San Kim. Nuclear Science and Techniques, 2026, Issue 1, pp. 61–79 (19 pages)
Beam-tracking simulations have been extensively utilized to study collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, often necessitating a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop PC. However, frequent CPU-GPU interactions during tracking, including data transfers and synchronization operations, can introduce communication overheads that reduce the overall effectiveness of GPU-based computation. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA) port of MBTRACK2, which enables efficient tracking of single- and multi-bunch collective effects through fully GPU-resident computation.
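In outline, the GPU-only pattern the abstract argues for looks like the host-side sketch below: bunch data is allocated on the device once, every lattice element is a kernel, and nothing is copied back inside the turn loop (kernel names are illustrative, not MBTRACK2-CUDA's actual API):

```cuda
#include <cuda_runtime.h>

// Illustrative per-element kernels acting on all bunches' macro-particles.
__global__ void rf_kick(double* coords, int n) { /* ... */ }
__global__ void transport(double* coords, int n) { /* ... */ }
__global__ void wakefield(double* coords, int n) { /* ... */ }

void track(double* h_coords, int n, int nTurns) {
    double* d;
    cudaMalloc(&d, 6ull * n * sizeof(double));     // 6D phase-space coordinates
    cudaMemcpy(d, h_coords, 6ull * n * sizeof(double), cudaMemcpyHostToDevice);
    int grid = (n + 255) / 256;
    for (int turn = 0; turn < nTurns; ++turn) {    // fully GPU-resident loop:
        rf_kick<<<grid, 256>>>(d, n);              // no host transfers, and
        transport<<<grid, 256>>>(d, n);            // same-stream ordering means
        wakefield<<<grid, 256>>>(d, n);            // no explicit synchronization
    }
    cudaMemcpy(h_coords, d, 6ull * n * sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d);
}
```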
Keywords: code development, GPU computing, collective effects
17. Frontier Science and Technology
《今日科技》, 2026, Issue 1, pp. 58–59 (2 pages)
AI innovation highlight: Fei-Fei Li's team releases the real-time 3D world model RTFM. Fei-Fei Li's team at Stanford University recently released RTFM (Real-Time Frame Model), a real-time generative world model. The model adopts an autoregressive diffusion Transformer architecture and can render persistent, 3D-consistent virtual worlds in real time on a single H100 GPU. By learning end-to-end from large-scale video data, RTFM needs no explicit 3D representation: from 2D image input alone it generates high-quality scenes from new viewpoints and successfully simulates complex physical phenomena such as geometry, reflections, and shadows, marking a major breakthrough for generative AI in three-dimensional spatial understanding and real-time rendering.
Keywords: real-time 3D world model, H100 GPU, RTFM, end-to-end learning
18. Molecular dynamics simulation of complex multiphase flow on a computer cluster with GPUs (Cited: 9)
Authors: CHEN FeiGuo, GE Wei, LI JingHai. Science China Chemistry, SCIE EI CAS, 2009, Issue 3, pp. 372–380 (9 pages)
The Compute Unified Device Architecture (CUDA) was used to design and implement molecular dynamics (MD) simulations on graphics processing units (GPUs). With an NVIDIA Tesla C870, a 20-60 fold speedup over one core of an Intel Xeon 5430 CPU was achieved, reaching up to 150 Gflops. MD simulation of cavity flow and of particle-bubble interaction in liquid was implemented on multiple GPUs using the Message Passing Interface (MPI). Up to 200 GPUs were tested on a special network topology, achieving good scalability. The capability of GPU clusters for large-scale molecular dynamics simulation of meso-scale flow behavior was thereby demonstrated.
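For scale, the core of such an MD code is the force kernel. A textbook all-pairs Lennard-Jones sketch with one thread per particle (not the paper's optimized implementation, which would add neighbor lists, shared-memory tiling, and so on):

```cuda
// All-pairs Lennard-Jones forces: U(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6),
// so the force magnitude over r is 24*eps*(2*s6^2 - s6)/r^2 with s6=(sigma/r)^6.
__global__ void lj_forces(const float4* pos, float4* force, int n,
                          float eps, float sigma2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float3 f = make_float3(0.f, 0.f, 0.f);
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pos[i].x - pos[j].x;
        float dy = pos[i].y - pos[j].y;
        float dz = pos[i].z - pos[j].z;
        float r2 = dx * dx + dy * dy + dz * dz;
        float s2 = sigma2 / r2;                  // (sigma/r)^2
        float s6 = s2 * s2 * s2;                 // (sigma/r)^6
        float fr = 24.f * eps * s6 * (2.f * s6 - 1.f) / r2;
        f.x += fr * dx; f.y += fr * dy; f.z += fr * dz;
    }
    force[i] = make_float4(f.x, f.y, f.z, 0.f);
}
```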
Keywords: multiphase flow, molecular dynamics, CUDA, GPU, parallel computing
19. MPFFT: An Auto-Tuning FFT Library for OpenCL GPUs (Cited: 10)
Authors: Yan Li, Yun-Quan Zhang, Yi-Qun Liu, Guo-Ping Long, Hai-Peng Jia. Journal of Computer Science & Technology, SCIE EI CSCD, 2013, Issue 1, pp. 90–105 (16 pages)
Fourier methods have revolutionized many fields of science and engineering, such as astronomy, medical imaging, seismology, and spectroscopy, and the fast Fourier transform (FFT) is a computationally efficient method of computing a Fourier transform. The emerging class of high-performance computing architectures, such as the GPU, seeks to achieve much higher performance and efficiency by exposing a hierarchy of distinct memories to software, but the complexity of GPU programming poses a significant challenge to developers. In this paper, we propose an automatic performance-tuning framework for FFT on various OpenCL GPUs and implement a high-performance library named MPFFT based on this framework. For power-of-two length FFTs, our library substantially outperforms the clAmdFft library on AMD GPUs and achieves performance comparable to the CUFFT library on NVIDIA GPUs. Furthermore, our library also supports non-power-of-two sizes: for 3D non-power-of-two FFTs, it runs 1.5x to 28x faster than FFTW with 4 threads and delivers a 20.01x average speedup over CUFFT 4.0 on a Tesla C2050.
Keywords: fast Fourier transform, GPU, OpenCL, auto-tuning
20. Parallel algorithm for real-time contouring from grid DEM on modern GPUs (Cited: 3)
Authors: CHEN Zhuo, SHEN Lei, ZHAO YanQing, YANG ChongJun (State Key Laboratory of Remote Sensing Science, jointly sponsored by the Institute of Remote Sensing Applications, Chinese Academy of Sciences, and Beijing Normal University, Beijing 100101, China). Science China (Technological Sciences), SCIE EI CAS, 2010, Issue S1, pp. 33–37 (5 pages)
A real-time algorithm for constructing contour maps from grid DEM data is presented. It runs entirely within the programmable 3D visualization pipeline: the interpolation is parallelized by the rasterizer units of the graphics card, and contour-line extraction is parallelized by the pixel shader. During each rendered frame, we first build an elevation gradient map from the original terrain vertex data, then extract the final contour lines with image-space processing, and directly blend the result onto the original scene with alpha-blending to obtain the final scene with a contour map. We implemented this method in our global 3D digital-earth system with the Direct3D 9.0c API and tested it on consumer-level PC platforms. For an arbitrary scene at a given LOD level, the process takes less than 10 ms, producing topologically correct, anti-aliased contour lines.
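The image-space extraction step is easy to state precisely: classify each pixel's elevation into a contour band and mark pixels where the band index changes. The authors do this in a pixel shader; the same idea as a CUDA kernel for comparison (names and the fixed contour interval are assumptions):

```cuda
// One thread per pixel: mark a contour wherever the elevation band index
// differs from the right or lower neighbor (border pixels left untouched).
__global__ void contour(const float* elev, unsigned char* out,
                        int w, int h, float interval) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w - 1 || y >= h - 1) return;
    int band  = (int)floorf(elev[y * w + x] / interval);
    int bandR = (int)floorf(elev[y * w + x + 1] / interval);
    int bandD = (int)floorf(elev[(y + 1) * w + x] / interval);
    out[y * w + x] = (band != bandR || band != bandD) ? 255 : 0;
}
```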
Keywords: contour map, contour lines, parallel computing, 3D terrain, GPU, digital earth