Journal Articles
3,874 articles found
1. Research on a Cardiac Computer-Aided Diagnosis System Based on GPU Visualization Technology
Authors: Chen Yuke, Wu Xiaoming, Yang Rongqian, Ou Shanxing, Zheng Lihua. 《医疗卫生装备》, CAS, 2011(10): 16-18.
Objective: To achieve accurate segmentation and 3D visualization of cardiac tomographic images on GPUs and complete the design of a cardiac computer-aided diagnosis system. Methods: Combining clinical experts' diagnostic experience, prior features of cardiac CT images, and image segmentation algorithm models, GPU parallel data processing was used to segment cardiac structures and visualize them in 3D. Results: Accurate, fast, and robust segmentation and 3D visualization of cardiac CT image sequences were accomplished, and a GPU-based visualization cardiac computer-aided diagnosis system was preliminarily realized. Conclusion: The study makes full use of the powerful parallel computing capability of the graphics processing unit (GPU) to solve problems in medical image processing and segmentation, improving program efficiency and the user experience.
Keywords: expert system; heart; dual-source CT; CUDA; GPUs
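As an illustration of the GPU-parallel step described above, here is a minimal CUDA sketch of per-pixel intensity-window segmentation of one CT slice. The Hounsfield window, image size, and kernel name are illustrative assumptions, not the parameters or code of the paper, whose method also incorporates expert knowledge and prior features.

```cuda
// Minimal sketch: per-pixel Hounsfield-window thresholding of one CT slice.
// Window bounds and slice size are illustrative assumptions.
#include <cuda_runtime.h>

__global__ void thresholdSegment(const short* hu, unsigned char* mask,
                                 int width, int height, short lo, short hi)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    int idx = y * width + x;
    // Mark voxels whose HU value falls inside the chosen tissue window.
    mask[idx] = (hu[idx] >= lo && hu[idx] <= hi) ? 255 : 0;
}

int main()
{
    const int W = 512, H = 512;              // typical CT slice size
    short* d_hu; unsigned char* d_mask;
    cudaMalloc(&d_hu, W * H * sizeof(short));
    cudaMalloc(&d_mask, W * H);
    cudaMemset(d_hu, 0, W * H * sizeof(short));

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    thresholdSegment<<<grid, block>>>(d_hu, d_mask, W, H, 0, 400);
    cudaDeviceSynchronize();

    cudaFree(d_hu); cudaFree(d_mask);
    return 0;
}
```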
2. Efficient Concurrent L1-Minimization Solvers on GPUs (Cited by 1)
Authors: Xinyue Chu, Jiaquan Gao, Bo Sheng. Computer Systems Science & Engineering, SCIE/EI, 2021(9): 305-320.
Given that the concurrent L1-minimization (L1-min) problem is often required in some real applications, we investigate how to solve it in parallel on GPUs in this paper. First, we propose a novel self-adaptive warp implementation of the matrix-vector multiplication (Ax) and a novel self-adaptive thread implementation of the matrix-vector multiplication (ATx), respectively, on the GPU. The vector-operation and inner-product decision trees are adopted to choose the optimal vector-operation and inner-product kernels for vectors of any size. Second, based on the above proposed kernels, the iterative shrinkage-thresholding algorithm is utilized to present two concurrent L1-min solvers from the perspective of the streams and the thread blocks on a GPU, and optimize their performance by using new features of the GPU such as the shuffle instruction and the read-only data cache. Finally, we design a concurrent L1-min solver on multiple GPUs. The experimental results have validated the high effectiveness and good performance of our proposed methods.
Keywords: concurrent L1-minimization problem; dense matrix-vector multiplication; fast iterative shrinkage-thresholding algorithm; CUDA; GPUs
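The abstract mentions warp-level Ax kernels and the shuffle instruction. A minimal sketch of a warp-per-row dense y = Ax kernel with a shuffle-based row reduction appears below; the fixed one-warp-per-row mapping is an assumption, whereas the paper's version adapts the mapping to the matrix size.

```cuda
// Sketch: warp-per-row dense matrix-vector multiply, reduced with warp
// shuffles (no shared memory). Mapping and names are illustrative.
__global__ void warpMatVec(const float* A, const float* x, float* y,
                           int rows, int cols)
{
    int warpId = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
    int lane   = threadIdx.x & 31;
    if (warpId >= rows) return;

    // Each lane accumulates a strided partial dot product of row warpId.
    float sum = 0.0f;
    for (int j = lane; j < cols; j += 32)
        sum += A[(size_t)warpId * cols + j] * x[j];

    // Warp-wide reduction via the shuffle instruction.
    for (int off = 16; off > 0; off >>= 1)
        sum += __shfl_down_sync(0xffffffff, sum, off);

    if (lane == 0) y[warpId] = sum;
}
```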
3. Accelerating the discontinuous Galerkin method for seismic wave propagation simulations using multiple GPUs with CUDA and MPI (Cited by 3)
Authors: Dawei Mu, Po Chen, Liqiang Wang. Earthquake Science, 2013(6): 377-393.
We have successfully ported an arbitrary high-order discontinuous Galerkin method for solving the three-dimensional isotropic elastic wave equation on unstructured tetrahedral meshes to multiple Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) of NVIDIA and the Message Passing Interface (MPI), and obtained a speedup factor of about 28.3 for the single-precision version of our codes and a speedup factor of about 14.9 for the double-precision version. The GPU used in the comparisons is an NVIDIA Tesla C2070 Fermi, and the CPU used is an Intel Xeon W5660. To effectively overlap inter-process communication with computation, we separate the elements on each subdomain into inner and outer elements, and complete the computation on outer elements and fill the MPI buffer first. While the MPI messages travel across the network, the GPU performs computation on inner elements and all other calculations that do not use information of outer elements from neighboring subdomains. A significant portion of the speedup also comes from a customized matrix-matrix multiplication kernel, which is used extensively throughout our program. Preliminary performance analysis of our parallel GPU codes shows favorable strong and weak scalabilities.
Keywords: seismic wave propagation; discontinuous Galerkin method; GPU
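The inner/outer element split described above maps naturally onto two CUDA streams plus an MPI halo exchange. Below is a structural sketch of that overlap under simplifying assumptions: a placeholder element kernel, a single neighboring rank, and pinned host buffers; all names are illustrative, not taken from the paper's code.

```cuda
// Structural sketch of communication/computation overlap: outer (halo)
// elements are computed and staged for MPI first, inner elements run on a
// second stream while messages travel. h_sendBuf/h_recvBuf are assumed to
// be pinned (cudaHostAlloc) so the async copies truly overlap.
#include <mpi.h>
#include <cuda_runtime.h>

__global__ void computeElements(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 2.0f * in[i];          // placeholder element update
}

void timeStep(const float* d_in, float* d_out, float* h_sendBuf,
              float* h_recvBuf, int nOuter, int nInner, int nbrRank,
              cudaStream_t sOuter, cudaStream_t sInner)
{
    // 1) Outer elements first, on their own stream, then stage to host.
    computeElements<<<(nOuter + 255) / 256, 256, 0, sOuter>>>(d_in, d_out, nOuter);
    cudaMemcpyAsync(h_sendBuf, d_out, nOuter * sizeof(float),
                    cudaMemcpyDeviceToHost, sOuter);

    // 2) Inner elements run concurrently on the second stream.
    computeElements<<<(nInner + 255) / 256, 256, 0, sInner>>>(
        d_in + nOuter, d_out + nOuter, nInner);

    // 3) Exchange halos while the inner kernel is still in flight.
    cudaStreamSynchronize(sOuter);
    MPI_Sendrecv(h_sendBuf, nOuter, MPI_FLOAT, nbrRank, 0,
                 h_recvBuf, nOuter, MPI_FLOAT, nbrRank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    cudaStreamSynchronize(sInner);
}
```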
4. An Approach to Parallelization of SIFT Algorithm on GPUs for Real-Time Applications (Cited by 4)
Authors: Raghu Raj Prasanna Kumar, Suresh Muknahallipatna, John McInroy. Journal of Computer and Communications, 2016(17): 18-50.
The Scale Invariant Feature Transform (SIFT) algorithm is a widely used computer vision algorithm that detects and extracts local feature descriptors from images. SIFT is computationally intensive, making it infeasible for a single-threaded implementation to extract local feature descriptors for high-resolution images in real time. In this paper, an approach to parallelization of the SIFT algorithm is demonstrated using NVIDIA's Graphics Processing Unit (GPU). The parallelization design for SIFT on GPUs is divided into two stages: a) algorithm design, generic design strategies which focus on data, and b) implementation design, architecture-specific design strategies which focus on optimally using GPU resources for maximum occupancy. Increasing memory latency hiding, eliminating branches, and data blocking achieve a significant decrease in average computational time. Furthermore, it is observed via Paraver tools that our approach to parallelization, while optimizing for maximum occupancy, allows the GPU to execute the memory-bound SIFT algorithm at optimal levels.
Keywords: Scale Invariant Feature Transform (SIFT); parallel computing; GPU; GPU occupancy; portable parallel programming; CUDA
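One of the implementation-design points above, branch elimination, can be illustrated with a tiny sketch: a divergent conditional is replaced by branch-free arithmetic so all lanes of a warp follow the same path. The kernel itself (a simple thresholding stand-in, not a SIFT stage) and its names are invented for illustration, and modern compilers may already predicate such short ternaries.

```cuda
// Sketch of branch elimination: the predicate is folded into arithmetic
// instead of a divergent if/else. Kernel and parameters are illustrative.
__global__ void branchlessThreshold(const float* in, float* out, int n,
                                    float thresh)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Branched form (potentially divergent):
    //     out[i] = (in[i] > thresh) ? in[i] : 0.0f;
    // Branch-free form: predicate evaluates to 0.0f or 1.0f on every lane.
    float keep = (float)(in[i] > thresh);
    out[i] = keep * in[i];
}
```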
5. Performance Prediction Based on Statistics of Sparse Matrix-Vector Multiplication on GPUs (Cited by 1)
Authors: Ruixing Wang, Tongxiang Gu, Ming Li. Journal of Computer and Communications, 2017(6): 65-83.
As one of the most essential and important operations in linear algebra, the performance prediction of sparse matrix-vector multiplication (SpMV) on GPUs has received more and more attention in recent years. In 2012, Guo and Wang put forward a new idea to predict the performance of SpMV on GPUs. However, they didn't consider the matrix structure completely, so the execution time predicted by their model tends to be inaccurate for general sparse matrices. To address this problem, we propose two new similar models, which take into account the structure of the matrices and make the performance prediction model more accurate. In addition, we predict the execution time of SpMV for the CSR-V, CSR-S, ELL and JAD sparse matrix storage formats with the new models on the CUDA platform. Our experimental results show that the accuracy of prediction by our models is 1.69 times better than Guo and Wang's model on average for most general matrices.
Keywords: sparse matrix-vector multiplication; performance prediction; GPU; normal distribution; uniform distribution
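For reference, the baseline whose runtime such models estimate from row-length statistics is the textbook one-thread-per-row CSR SpMV kernel, sketched below; this is the standard formulation, not the paper's exact CSR-V/CSR-S code.

```cuda
// Baseline one-thread-per-row CSR SpMV kernel. Runtime is dominated by the
// irregular gathers from x[], which is why row-length statistics matter.
__global__ void spmvCsrScalar(const int* rowPtr, const int* colIdx,
                              const float* val, const float* x,
                              float* y, int rows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;
    float sum = 0.0f;
    for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
        sum += val[k] * x[colIdx[k]];
    y[row] = sum;
}
```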
6. A Fuzzy Neural Network Based Dynamic Data Allocation Model on Heterogeneous Multi-GPUs for Large-scale Computations
Authors: Chao-Long Zhang, Yuan-Ping Xu, Zhi-Jie Xu, Jia He, Jing Wang, Jian-Hua Adu. International Journal of Automation and Computing, EI/CSCD, 2018(2): 181-193.
The parallel computation capabilities of modern graphics processing units (GPUs) have attracted increasing attention from researchers and engineers who have been conducting high computational throughput studies. However, current single-GPU based engineering solutions are often struggling to fulfill their real-time requirements. Thus, the multi-GPU-based approach has become a popular and cost-effective choice for tackling the demands. In those cases, the computational load balancing over multiple GPU "nodes" is often the key bottleneck that affects the quality and performance of the real-time system. The existing load balancing approaches are mainly based on the assumption that all GPU nodes in the same computer framework are of equal computational performance, which is often not the case due to cluster design and other legacy issues. This paper presents a novel dynamic load balancing (DLB) model for rapid data division and allocation on heterogeneous GPU nodes based on an innovative fuzzy neural network (FNN). In this research, a 5-state parameter feedback mechanism defining the overall cluster and node performance is proposed. The corresponding FNN-based DLB model will be capable of monitoring and predicting individual node performance under different workload scenarios. A real-time adaptive scheduler has been devised to reorganize the data inputs to each node when necessary to maintain their runtime computational performance. The devised model has been implemented on two-dimensional (2D) discrete wavelet transform (DWT) applications for evaluation. Experiment results show that this DLB model enables a high computational throughput while ensuring real-time and precision requirements from complex computational tasks.
Keywords: heterogeneous GPU cluster; dynamic load balancing; fuzzy neural network; adaptive scheduler; discrete wavelet transform
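The core of any such scheme is dividing work in proportion to each node's predicted throughput. A minimal host-side sketch follows; the paper drives the split with an FNN predictor and a 5-state feedback mechanism, whereas here a plain per-node rate estimate stands in for it, and all names are assumptions.

```cuda
// Host-side sketch: divide `total` work items across heterogeneous GPU
// nodes in proportion to each node's predicted processing rate.
#include <vector>

std::vector<int> divideWork(const std::vector<double>& rate, int total)
{
    double sum = 0.0;
    for (double r : rate) sum += r;

    std::vector<int> share(rate.size());
    int assigned = 0;
    for (size_t i = 0; i + 1 < rate.size(); ++i) {
        share[i] = (int)(total * rate[i] / sum);  // proportional slice
        assigned += share[i];
    }
    share.back() = total - assigned;              // remainder to last node
    return share;
}
```

In a running system the rates would be refreshed each cycle from measured kernel times, so the split adapts as node performance drifts.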
7. Implementation of a Particle Accelerator Beam Dynamics Code on Multi-Node GPUs
Authors: Zhicong Liu, Ji Qiang. Journal of Software Engineering and Applications, 2019(9): 321-338.
Particle accelerators play an important role in a wide range of scientific discoveries and industrial applications. The self-consistent multi-particle simulation based on the particle-in-cell (PIC) method has been used to study charged particle beam dynamics inside those accelerators. However, the PIC simulation is time-consuming and needs to use modern parallel computers for high-resolution applications. In this paper, we implemented a parallel beam dynamics PIC code on multi-node hybrid architecture computers with multiple Graphics Processing Units (GPUs). We used two methods to parallelize the PIC code on multiple GPUs and observed that the replication method is a better choice for moderate problem size and current computer hardware, while the domain decomposition method might be a better choice for large problem size and more advanced computer hardware that allows direct communications among multiple GPUs. Using the multi-node hybrid architectures at the Oak Ridge Leadership Computing Facility (OLCF), the optimized GPU PIC code achieves a reasonable parallel performance and scales up to 64 GPUs with 16 million particles.
Keywords: particle accelerator; particle-in-cell; GPU; parallel; beam dynamics simulation
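The replication method mentioned above can be sketched as follows: every rank keeps a full copy of the field grid, pushes only its own particle slice, and the partial charge depositions are summed across ranks each step. The function and buffer names are assumptions, and the deposition/push kernels are left as commented placeholders for the real PIC operators.

```cuda
// Sketch of the replication strategy for one multi-GPU PIC step.
#include <mpi.h>
#include <cuda_runtime.h>

void replicatedPicStep(float* d_rho, float* h_rho, int gridSize,
                       cudaStream_t s)
{
    // depositCharge<<<...>>>(d_rho, myParticleSlice...);  // local deposit
    cudaMemcpyAsync(h_rho, d_rho, gridSize * sizeof(float),
                    cudaMemcpyDeviceToHost, s);
    cudaStreamSynchronize(s);

    // Sum partial charge grids from all ranks into every rank's copy.
    MPI_Allreduce(MPI_IN_PLACE, h_rho, gridSize, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);

    cudaMemcpyAsync(d_rho, h_rho, gridSize * sizeof(float),
                    cudaMemcpyHostToDevice, s);
    // solveFields<<<...>>>(...);  pushParticles<<<...>>>(...);
}
```

This trades an all-reduce of the grid for the halo bookkeeping that domain decomposition requires, which matches the abstract's observation that replication wins at moderate problem sizes.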
8. Real-Time Scheduling Using GPUs--Advanced and More Accurate Proof of Feasibility
Authors: Peter Fodrek, L'udovit Farkas, Michal Blahol, Martin Foltin, Juraj Hn'it, Tomas Murgas. 《通讯和计算机(中英文版)》, 2012(8): 863-871.
Keywords: real-time scheduling; GPU; graphics processing unit; DDR memory; proof; evaluation report; scheduling subsystem; Linux
9. PELLR: A Permutated ELLPACK-R Format for SpMV on GPUs
Authors: Zhiqi Wang, Tongxiang Gu. Journal of Computer and Communications, 2020(4): 44-58.
The sparse matrix-vector multiplication (SpMV) is inevitable in almost all kinds of scientific computation, such as iterative methods for solving linear systems and eigenvalue problems. With the emergence and development of Graphics Processing Units (GPUs), highly efficient formats for SpMV should be constructed. The performance of SpMV is mainly determined by the storage format of the sparse matrix. Based on the idea of the JAD format, this paper improves the ELLPACK-R format and reduces the waiting time between different threads in a warp; a speedup of about 1.5 was achieved in our experimental results. Compared with other formats, such as CSR, ELL, BiELL and so on, our format's SpMV performance is optimal for over 70 percent of the test matrices. We propose a method based on parameters to analyze the performance impact of different formats. In addition, a formula is constructed to count the computation and the number of iterations.
Keywords: SpMV; GPU; storage format; high performance
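For context, the underlying ELLPACK-R kernel stores values column-major with a per-row length array so each thread can stop at its own last nonzero; a standard sketch follows. The paper's PELLR additionally permutes rows by length to cut intra-warp waiting, which is omitted here.

```cuda
// ELLPACK-R SpMV: column-major, zero-padded storage plus a rowLen array
// that lets each thread exit at its row's true length.
__global__ void spmvEllR(const float* val, const int* colIdx,
                         const int* rowLen, const float* x,
                         float* y, int rows)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;
    float sum = 0.0f;
    for (int k = 0; k < rowLen[row]; ++k) {
        // Column-major layout: entry k of `row` lives at k*rows + row,
        // giving coalesced loads across a warp.
        int idx = k * rows + row;
        sum += val[idx] * x[colIdx[idx]];
    }
    y[row] = sum;
}
```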
10. Acceleration of Points to Convex Region Correspondence Pose Estimation Algorithm on GPUs for Real-Time Applications
Authors: Raghu Raj P. Kumar, Suresh S. Muknahallipatna, John E. McInroy. Journal of Computer and Communications, 2016(17): 1-17.
In our previous work, a novel algorithm to perform robust pose estimation was presented. The pose was estimated using points on the object to regions on image correspondence. The laboratory experiments conducted in the previous work showed that the accuracy of the estimated pose was over 99% for position and 84% for orientation estimations respectively. However, for larger objects, the algorithm requires a high number of points to achieve the same accuracy. The requirement of a higher number of points makes the algorithm computationally intensive, rendering it infeasible for real-time computer vision applications. In this paper, the algorithm is parallelized to run on NVIDIA GPUs. The results indicate that even for objects having more than 2000 points, the algorithm can estimate the pose in real time for each frame of high-resolution videos.
Keywords: pose estimation; parallel computing; GPU; CUDA; real-time image processing
11. Increasing Momentum-Like Factors: A Method for Reducing Training Errors on Multiple GPUs (Cited by 2)
Authors: Yu Tang, Zhigang Kan, Lujia Yin, Zhiquan Lai, Zhaoning Zhang, Linbo Qiao, Dongsheng Li. Tsinghua Science and Technology, SCIE/EI/CAS/CSCD, 2022(1): 114-126.
In distributed training, increasing batch size can improve parallelism, but it can also bring many difficulties to the training process and cause training errors. In this work, we investigate the occurrence of training errors in theory and train ResNet-50 on CIFAR-10 by using Stochastic Gradient Descent (SGD) and Adaptive moment estimation (Adam) while keeping the total batch size in the parameter server constant and lowering the batch size on each Graphics Processing Unit (GPU). A new method that considers momentum to eliminate training errors in distributed training is proposed. We define a Momentum-like Factor (MF) to represent the influence of former gradients on parameter updates in each iteration. Then, we modify the MF values and conduct experiments to explore how different MF values influence the training performance based on SGD, Adam, and Nesterov accelerated gradient. Experimental results reveal that increasing MFs is a reliable method for reducing training errors in distributed training. The analysis of convergent conditions in distributed training with consideration of a large batch size and multiple GPUs is presented in this paper.
Keywords: multiple graphics processing units (GPUs); batch size; training error; distributed training; momentum-like factors
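To make the "influence of former gradients" concrete, here is the classic momentum SGD update as a CUDA kernel: the coefficient `mu` weights the gradient history, so raising it plays the role the abstract assigns to a larger MF. Equating MF with `mu` is our simplification of the abstract, and the kernel is illustrative, not the paper's code.

```cuda
// Momentum SGD parameter update: v accumulates a decaying history of
// gradients; larger mu gives former gradients more weight per update.
__global__ void sgdMomentumUpdate(float* w, float* v, const float* grad,
                                  int n, float lr, float mu)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    v[i] = mu * v[i] + grad[i];   // blend new gradient into the history
    w[i] -= lr * v[i];            // apply the smoothed step
}
```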
12. GPU Performance Modeling and Algorithm Optimization for Sparse Matrix-Vector Multiplication
Authors: Ma Chengyu, Li Suolan, Liu Yinuo, Zhao Wenzhe, Ren Pengju, Xia Tian. 《集成电路与嵌入式系统》, 2026(1): 5-11.
To address the performance bottleneck of sparse matrix-vector multiplication (SpMV) on GPU platforms, this paper proposes an optimization algorithm based on row re-splitting together with a companion performance evaluation model. Based on the quantitative mapping between matrix row length and compute-resource allocation, the method sets a dynamic threshold to split the original matrix into long-row and short-row sub-matrices, which are computed with thread-level and thread-block-level parallel strategies respectively, thereby easing the mismatch between the GPU's SIMT execution model and the irregular data distribution of sparse matrices. To quantify the extra overhead introduced by preprocessing, performance-loss models for atomic conflicts and padding are established, converting the additional memory accesses and computation into computable cost functions. On top of these models, a parameter-space search algorithm is built: with hardware performance metrics and the matrix's nonzero distribution obtained in advance, it quickly finds the optimal preprocessing parameters within the parameter set. Experimental results show that the optimized algorithm outperforms the conventional GPU sparse library cuSPARSE on a variety of typical sparse-matrix datasets, reaching speedups of 1.26x and 1.17x in some scenarios. The parameter-search overhead is low, and the method generalizes well to different input matrices and GPU architectures.
Keywords: GPU performance modeling; parallel algorithm optimization; sparse matrix; SpMV
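A common way to realize the row-splitting idea described above is one thread per short row and one whole block per long row with a shared-memory reduction; the sketch below follows that convention, which may differ in detail from the paper's model-driven assignment. The threshold-driven row lists (`shortRows`, `longRows`) are assumed to be precomputed on the host.

```cuda
// Row-split SpMV sketch: short rows, one thread each.
__global__ void spmvShortRows(const int* rowPtr, const int* colIdx,
                              const float* val, const float* x, float* y,
                              const int* shortRows, int nShort)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= nShort) return;
    int row = shortRows[t];
    float sum = 0.0f;
    for (int k = rowPtr[row]; k < rowPtr[row + 1]; ++k)
        sum += val[k] * x[colIdx[k]];
    y[row] = sum;
}

// Long rows: one block per row, reduced in shared memory.
// Launch with blockDim.x == 256 (a power of two) to match `partial`.
__global__ void spmvLongRows(const int* rowPtr, const int* colIdx,
                             const float* val, const float* x, float* y,
                             const int* longRows)
{
    __shared__ float partial[256];
    int row = longRows[blockIdx.x];
    float sum = 0.0f;
    for (int k = rowPtr[row] + threadIdx.x; k < rowPtr[row + 1];
         k += blockDim.x)
        sum += val[k] * x[colIdx[k]];
    partial[threadIdx.x] = sum;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (threadIdx.x < s) partial[threadIdx.x] += partial[threadIdx.x + s];
        __syncthreads();
    }
    if (threadIdx.x == 0) y[row] = partial[0];
}
```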
13. Toward Cost-Effective Reservoir Simulation Solvers on GPUs (Cited by 2)
Authors: Zheng Li, Shuhong Wu, Jinchao Xu, Chensong Zhang. Advances in Applied Mathematics and Mechanics, SCIE, 2016(6): 971-991.
In this paper, we focus on the graphics processing unit (GPU) and discuss how its architecture affects the choice of algorithm and implementation of fully-implicit petroleum reservoir simulation. In order to obtain satisfactory performance on new many-core architectures such as GPUs, the simulator developers must know a great deal about the specific hardware and spend a lot of time on fine-tuning the code. Porting a large petroleum reservoir simulator to emerging hardware architectures is expensive and risky. We analyze major components of an in-house reservoir simulator and investigate how to port them to GPUs in a cost-effective way. Preliminary numerical experiments show that our GPU-based simulator is robust and effective. More importantly, these numerical results clearly identify the main bottlenecks to obtaining ideal speedup on GPUs and possibly other many-core architectures.
Keywords: GPUs; reservoir simulation; fully-implicit method
14. A survey on dynamic graph processing on GPUs: concepts, terminologies and systems
Authors: Hongru GAO, Xiaofei LIAO, Zhiyuan SHAO, Kexin LI, Jiajie CHEN, Hai JIN. Frontiers of Computer Science, SCIE/EI/CSCD, 2024(4): 1-23.
Graphs, which model real-world entities as vertices and relationships among entities as edges, have proven to be a powerful tool for describing real-world problems in applications. In most real-world scenarios, entities and their relationships are subject to constant changes. Graphs that record such changes are called dynamic graphs. In recent years, the widespread application scenarios of dynamic graphs have stimulated extensive research on dynamic graph processing systems that continuously ingest graph updates and produce up-to-date graph analytics results. As the scale of dynamic graphs becomes larger, higher performance requirements are demanded of dynamic graph processing systems. With their massive parallel processing power and high memory bandwidth, GPUs have become mainstream vehicles for accelerating dynamic graph processing tasks. GPU-based dynamic graph processing systems mainly address two challenges: maintaining the graph data when updates occur (i.e., graph updating) and producing analytics results in time (i.e., graph computing). In this paper, we survey GPU-based dynamic graph processing systems and review their methods for addressing both graph updating and graph computing. To comprehensively discuss existing dynamic graph processing systems on GPUs, we first introduce the terminologies of dynamic graph processing and then develop a taxonomy to describe the methods employed for graph updating and graph computing. In addition, we discuss the challenges and future research directions of dynamic graph processing on GPUs.
Keywords: dynamic graphs; graph processing; graph algorithms; GPUs
15. Kohn–Sham time-dependent density functional theory with Tamm–Dancoff approximation on massively parallel GPUs
Authors: Inkoo Kim, Daun Jeong, Won-Joon Son, Hyung-Jin Kim, Young Min Rhee, Yongsik Jung, Hyeonho Choi, Jinkyu Yim, Inkook Jang, Dae Sin Kim. npj Computational Materials, SCIE/EI/CSCD, 2023(1): 1556-1567.
We report a high-performance multi graphics processing unit (GPU) implementation of the Kohn–Sham time-dependent density functional theory (TDDFT) within the Tamm–Dancoff approximation. Our algorithm on massively parallel computing systems, using multiple parallel models in tandem, scales optimally with material size, considerably reducing the computational wall time. A benchmark TDDFT study was performed on a green fluorescent protein complex composed of 4353 atoms with 40,518 atomic orbitals represented by Gaussian-type functions, demonstrating the effect of distant protein residues on the excitation. For the largest molecule attempted to date to the best of our knowledge, the proposed strategy demonstrated reasonably high efficiencies up to 256 GPUs on a custom-built state-of-the-art GPU computing system with Nvidia A100 GPUs. We believe that our GPU-oriented algorithms, which empower first-principles simulation for very large-scale applications, may render a deeper understanding of the molecular basis of material behaviors, eventually revealing new possibilities for breakthrough designs of new material systems.
Keywords: GPUs; graphics; massive
16. Molecular dynamics simulation of complex multiphase flow on a computer cluster with GPUs (Cited by 9)
Authors: CHEN FeiGuo, GE Wei, LI JingHai. Science China Chemistry, SCIE/EI/CAS, 2009(3): 372-380.
Compute Unified Device Architecture (CUDA) was used to design and implement molecular dynamics (MD) simulations on graphics processing units (GPUs). With an NVIDIA Tesla C870, a 20-60 fold speedup over that of one core of the Intel Xeon 5430 CPU was achieved, reaching up to 150 Gflops. MD simulation of cavity flow and particle-bubble interaction in liquid was implemented on multiple GPUs using a message passing interface (MPI). Up to 200 GPUs were tested on a special network topology, which achieves good scalability. The capability of GPU clusters for large-scale molecular dynamics simulation of meso-scale flow behavior was, therefore, uncovered.
Keywords: multiphase flow; molecular dynamics; CUDA; GPU; parallel computing
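For orientation, the inner loop of a GPU MD code is a pairwise force kernel; below is the naive O(N^2) Lennard-Jones version that early CUDA MD codes typically started from. Production codes (and presumably the paper's) use cell and neighbor lists instead, and the parameters here are illustrative.

```cuda
// Naive all-pairs Lennard-Jones forces, one thread per particle.
// F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6)/r^2 * r_vec, computed via r^2
// to avoid a sqrt. eps and sigma2 (= sigma^2) are illustrative parameters.
__global__ void ljForces(const float4* pos, float4* force, int n,
                         float eps, float sigma2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float4 pi = pos[i];
    float fx = 0.f, fy = 0.f, fz = 0.f;
    for (int j = 0; j < n; ++j) {
        if (j == i) continue;
        float dx = pi.x - pos[j].x, dy = pi.y - pos[j].y, dz = pi.z - pos[j].z;
        float r2   = dx * dx + dy * dy + dz * dz;
        float inv2 = sigma2 / r2;
        float inv6 = inv2 * inv2 * inv2;            // (sigma/r)^6
        float f    = 24.f * eps * inv6 * (2.f * inv6 - 1.f) / r2;
        fx += f * dx; fy += f * dy; fz += f * dz;
    }
    force[i] = make_float4(fx, fy, fz, 0.f);
}
```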
17. MPFFT: An Auto-Tuning FFT Library for OpenCL GPUs (Cited by 10)
Authors: Yan Li, Yun-Quan Zhang, Yi-Qun Liu, Guo-Ping Long, Hai-Peng Jia. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2013(1): 90-105.
Fourier methods have revolutionized many fields of science and engineering, such as astronomy, medical imaging, seismology and spectroscopy, and the fast Fourier transform (FFT) is a computationally efficient method of generating a Fourier transform. The emerging class of high performance computing architectures, such as GPUs, seeks to achieve much higher performance and efficiency by exposing a hierarchy of distinct memories to software. However, the complexity of GPU programming poses a significant challenge to developers. In this paper, we propose an automatic performance tuning framework for FFT on various OpenCL GPUs, and implement a high performance library named MPFFT based on this framework. For power-of-two length FFTs, our library substantially outperforms the clAmdFft library on AMD GPUs and achieves comparable performance to the CUFFT library on NVIDIA GPUs. Furthermore, our library also supports non-power-of-two sizes. For 3D non-power-of-two FFTs, our library delivers 1.5x to 28x faster performance than FFTW with 4 threads and a 20.01x average speedup over CUFFT 4.0 on Tesla C2050.
Keywords: fast Fourier transform; GPU; OpenCL; auto-tuning
18. Parallel algorithm for real-time contouring from grid DEM on modern GPUs (Cited by 3)
Authors: CHEN Zhuo, SHEN Lei, ZHAO YanQing, YANG ChongJun (State Key Laboratory of Remote Sensing Science, jointly sponsored by the Institute of Remote Sensing Applications, Chinese Academy of Sciences, and Beijing Normal University, Beijing 100101, China). Science China (Technological Sciences), SCIE/EI/CAS, 2010(S1): 33-37.
A real-time algorithm for constructing contour maps from grid DEM data is presented. It runs completely within the programmable 3D visualization pipeline. The interpolation is parallelized by rasterizer units in the graphics card, and contour line extraction is parallelized by the pixel shader. During each frame of the rendering, we first make an elevation gradient map out of the original terrain vertex data, then figure out the final contour lines with image-space processing, and directly blend the results on the original scene using alpha-blending to obtain a final scene with a contour map. We implemented this method in our global 3D digital-earth system with the Direct3D 9.0c API and tested it on some consumer-level PC platforms. For an arbitrary scene with a certain LOD level, the process takes less than 10 ms, giving topologically correct, anti-aliased contour lines.
Keywords: contour map; contour lines; parallel computing; 3D terrain; GPU; digital earth
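The image-space extraction step above can be recast from a pixel shader into a CUDA kernel for illustration: a pixel lies on a contour line when the contour index of its elevation differs from that of a neighbor. The contour interval, layout, and kernel name are illustrative assumptions, not the paper's shader code.

```cuda
// Mark contour pixels on a regular elevation grid: a level boundary passes
// between a sample and its right or lower neighbor when their contour
// indices (floor(elev / interval)) differ.
__global__ void markContours(const float* elev, unsigned char* mask,
                             int w, int h, float interval)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w - 1 || y >= h - 1) return;
    int c  = (int)floorf(elev[y * w + x] / interval);
    int cr = (int)floorf(elev[y * w + x + 1] / interval);
    int cd = (int)floorf(elev[(y + 1) * w + x] / interval);
    mask[y * w + x] = (c != cr || c != cd) ? 255 : 0;
}
```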
19. A non-uniform grid approach for high-resolution flood inundation simulation based on GPUs (Cited by 3)
Authors: Jun-hui Wang, Jing-ming Hou, Jia-hui Gong, Bing-yao Li, Bao-shan Shi, Min-peng Guo, Jian Shen, Peng Lu. Journal of Hydrodynamics, SCIE/EI/CSCD, 2021(4): 844-860.
In view of the frequent occurrence of floods due to climate change, and the fact that a large calculation domain with complex land types is required for flood simulations, this paper proposes an optimized non-uniform grid model combined with a high-resolution model based on graphics processing unit (GPU) acceleration to simulate the surface water flow process. For the grid division, the topographic gradient change is taken as the control variable and different optimization criteria are designed according to different land types. In the numerical model, the Godunov-type method is adopted for the spatial discretization, the TVD-MUSCL and Runge-Kutta methods are used to improve the model's spatial and temporal calculation accuracies, and the simulation time is reduced by leveraging GPU acceleration. The model is applied to ideal and actual case studies. The results show that the numerical model based on a non-uniform grid enjoys good stability. In the simulation of urban inundation, approximately 40%-50% of the urban average topographic gradient change to be covered is taken as the threshold for the non-uniform grid division, and the calculation efficiency and accuracy can be optimized. In this case, the calculation efficiency of the non-uniform grid based on the optimized parameters is 2-3 times that of the uniform grid, and the approach can be adopted for actual flood simulation in large-scale areas.
Keywords: non-uniform grid; high-resolution model; Godunov-type; flood simulation; graphics processing unit (GPU) acceleration
20. Parallel LDPC Decoding on GPUs Using a Stream-Based Computing Approach (Cited by 2)
Authors: Gabriel Falcao, Shinichi Yamagiwa, Vitor Silva, Leonel Sousa. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2009(5): 913-924.
Low-Density Parity-Check (LDPC) codes are powerful error correcting codes adopted by recent communication standards. LDPC decoders are based on belief propagation algorithms, which make use of a Tanner graph and very intensive message-passing computation, and usually require hardware-based dedicated solutions. With the exponential increase of the computational power of commodity graphics processing units (GPUs), new opportunities have arisen to develop general purpose processing on GPUs. This paper proposes the use of GPUs for implementing flexible and programmable LDPC decoders. A new stream-based approach is proposed, based on compact data structures to represent the Tanner graph. It is shown that such a challenging application for stream-based computing, because of irregular memory access patterns, memory bandwidth and recursive flow control constraints, can be efficiently implemented on GPUs. The proposal was experimentally evaluated by programming LDPC decoders on GPUs using the Caravela platform, a generic interface tool for managing the kernels' execution regardless of the GPU manufacturer and operating system. Moreover, to relatively assess the obtained results, we have also implemented LDPC decoders on general purpose processors with Streaming Single Instruction Multiple Data (SIMD) Extensions. Experimental results show that the solution proposed here efficiently decodes several codewords simultaneously, reducing the processing time by one order of magnitude.
Keywords: data-parallel computing; graphics processing unit (GPU); Caravela; low-density parity-check (LDPC) code; error correcting code