Journal Articles
971 articles found
CUDA-based GPU-only computation for efficient tracking simulation of single and multi-bunch collective effects
1
Authors: Keon Hee Kim, Eun-San Kim 《Nuclear Science and Techniques》 2026, No. 1, pp. 61-79 (19 pages)
Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core, or parallelizing the computation across multiple cores via the message passing interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates the use of a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging general-purpose computing on graphics processing units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation process exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
Keywords: code development; GPU computing; collective effects
An incompressible flow solver on a GPU/CPU heterogeneous architecture parallel computing platform (Cited by 1)
2
Authors: Qianqian Li, Rong Li, Zixuan Yang 《Theoretical & Applied Mechanics Letters》 CSCD 2023, No. 5, pp. 387-393 (7 pages)
A computational fluid dynamics (CFD) solver for a GPU/CPU heterogeneous-architecture parallel computing platform is developed to simulate incompressible flows on billion-level grid points. To solve the Poisson equation, the conjugate gradient method is used as the basic solver, and a Chebyshev method in combination with a Jacobi sub-preconditioner is used as the preconditioner. The developed CFD solver shows good parallel efficiency, exceeding 90% in the weak-scalability test when the number of grid points allocated to each GPU card is greater than 208³. In the acceleration test, a simulation with 1040³ grid points on 125 GPU cards runs 203.6x faster than on the same number of CPU cores. The developed solver is then tested on a two-dimensional lid-driven cavity flow and a three-dimensional Taylor-Green vortex flow. The results are consistent with previous results in the literature.
Keywords: GPU acceleration; parallel computing; Poisson equation; preconditioner
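The entry above pairs the conjugate gradient method with a Jacobi-type preconditioner for the Poisson equation. As a minimal illustrative sketch of that combination (a pure-Python serial version on a hypothetical 3x3 system, not the paper's GPU solver):

```python
# Preconditioned conjugate gradient with a Jacobi (diagonal) preconditioner.
# A must be symmetric positive definite; M^-1 = diag(A)^-1.

def pcg_jacobi(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A*x with x = 0
    inv_d = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner
    z = [inv_d[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [inv_d[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1-D Poisson-like tridiagonal system; exact solution is x = [1, 1, 1].
A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
b = [1.0, 0.0, 1.0]
x = pcg_jacobi(A, b)
```

The paper's solver additionally wraps a Chebyshev iteration around the Jacobi step as the preconditioner; the sketch keeps only the diagonal scaling for brevity.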
Study of a GPU-based parallel computing method for the Monte Carlo program (Cited by 2)
3
Authors: 罗志飞, 邱睿, 李明, 武祯, 曾志, 李君利 《Nuclear Science and Techniques》 SCIE CAS CSCD 2014, No. A01, pp. 27-30 (4 pages)
Keywords: parallel computing method; Monte Carlo program; GPU; GEANT4; simulation program; Monte Carlo method; parallel processing capability; graphics processing unit
Marine gravity gradient model computation based on a Stokes planar approximation function and GPU parallelism
4
Authors: 卜靖宇, 叶周润, 梁星辉, 刘金钊, 柳林涛, 王嘉琛 《合肥工业大学学报(自然科学版)》 (PKU Core journal) 2026, No. 2, pp. 253-259 (7 pages)
Compared with other gravity-field elements, the disturbing gravity gradient better reflects the high-frequency information produced by the varying, irregular Earth. When computing the disturbing gravity gradient, the complexity of the Stokes integral makes the integrand difficult to evaluate directly with the Newton-Leibniz formula, and the sheer volume of data makes the computation excessively time-consuming. To address these problems, this paper uses Gaussian numerical quadrature to handle the complex integrand and implements parallel computation on the graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA); the accuracy of the results is verified via the Laplace equation. Gravity anomaly data over a 3°x2° sea-surface region were selected for computation. The results show that Gaussian numerical quadrature combined with CUDA parallel computing provides accurate results while improving computational efficiency.
Keywords: disturbing gravity gradient; gravity anomaly; CUDA parallel computing; graphics processing unit (GPU); Gaussian numerical quadrature
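The entry above uses Gaussian numerical quadrature to integrate a complicated kernel that has no convenient antiderivative. A minimal sketch of the idea (a 3-point Gauss-Legendre rule on a toy integrand, standing in for the paper's Stokes kernel):

```python
import math

# 3-point Gauss-Legendre quadrature on [a, b]. The reference-interval
# nodes are ±sqrt(3/5) and 0 with weights 5/9, 8/9, 5/9; the rule is
# exact for polynomials up to degree 5.

def gauss_legendre_3(f, a, b):
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    mid, half = (a + b) / 2.0, (b - a) / 2.0   # affine map [-1,1] -> [a,b]
    return half * sum(w * f(mid + half * t) for w, t in zip(weights, nodes))

# Toy integrand (hypothetical, not the Stokes kernel):
# the integral of x^4 over [0, 1] is exactly 1/5.
approx = gauss_legendre_3(lambda x: x ** 4, 0.0, 1.0)
```

In the paper, each quadrature evaluation over the gridded gravity-anomaly data is independent, which is what makes the computation map naturally onto CUDA threads.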
Regularized focusing inversion for large-scale gravity data based on GPU parallel computing
5
Authors: WANG Haoran, DING Yidan, LI Feida, LI Jing 《Global Geology》 2019, No. 3, pp. 179-187 (9 pages)
Processing large-scale 3-D gravity data is an important topic in geophysics. Many existing inversion methods lack the capacity to process massive data and to be applied in practice. This study applies GPU parallel processing technology to the focusing inversion method, aiming to improve inversion accuracy while speeding up calculation and reducing memory consumption, thereby obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel program, optimized by reducing data transfer, access restrictions, and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up the calculation, and makes fast inversion of large models possible. Comparing the computing speed of the traditional single-threaded CPU method against the CUDA-based GPU parallel technique verifies the excellent acceleration performance of GPU parallel computing, which opens the way to practical application of theoretical inversion methods otherwise restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of a severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the model cells and inversion data more clearly depicts the boundary position of the anomalous body and delineates its specific shape.
Keywords: large-scale gravity data; GPU parallel computing; CUDA; equivalent geometric trellis; focusing inversion
A Hybrid Parallel Strategy for Isogeometric Topology Optimization via CPU/GPU Heterogeneous Computing
6
Authors: Zhaohui Xia, Baichuan Gao, Chen Yu, Haotian Han, Haobo Zhang, Shuting Wang 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 2, pp. 1103-1137 (35 pages)
This paper aims to solve large-scale and complex isogeometric topology optimization problems that consume significant computational resources. A novel isogeometric topology optimization method with a hybrid CPU/GPU parallel strategy is proposed, and the hybrid parallel strategies for stiffness matrix assembly, equation solving, sensitivity analysis, and design variable update are discussed in detail. To ensure high CPU/GPU computing efficiency, a workload-balancing strategy is presented for optimally distributing the workload between CPU and GPU. To illustrate the advantages of the proposed method, three benchmark examples are tested to verify the hybrid parallel strategy. The results show that the hybrid method is faster than both serial CPU and parallel GPU, with speedups of up to two orders of magnitude.
Keywords: topology optimization; high efficiency; isogeometric analysis; CPU/GPU parallel computing; hybrid OpenMP-CUDA
A Rayleigh Wave Globally Optimal Full Waveform Inversion Framework Based on GPU Parallel Computing
7
Authors: Zhao Le, Wei Zhang, Xin Rong, Yiming Wang, Wentao Jin, Zhengxuan Cao 《Journal of Geoscience and Environment Protection》 2023, No. 3, pp. 327-338 (12 pages)
Conventional gradient-based full waveform inversion (FWI) is a local optimization, which is highly dependent on the initial model and prone to becoming trapped in local minima. Globally optimal FWI, which can overcome this limitation, is particularly attractive but is currently limited by its huge computational cost. In this paper, we propose a globally optimal FWI framework based on GPU parallel computing, which greatly improves efficiency and is expected to make globally optimal FWI more widely used. In this framework, we simplify and recombine the model parameters and optimize the model iteratively. Each iteration contains hundreds of individuals, each independent of the others, and each individual involves forward modeling and a cost-function calculation. The framework is suitable for a variety of globally optimal algorithms, and we test it with the particle swarm optimization algorithm as an example. Both the synthetic and field examples achieve good results, indicating the effectiveness of the framework.
Keywords: full waveform inversion; finite-difference method; globally optimal framework; GPU parallel computing; particle swarm optimization
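The entry above evaluates hundreds of independent individuals per iteration with particle swarm optimization, which is why it parallelizes so well. A minimal serial sketch of the optimizer on a toy quadratic cost (a hypothetical stand-in for the paper's forward-modeling misfit, not the authors' code):

```python
import random

# Minimal particle swarm optimization. In the paper's framework each
# individual's forward modeling and cost evaluation run independently
# on the GPU; here cost() is a toy sphere function with minimum 0.

def cost(x):
    return sum(xi * xi for xi in x)

def pso(dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):      # each particle is independent
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, best_val = pso()
```

The inner loop over particles carries no cross-particle dependency within an iteration, so it maps directly to one GPU thread (or block) per individual.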
A Subdomain-Based GPU Parallel Scheme for Accelerating Peridynamics Modeling with Reduced Graphics Memory
8
Authors: Zuokun Yang, Jun Li, Xin Lai, Lisheng Liu 《Computer Modeling in Engineering & Sciences》 2026, No. 1, pp. 256-285 (30 pages)
Peridynamics (PD) demonstrates unique advantages in addressing fracture problems; however, its nonlocality and meshfree discretization result in high computational and storage costs. Moreover, in engineering applications, the computational scale of classical GPU parallel schemes is often limited by the finite graphics memory of GPU devices. In the present study, we develop an efficient particle-information management strategy based on the cell-linked list method and, on this basis, propose a subdomain-based GPU parallel scheme, which exhibits outstanding acceleration performance in specific compute kernels while significantly reducing graphics memory usage. Compared with the classical parallel scheme, the cell-linked list method facilitates efficient management of particle information within subdomains, enabling the proposed scheme to reduce graphics memory usage effectively by optimizing the size and number of subdomains while significantly improving the speed of neighbor search. As demonstrated in PD examples, the proposed scheme enhances neighbor-search efficiency dramatically and achieves a significant speedup relative to serial programs. For instance, without considering data-transfer time, it achieves a remarkable speedup of nearly 1076.8x in one test case, owing to its excellent computational efficiency in the neighbor search. Additionally, for 2D and 3D PD models with tens of millions of particles, graphics memory usage can be reduced by up to 83.6% and 85.9%, respectively. Therefore, this subdomain-based GPU parallel scheme effectively avoids graphics memory shortages while significantly improving computational efficiency, providing new insights for studying more complex large-scale problems.
Keywords: peridynamics; GPU; CUDA; parallel computing; cell-linked list
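The speedup reported above comes largely from cell-linked list neighbor search: particles are binned into cells of side equal to the interaction horizon, so each particle only tests candidates in its own cell and the adjacent cells instead of all N particles. A serial 2-D sketch of the idea the paper parallelizes on the GPU (the particle set here is hypothetical):

```python
import math
from collections import defaultdict

# Cell-linked list neighbor search in 2-D. With cell size h equal to the
# horizon, every neighbor of a particle lies in the 3x3 block of cells
# around it, reducing the pair test from O(N^2) toward O(N).

def build_cells(points, h):
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(math.floor(x / h), math.floor(y / h))].append(idx)
    return cells

def neighbors(points, h):
    cells = build_cells(points, h)
    result = {i: [] for i in range(len(points))}
    for (cx, cy), members in cells.items():
        for i in members:
            xi, yi = points[i]
            for dx in (-1, 0, 1):           # scan the 3x3 cell block
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), ()):
                        if j != i and (points[j][0] - xi) ** 2 + \
                                      (points[j][1] - yi) ** 2 <= h * h:
                            result[i].append(j)
    return result

# Particles 0, 1, 3 cluster together; particle 2 is isolated.
pts = [(0.0, 0.0), (0.5, 0.0), (2.5, 2.5), (0.0, 0.9)]
nbrs = neighbors(pts, h=1.0)
```

In the paper's subdomain scheme the same binning also bounds which particles each subdomain must keep resident, which is what cuts the graphics-memory footprint.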
Enhancing SS-OCT 3D image reconstruction: A real-time system with stripe artifact suppression and GPU parallel acceleration
9
Author: Dandan LIU 《虚拟现实与智能硬件(中英文)》 (Virtual Reality & Intelligent Hardware) 2026, No. 1, pp. 115-130 (16 pages)
Optical coherence tomography (OCT), particularly swept-source OCT, is widely employed in medical diagnostics and industrial inspection owing to its high-resolution imaging capabilities. However, swept-source OCT 3D imaging often suffers from stripe artifacts caused by unstable light sources, system noise, and environmental interference, posing challenges for the real-time processing of large-scale datasets. To address this issue, this study introduces a real-time reconstruction system that integrates stripe-artifact suppression with parallel computing on a graphics processing unit. The approach employs a frequency-domain filtering algorithm with adaptive suppression parameters, dynamically adjusted through an image-quality evaluation function and optimized using a convolutional neural network for complex frequency-domain feature learning. Additionally, a GPU-integrated 3D reconstruction framework is developed, enhancing data-processing throughput and real-time performance via a dual-queue decoupling mechanism. Experimental results demonstrate significant improvements in structural similarity (0.92), peak signal-to-noise ratio (31.62 dB), and stripe suppression ratio (15.73 dB) compared with existing methods. On the RTX 4090 platform, the proposed system achieved an end-to-end delay of 94.36 milliseconds, a frame rate of 10.3 frames per second, and a throughput of 121.5 million voxels per second, effectively suppressing artifacts while preserving image details and enhancing real-time 3D reconstruction performance.
Keywords: stripe artifact suppression; 3D reconstruction; GPU parallel computing; adaptive frequency-domain filtering; convolutional neural network
Large-Eddy Simulation of Airflow over a Steep, Three-Dimensional Isolated Hill with Multi-GPUs Computing
10
Author: Takanori Uchida 《Open Journal of Fluid Dynamics》 2018, No. 4, pp. 416-434 (19 pages)
The present research attempted a large-eddy simulation (LES) of airflow over a steep, three-dimensional isolated hill using the latest multi-core multi-CPU systems. It was found that 1) turbulence simulations using approximately 50 million grid points are feasible, and 2) this system achieved a high computation speed, exceeding the speed of parallel computation attained by a single CPU on one of the latest supercomputers. Furthermore, LES was conducted using multi-GPU systems. These simulations revealed the following: 1) a multi-GPU environment using the NVIDIA Tesla M2090 or M2075 could simulate turbulence in a model with as many as approximately 50 million grid points; 2) the computation speed achieved by the multi-GPU environments exceeded that of parallel computation using four to six CPUs of one of the latest supercomputers.
Keywords: LES; isolated hill; multi-core multi-CPU computing; multi-GPU computing
A Survey of Confidential Computing on Heterogeneous CPU-GPU Systems
11
Authors: 郝萌, 李佳勇, 杨洪伟, 张伟哲 《信息网络安全》 (PKU Core journal) 2025, No. 11, pp. 1658-1672 (15 pages)
With the proliferation of data-intensive applications such as artificial intelligence, heterogeneous computing systems built around CPUs and GPUs have become critical infrastructure. In untrusted environments such as the cloud and the edge, however, sensitive data face severe security threats during processing, which traditional encryption methods cannot address. Confidential computing uses hardware trusted execution environments (TEEs) to provide an effective solution for protecting data in use, but existing techniques focus mainly on the CPU side. Seamlessly extending the TEE security boundary to the GPU, the core compute engine, has become a focus of both academia and industry. This paper presents a systematic survey of confidential-computing techniques in CPU-GPU heterogeneous systems. It first reviews the basic concepts of confidential computing and analyzes typical attack vectors against GPUs. It then classifies existing GPU confidential-computing schemes into hardware-assisted, hardware-software co-designed, and purely software-based approaches. Finally, it summarizes the key challenges in this area and outlines future research directions.
Keywords: confidential computing; trusted execution environment; heterogeneous computing; GPU
A full-process accelerated design method for topology optimization based on GPU parallel computing
12
Authors: 张长东, 吴奕凡, 周铉华, 李旭东, 肖息, 张自来 《航空制造技术》 (PKU Core journal) 2025, No. 12, pp. 34-41, 67 (9 pages)
Driven by the development needs of large aerospace equipment, efficient and accurate large-scale topology optimization has become a focus of the field. To address the enormous computational cost and low efficiency of existing large-scale topology optimization, this paper studies a full-process accelerated design method based on GPU parallel computing. Mesh generation, stiffness-matrix computation and assembly, and finite-element solving are parallelized, achieving efficient, accurate voxel meshing and fast finite-element solution. In addition, to meet the acceleration needs of the optimization loop, the sensitivity-filtering step is also parallelized. Taking an attitude-thruster model with three million voxel elements as the design object, the proposed method improves the speedup ratio by 1259% over the parallel topology-optimization computation of Abaqus 2022, while the two methods produce highly similar results, verifying the effectiveness and practicality of the proposed method.
Keywords: topology optimization; parallel computing; GPU acceleration; signed distance field; sparse matrix; mesh generation
A real-time high-precision frequency estimation algorithm for Starlink signals with CPU+GPU parallel acceleration
13
Authors: 代传金, 秦培杰, 李林, 臧博 《航空学报》 (PKU Core journal) 2025, No. 24, pp. 215-228 (14 pages)
The design and implementation of real-time, high-precision frequency estimation for Starlink downlink signals is a key technology for engineering applications of LEO-satellite opportunistic navigation. To address the poor robustness and insufficient real-time performance of traditional maximum-likelihood estimation, frequency-domain sliding-window estimation, and Kalman filtering when acquiring low-SNR Starlink signals, a multi-carrier joint frequency-offset estimation (MC-JFE) algorithm is proposed. It deeply exploits the multi-subcarrier structure of the signal and jointly optimizes the carrier frequency and frequency-spacing parameters to improve estimation accuracy and real-time performance. To overcome the intensive-computation bottleneck in engineering applications of MC-JFE, a CPU+GPU heterogeneous parallel acceleration architecture is constructed; by coordinating CPU logic control with the massive parallelism of the GPU, execution efficiency improves by more than an order of magnitude. To validate the design, real-time frequency-estimation experiments on signals from 5978 Starlink satellites were conducted using downlink beacon data generated on a hardware-in-the-loop simulation platform, together with comparative Doppler-estimation studies on signals measured in China's border regions. The results show that the proposed MC-JFE algorithm maintains the lowest estimation-error bound over the full -10 to 10 dB SNR range, improving estimation accuracy by more than 50% (at 0 dB); through a phase-information fusion mechanism, it maintains stable output when some subcarriers are interrupted; and with an optimal CUDA thread-block configuration, the CPU+GPU heterogeneous architecture reaches a peak speedup of 47x, 2.8 times that of the traditional CPU scheme, with accuracy positively correlated with the speedup. This provides reliable, real-time frequency-estimation support for LEO-satellite opportunistic navigation and has significant engineering value.
Keywords: Starlink downlink signal; high-precision frequency estimation; CPU+GPU heterogeneous; parallel acceleration; multithreaded processing
Numerical simulation of the N-S equations for supersonic flow fields based on CPU-GPU
14
Authors: 卢志伟, 张皓茹, 刘锡尧, 王亚东, 张卓凯, 张君安 《中国机械工程》 (PKU Core journal) 2025, No. 9, pp. 1942-1950 (9 pages)
To analyze the characteristics of supersonic flow fields in depth and improve numerical efficiency, an efficient acceleration algorithm is designed. The algorithm fully exploits a CPU-GPU heterogeneous parallel mode and uses asynchronous streams for data transfer and processing, significantly accelerating the numerical simulation of supersonic flow fields. The results show that GPU parallel computation is markedly faster than serial CPU computation, and the speedup ratio increases notably with the size of the flow-field grid. GPU parallel computing effectively raises the computation speed for supersonic flow fields, providing a powerful parallel method for the design, optimization, performance evaluation, and development of supersonic vehicles.
Keywords: supersonic flow field; CPU-GPU; heterogeneous computing; finite difference
Programming for scientific computing on peta-scale heterogeneous parallel systems (Cited by 1)
15
Authors: 杨灿群, 吴强, 唐滔, 王锋, 薛京灵 《Journal of Central South University》 SCIE EI CAS 2013, No. 5, pp. 1189-1203 (15 pages)
Peta-scale high-performance computing systems are increasingly built with heterogeneous CPU and GPU nodes to achieve higher power efficiency and computation throughput. While providing unprecedented capabilities to conduct computational experiments of historic significance, these systems are presently difficult to program. The users, who are domain experts rather than computer experts, prefer programming models closer to their domains (e.g., physics and biology) rather than MPI and OpenMP. This has led to the development of domain-specific programming, which provides domain-specific programming interfaces but abstracts away some performance-critical architecture details. Based on experience in designing large-scale computing systems, a hybrid programming framework for scientific computing on heterogeneous architectures is proposed in this work. Its design philosophy is to provide a collaborative mechanism for domain experts and computer experts so that both domain-specific knowledge and performance-critical architecture details can be adequately exploited. Two real-world scientific applications have been evaluated on TH-1A, a peta-scale CPU-GPU heterogeneous system that is currently the 5th fastest supercomputer in the world. The experimental results show that the proposed framework is well suited for developing large-scale scientific computing applications on peta-scale heterogeneous CPU/GPU systems.
Keywords: heterogeneous parallel system; programming framework; scientific computing; GPU computing; molecular dynamics
Managing Computing Infrastructure for IoT Data (Cited by 1)
16
Authors: Sapna Tyagi, Ashraf Darwish, Mohammad Yahiya Khan 《Advances in Internet of Things》 2014, No. 3, pp. 29-35 (7 pages)
Digital data have become a torrent engulfing every area of business, science, and engineering, gushing into every economy, every organization, and every user of digital technology. In the age of big data, deriving value and insight from big data using rich analytics becomes important for achieving competitiveness, success, and leadership in every field. The Internet of Things (IoT) is causing the number and types of products that emit data to grow at an unprecedented rate. Heterogeneity, scale, timeliness, complexity, and privacy problems with large data impede progress at all phases of the pipeline that can create value from data. With the push of such massive data, we are entering a new era of computing driven by novel and groundbreaking research innovation in elastic parallelism, partitioning, and scalability. Designing a scalable system for analysing, processing, and mining huge real-world datasets has become one of the challenging problems facing both systems researchers and data-management researchers. In this paper, we give an overview of computing infrastructure for IoT data processing, focusing on the architecture and major challenges of massive data. We briefly discuss emerging computing infrastructure and technologies that are promising for improving massive data management.
Keywords: big data; cloud computing; data analytics; elastic scalability; heterogeneous computing; GPU; PCM; massive data processing
EG-STC: An Efficient Secure Two-Party Computation Scheme Based on Embedded GPU for Artificial Intelligence Systems
17
Authors: Zhenjiang Dong, Xin Ge, Yuehua Huang, Jiankuo Dong, Jiang Xu 《Computers, Materials & Continua》 SCIE EI 2024, No. 6, pp. 4021-4044 (24 pages)
This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared with solutions running on server-grade CPUs. Our solution also exhibited reduced runtime, requiring only 60% to 70% of the runtime of previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
Keywords: secure two-party computation; embedded GPU acceleration; privacy-preserving machine learning; edge computing
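The secure two-party multiplication benchmarked in the entry above rests on additive secret sharing: each value is split into two random shares, and products are computed from shares via precomputed multiplication triples (Beaver triples). A minimal scalar sketch with a trusted dealer (illustrative only; the paper's protocol, ring size, and matrix batching may differ):

```python
import random

# Two-party additive secret sharing over Z_2^32. Neither party's share
# reveals anything about the secret; a dealer-supplied Beaver triple
# (a, b, c = a*b) lets the parties multiply shared values.

MOD = 2 ** 32
rng = random.Random(42)

def share(v):
    r = rng.randrange(MOD)
    return r, (v - r) % MOD            # (party-0 share, party-1 share)

def reconstruct(s0, s1):
    return (s0 + s1) % MOD

def beaver_mul(x_sh, y_sh):
    a, b = rng.randrange(MOD), rng.randrange(MOD)
    c = (a * b) % MOD                  # dealer's triple: c = a*b
    a_sh, b_sh, c_sh = share(a), share(b), share(c)
    # The parties open the masked values e = x - a and f = y - b;
    # e and f leak nothing because a and b are uniformly random.
    e = reconstruct((x_sh[0] - a_sh[0]) % MOD, (x_sh[1] - a_sh[1]) % MOD)
    f = reconstruct((y_sh[0] - b_sh[0]) % MOD, (y_sh[1] - b_sh[1]) % MOD)
    # Local share computation: z = c + e*b + f*a + e*f  (= x*y).
    z0 = (c_sh[0] + e * b_sh[0] + f * a_sh[0] + e * f) % MOD
    z1 = (c_sh[1] + e * b_sh[1] + f * a_sh[1]) % MOD
    return z0, z1

x_sh, y_sh = share(1234), share(5678)
z = reconstruct(*beaver_mul(x_sh, y_sh))   # equals 1234 * 5678
```

Matrix multiplication generalizes element-wise with matrix triples, and those share-local inner products are exactly the batched arithmetic that the embedded GPU accelerates.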
Research on a ROACH2-GPU cluster correlator: application of the Hashpipe software in the X-engine module
18
Authors: 张科, 王钊, 李吉夏, 吴锋泉, 田海俊, 牛晨辉, 张巨勇, 陈志平, 陈学雷 《贵州师范大学学报(自然科学版)》 (PKU Core journal) 2025, No. 2, pp. 114-121 (8 pages)
As more and more interferometric arrays are built and operated worldwide, they provide abundant observational data for probing the mysteries of the unknown universe, but they also bring enormous difficulties in the real-time processing of high-speed, dense data streams, posing severe challenges to traditional data-processing techniques. Based on the real-time correlation-computation needs of China's Tianlai Phase I project, and exploiting the strengths of GPUs in high-performance parallel computing, a ROACH2-GPU cluster correlator was designed and implemented for the Tianlai cylinder pathfinder array, with an in-depth study of the application of Hashpipe (High-availability shared pipeline engine) in the correlator's X-engine module. This paper first introduces the overall architecture of the ROACH2-GPU cluster correlator, then studies Hashpipe's core functions and data-processing methods, implements complete distributed heterogeneous processing, and optimizes Hashpipe's control and parameter interfaces. Program parameters can be modified according to actual observation needs, enabling correlator configurations with different numbers of channels and reducing the difficulty and cost of back-end hardware and software design. Finally, after correctness testing of the software, observations of strong radio astronomical sources were processed, and accurate interference fringes were obtained.
Keywords: ROACH2-GPU; Hashpipe; cluster correlator; X-engine module; parallel computing
A general stencil auto-tuning framework for GPU platforms
19
Authors: 孙庆骁, 杨海龙 《计算机研究与发展》 (PKU Core journal) 2025, No. 10, pp. 2622-2634 (13 pages)
Stencil computation is widely used in scientific applications, and many high-performance computing (HPC) platforms exploit the high compute capability of GPUs to accelerate it. In recent years, stencil computations have become more complex in their order, memory access, and computation patterns. To adapt stencil computation to GPU architectures, various optimization techniques based on streaming and tiling have been proposed. Because of the diversity of stencil patterns and GPU architectures, no single optimization suits all stencil instances, so stencil auto-tuning mechanisms have been proposed to search the parameters of a given combination of optimizations. However, existing mechanisms introduce large offline profiling costs and online prediction overheads, and they cannot generalize flexibly to arbitrary stencil patterns. To solve these problems, this paper proposes GeST, a general stencil auto-tuning framework that achieves extreme performance optimization of stencil computation on GPU platforms. Specifically, GeST builds a global search space via a zero-padding format and quantifies parameter correlations with the coefficient of variation to generate parameter groups; it then iteratively samples parameter values from these groups, adjusts the sampling ratio according to a reward policy, and avoids redundant executions through hash encoding. Experimental results show that, compared with other state-of-the-art auto-tuning work, GeST identifies better-performing parameter settings in a short time.
Keywords: stencil computation; GPU; auto-tuning; performance optimization; parameter search
A high-performance parallel acceleration method for the Chinese public-key cipher SM2 on domestic GPUs
20
Authors: 吴雯, 董建阔, 刘鹏博, 董振江, 胡昕, 张品昌, 肖甫 《通信学报》 (PKU Core journal) 2025, No. 5, pp. 15-28 (14 pages)
To meet the national strategic requirement of independently controllable information security and to ensure algorithm transparency and security, a high-performance parallel acceleration method for the Chinese public-key SM2 digital-signature algorithm on domestic GPUs is proposed. First, low-level functions tailored to field arithmetic are designed to optimize finite-field operations, with reduction using two rounds of carry resolution to resist timing attacks. Second, point addition and point doubling are implemented in Jacobian coordinates, and offline/online precomputation tables are designed that fully exploit the characteristics of registers and global memory to improve point-multiplication efficiency. Finally, experiments tailored to the Hygon deep computing unit (DCU) realize high-performance SM2 signing and verification, achieving throughputs of 6816 kops/s for signing and 1385 kops/s for verification. The study verifies the feasibility and effectiveness of the SM2 digital-signature algorithm on domestic GPUs and provides important technical support for independently controllable information security in China.
Keywords: Chinese commercial cryptography; digital signature; graphics processing unit; heterogeneous computing