Abstract: To predict the thermal-hydraulic (T/H) parameters of the reactor core for liquid-metal-cooled fast reactors (LMFRs), especially under flow blockage accidents, we developed a subchannel code called KMC-FB. This code uses a time-dependent, four-equation, single-phase flow model together with a 3D heat conduction model for the fuel rods, which is solved by numerical methods based on the finite difference method with a staggered mesh. Owing to the local effect of the blockage on the flow field, low axial flow, increased forced crossflow, and backflow occur. To simulate this problem more accurately, we implemented a robust and novel solution method. We verified the code with a low-flow (~0.01 m/s), large-scale blockage case. For preliminary validation, we compared our results with the experimental data of the NACIE-UP BFPS blockage test and the KIT19ROD blockage test. The results revealed that KMC-FB has sufficient solution accuracy and can be used in future flow blockage analyses for LMFRs.
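The abstract's fuel-rod conduction model can be illustrated with a minimal sketch. This is not KMC-FB's actual scheme (which is 3D, four-equation, and staggered-mesh); it is only a 1D explicit finite-difference solver for transient heat conduction, with all names and parameter values chosen for illustration.

```python
import numpy as np

# Illustrative sketch: 1-D transient heat conduction dT/dt = alpha * d2T/dx2,
# discretized with a second-order central difference in space and an explicit
# Euler step in time. KMC-FB's real model is 3-D on a staggered mesh; every
# name and number below is hypothetical.

def step_conduction(T, alpha, dx, dt):
    """Advance the temperature profile T by one explicit time step."""
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T_new  # boundary nodes are held fixed (Dirichlet conditions)

# Example: a rod at 300 K with a hot spot at the centre relaxing by diffusion.
T = np.full(21, 300.0)
T[10] = 600.0
dx, dt, alpha = 0.01, 0.001, 1e-5  # explicit stability: alpha*dt/dx**2 <= 0.5
for _ in range(1000):
    T = step_conduction(T, alpha, dx, dt)
```

The explicit scheme keeps the sketch short; a production subchannel code would typically use an implicit or semi-implicit discretization for larger stable time steps.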
Funding: Supported by the National Science Fund for Distinguished Young Scholars (No. 11825505) and the National Key R&D Program of China (No. 2019YFA0405400).
Abstract: A simulation code, GOAT, is developed to simulate single-bunch intensity-dependent effects and their interplay in the proton ring of the Electron-Ion Collider in China (EicC) project. GOAT is a scalable and portable macroparticle tracking code written in Python and implemented with object-oriented programming. It allows for transverse and longitudinal tracking, including impedance, the space charge effect, the electron cloud effect, and the beam-beam interaction. In this paper, physical models and numerical approaches for the four types of high-intensity effects, together with benchmark results obtained with other simulation codes or from theory, are presented and discussed. In addition, a numerical application of the cross-talk simulation between the beam-beam interaction and transverse impedance is shown, and a dipole instability is observed below the respective instability thresholds. Different mitigation measures implemented in the code are used to suppress the instability. Its flexibility, completeness, and advanced features demonstrate that GOAT is a powerful tool for beam dynamics studies in the EicC project and other high-intensity accelerators.
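The kind of turn-by-turn macroparticle tracking GOAT performs can be sketched in a few lines. The toy one-turn map below is a pure linear rotation by the betatron phase advance in normalized phase space; GOAT's actual tracking adds impedance, space-charge, electron-cloud, and beam-beam kicks on top of such a map. The tune and particle counts are illustrative, not EicC parameters.

```python
import numpy as np

# Hedged sketch of transverse macroparticle tracking: apply a linear one-turn
# rotation to an ensemble of (x, x') coordinates. All parameters are
# hypothetical; real codes insert collective-effect kicks each turn.

rng = np.random.default_rng(0)
n_macro = 10_000
tune = 0.31                        # fractional betatron tune (illustrative)
phi = 2.0 * np.pi * tune
M = np.array([[np.cos(phi),  np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])   # normalized one-turn map

coords = rng.normal(0.0, 1.0, size=(2, n_macro))  # rows: x, x'
emit0 = np.mean(coords[0]**2 + coords[1]**2)      # RMS emittance proxy

for _ in range(100):               # track 100 turns
    coords = M @ coords
    # collective kicks (impedance, space charge, ...) would be applied here

emit1 = np.mean(coords[0]**2 + coords[1]**2)
# A purely linear rotation is symplectic, so the RMS emittance is preserved.
```

Vectorizing over all macroparticles with a single matrix product, rather than looping per particle, is the pattern that makes Python-based trackers like GOAT practical.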
Funding: Supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) (No. RS-2022-00143178) and by the Ministry of Education (MOE) (Nos. 2022R1A6A3A13053896 and 2022R1F1A1074616), Republic of Korea.
Abstract: Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging general-purpose computing on graphics processing units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macroparticles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
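The "process all bunches and macroparticles simultaneously" idea behind a fully GPU-resident tracker can be emulated on the CPU with batched NumPy arrays. The sketch below is only a conceptual analogue: on a GPU, the same shape-(n_bunch, n_macro) arrays would live in device memory for the entire run, so no per-turn CPU-GPU transfers occur. All names and parameters are hypothetical and unrelated to MBTRACK2-CUDA's actual implementation.

```python
import numpy as np

# Conceptual sketch: one batched rotation updates every bunch and every
# macroparticle at once. A GPU-resident code applies the same pattern with
# device arrays, eliminating per-turn host-device traffic.

n_bunch, n_macro = 8, 5000
rng = np.random.default_rng(1)
x  = rng.normal(0.0, 1.0, size=(n_bunch, n_macro))   # positions, all bunches
xp = rng.normal(0.0, 1.0, size=(n_bunch, n_macro))   # angles, all bunches

phi = 2.0 * np.pi * 0.27           # illustrative phase advance per turn
c, s = np.cos(phi), np.sin(phi)
emit0 = (x**2 + xp**2).mean()      # ensemble emittance proxy before tracking

for _ in range(50):
    # one vectorized rotation advances ALL bunches simultaneously
    x, xp = c * x + s * xp, -s * x + c * xp
    # per-bunch collective kicks (wakes, beam-beam, ...) would be added here

emit1 = (x**2 + xp**2).mean()
centroids = x.mean(axis=1)         # bunch-by-bunch dipole moments
```

Swapping NumPy for a GPU array library with the same interface is a common porting path, which is why keeping the whole state in one batched array pays off.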