Funding: Projects (EPSRC EP/Y005732/1, EP/Y018680/1, EP/T003189/1, EP/V040235/1, EP/Y024257/1 and EP/T000414/1) supported by UK Research and Innovation (UKRI), UK; Project (APP44894/UKRI 1281) supported by UKRI councils (NERC, AHRC, ESRC, MRC and DEFRA), UK.
Abstract: This paper presents the application of a novel AI-based approach, Neural Physics, to produce high-fidelity simulations of train aerodynamics. Neural Physics is built upon convolutional neural networks (CNNs), where the weights are explicitly determined by classical numerical discretisation schemes rather than by training. By leveraging the power of AI technology, this recent approach results in code that can run easily on GPUs and AI processors, achieving high computational speed without sacrificing accuracy. The approach uses an implicit large eddy simulation method based on a non-linear Petrov-Galerkin method to model the unresolved turbulence. Furthermore, for higher-order finite elements, the convolutional finite element method (ConvFEM) is used, which greatly simplifies the implementation of higher-order elements within the NN4PDEs approach. We demonstrate the capability of Neural Physics by simulating a Class 66 freight locomotive and a partially loaded freight train operating in an open-field environment with and without crosswind. This is the first time that ConvFEM has been applied to high-speed fluid flow problems in complex geometries. The results are validated against existing numerical results and experimental measurements, and show good agreement in terms of pressure and velocity distributions around the train body.
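The core idea above, that convolutional weights are fixed by a numerical discretisation scheme instead of being learned, can be illustrated with a minimal sketch. The example below is not the paper's Petrov-Galerkin or ConvFEM formulation; it is a simple assumed analogue in which the 5-point finite-difference Laplacian stencil plays the role of a convolution kernel (grid spacing h = 1 and time step dt = 0.1 are illustrative choices).

```python
import numpy as np
from scipy.signal import convolve2d

# 5-point finite-difference Laplacian stencil: in this view, the array
# below is a convolutional layer's weight matrix, fixed by the
# discretisation scheme rather than found by training.
laplacian_kernel = np.array([[0.0,  1.0, 0.0],
                             [1.0, -4.0, 1.0],
                             [0.0,  1.0, 0.0]])

def diffusion_step(u, dt=0.1):
    """One explicit time step of the heat equation u_t = u_xx + u_yy,
    expressed as a 2D convolution ('same' padding, zero boundaries)."""
    return u + dt * convolve2d(u, laplacian_kernel, mode="same")

# Usage: diffuse a field with a single hot spot in the centre.
u = np.zeros((32, 32))
u[16, 16] = 1.0
for _ in range(10):
    u = diffusion_step(u)
```

Because the whole update is a convolution, it maps directly onto GPU and AI-accelerator convolution primitives, which is the source of the speed-up the abstract describes.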
Abstract: Computing in Memory (CIM) is an emerging architecture that has arisen to mitigate the "memory wall" and the "power wall". Because CPU and memory speeds have developed unevenly, architectures that separate the central processor from memory, such as the von Neumann architecture, are gradually losing their advantages. CIM combines computation with storage to reduce data movement, greatly improving computational efficiency. MRAM, the most promising next-generation non-volatile memory device, is regarded as a strong candidate for building efficient CIM architectures. MRAM-based CIM can be divided, according to how the computation is performed, into MRAM analog CIM and MRAM digital CIM. Digital CIM can be further classified, by how the digital logic is produced, into MRAM write-based CIM, MRAM read-based CIM, and MRAM near-memory computing. MRAM analog CIM amortises energy consumption through high parallelism; per unit area, its throughput and energy efficiency offer advantages that digital CIM cannot match, but its susceptibility to PVT (process, voltage, and temperature) variation limits its practical application. MRAM digital CIM can be implemented in various ways. Write-based CIM almost eliminates data movement outside the memory; although the switching energy and latency of MRAM under current process technology are too large, which has kept this approach at the simulation stage, it remains one of the most effective means of mitigating the "memory wall". Read-based CIM depends heavily on the functional design of the sense amplifier; it has seen progress in related areas but faces significant constraints. Near-memory computing, given the large gap in computing speed and energy efficiency between current MRAM non-volatile devices and CMOS circuits, is a good solution that combines the strengths of both and offers substantial benefits in practical applications.
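The analog-CIM principle the abstract describes, computing a multiply-accumulate directly in the array, can be sketched numerically. In the toy model below, binary weights are stored as two assumed MRAM conductance states (parallel/low-resistance vs. antiparallel/high-resistance), inputs are applied as wordline voltages, and each bitline current is the Kirchhoff sum of the per-cell currents. The conductance values are illustrative, not measured device data.

```python
import numpy as np

# Assumed on/off conductances (siemens) for the two MRAM states:
# parallel (low resistance) and antiparallel (high resistance).
G_P, G_AP = 1.0e-4, 0.5e-4

def analog_mac(voltages, weight_bits):
    """Per-bitline currents of a crossbar storing binary weights.

    I_j = sum_i V_i * G_ij  -- the multiply-accumulate is performed
    by Ohm's law (multiply) and Kirchhoff current summation (add).
    """
    conductances = np.where(weight_bits == 1, G_P, G_AP)
    return voltages @ conductances

# Usage: a 3x2 crossbar (3 wordlines, 2 bitlines).
weights = np.array([[1, 0],
                    [0, 1],
                    [1, 1]])
v_in = np.array([0.1, 0.2, 0.3])   # wordline voltages (volts)
currents = analog_mac(v_in, weights)
```

All bitlines are read in one step, which is the high parallelism the abstract credits for analog CIM's throughput and energy advantages; in a real array these currents would then be digitised by PVT-sensitive readout circuitry, which is the limitation noted above.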