Funding: supported by the Royal Society (Grant No. RG\R1\251236) and the Fundamental Research Funds for the Central Universities of China (Grant No. JKF-2025055317102).
Abstract: We evaluated the performance of OpenFOAMGPT (GPT: generative pre-trained transformer), which includes rating multiple large language models. Several of the evaluated models efficiently manage different computational fluid dynamics (CFD) tasks, such as adjusting boundary conditions, turbulence models, and solver configurations, although their token cost and stability vary. Locally deployed smaller models, such as QwQ-32B (Q4_K_M quantized), struggled to generate valid solver files for complex cases. Zero-shot prompts commonly fail in simulations with intricate settings, even for large models. Challenges with boundary conditions and solver keywords underscore the need for expert supervision, indicating that further development is required to fully automate specialized CFD simulations.
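For context, the boundary-condition files the abstract refers to are OpenFOAM dictionaries such as the velocity file `0/U` below. This is a minimal illustrative sketch, not taken from the paper's cases; the patch names (`inlet`, `outlet`, `walls`) and the inflow value are assumptions, though the keyword syntax (`fixedValue`, `zeroGradient`, `noSlip`) is standard OpenFOAM.

```cpp
// 0/U — velocity field boundary conditions (illustrative patch names/values)

dimensions      [0 1 -1 0 0 0 0];   // m/s

internalField   uniform (1 0 0);    // initial velocity everywhere

boundaryField
{
    inlet
    {
        type            fixedValue;      // prescribed inflow velocity
        value           uniform (1 0 0);
    }
    outlet
    {
        type            zeroGradient;    // fully developed outflow
    }
    walls
    {
        type            noSlip;          // zero velocity at solid walls
    }
}
```

A single wrong keyword here (e.g., a boundary type misspelled by a model) makes the solver abort at startup, which is why the abstract stresses expert supervision of generated files.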