Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12125404, T2495231, and 123B2049); the Basic Research Program of Jiangsu (Grant Nos. BK20233001, BK20241253, and BK20253009); the Jiangsu Funding Program for Excellent Postdoctoral Talent (Grant Nos. 2024ZB002 and 2024ZB075); the Postdoctoral Fellowship Program of CPSF (Grant No. GZC20240695); the AI & AI for Science program of Nanjing University; the Artificial Intelligence and Quantum Physics (AIQ) program of Nanjing University; and the Fundamental Research Funds for the Central Universities.
Abstract: Combining machine learning with ab initio methods has attracted much attention for its potential to resolve the accuracy–efficiency dilemma and enable calculations on large-scale systems. Recently, equivariant message passing neural networks (MPNNs) that explicitly incorporate symmetry constraints have shown promise for predicting interatomic potentials and density functional theory (DFT) Hamiltonians. However, the high-order tensors used to represent node and edge information are coupled through the Clebsch–Gordan tensor product, leading to steep increases in computational complexity and seriously limiting the performance of equivariant MPNNs. Here, we develop the high-order tensor machine-learning Hamiltonian (Hot-Ham), an E(3)-equivariant MPNN framework that combines two techniques, local coordinate transformation and the Gaunt tensor product, to model DFT Hamiltonians efficiently. These two innovations reduce the complexity of the tensor products from O(L^6) to O(L^3) or O(L^2 log^2 L) for maximum tensor order L, and enhance the performance of MPNNs. Benchmarks on several public datasets demonstrate state-of-the-art accuracy with relatively few parameters, and applications to multilayer twisted moiré systems, heterostructures, and allotropes showcase the method's generalization ability and high efficiency. Hot-Ham provides a new perspective on designing efficient equivariant neural networks and offers a promising approach for investigating the electronic properties of large-scale materials systems.
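For intuition on the complexity claim above, the following is a minimal sketch (purely illustrative, not the Hot-Ham implementation; the operation-count formulas are back-of-the-envelope assumptions) comparing how the cost of a full Clebsch–Gordan tensor product scales against an FFT-based Gaunt product as the maximum tensor order L grows:

```python
import math

def cg_ops(L):
    """Rough operation count for a full Clebsch-Gordan tensor product.

    Every pair of input orders (l1, l2) couples into every allowed
    output order l3 with |l1 - l2| <= l3 <= l1 + l2; each path touches
    all (2l+1) magnetic components, giving O(L^6) total work.
    """
    return sum((2 * l1 + 1) * (2 * l2 + 1) * (2 * l3 + 1)
               for l1 in range(L + 1)
               for l2 in range(L + 1)
               for l3 in range(abs(l1 - l2), min(l1 + l2, L) + 1))

def gaunt_fft_ops(L):
    """Rough operation count for a Gaunt tensor product via 2D FFT.

    Spherical harmonics up to order L map onto an O(L) x O(L) grid of
    Fourier coefficients, so the product becomes a 2D convolution
    computable in O(L^2 log^2 L) operations (constants ignored).
    """
    n = 2 * L + 1
    return n * n * max(1.0, math.log2(n)) ** 2

# The gap between the two costs widens quickly with L:
for L in (2, 4, 8):
    print(L, cg_ops(L), round(gaunt_fft_ops(L)))
```

The point of the sketch is only the asymptotic gap: because the Clebsch–Gordan count grows like L^6 while the FFT-based count grows like L^2 log^2 L, the ratio between them increases rapidly with L, which is why replacing the former with the latter matters for high-order equivariant networks.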