For neural network potentials (NNPs) to gain widespread use, researchers must be able to trust model outputs. However, the black-box nature of neural networks and their inherent stochasticity are often deterrents, especially for foundation models trained over broad swaths of chemical space. Uncertainty information provided at the time of prediction can help reduce aversion to NNPs. In this work, we detail two uncertainty quantification (UQ) methods. Readout ensembling, by fine-tuning the readout layers of an ensemble of foundation models, provides information about model uncertainty, while quantile regression, by replacing point predictions with distributional predictions, provides information about uncertainty within the underlying training data. We demonstrate our approach with the MACE-MP-0 model, applying UQ to the foundation model and a series of fine-tuned models. The uncertainties produced by the readout-ensemble and quantile methods are demonstrated to be distinct measures by which the quality of the NNP output can be judged.
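The two UQ ideas named in the abstract can be illustrated in miniature. The sketch below is not the authors' implementation (which operates on MACE-MP-0 readout layers); it is a generic NumPy illustration of the underlying mechanics: the pinball loss that trains a quantile-regression head toward a chosen quantile, and the spread across an ensemble's predictions as a model-uncertainty signal. All function names here are illustrative.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: minimizing this over a dataset drives
    y_pred toward the q-th quantile of the target distribution,
    yielding distributional rather than point predictions."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))

def ensemble_uncertainty(member_predictions):
    """Model (epistemic) uncertainty from an ensemble: the mean across
    members is the prediction, the standard deviation is the
    uncertainty estimate. Shape: (n_members, n_samples)."""
    preds = np.asarray(member_predictions, dtype=float)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy example: three ensemble members predicting energies for two structures.
mean, std = ensemble_uncertainty([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]])
```

In the paper's setting, each ensemble member would share a frozen foundation-model backbone and differ only in its fine-tuned readout layers, so the spread isolates readout-level disagreement rather than full-model retraining variance.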
Funding: This work was supported by the "Transferring exascale computational chemistry to cloud computing environment and emerging hardware technologies (TEC4)" project, funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences, and Biosciences (under FWP 82037), and by the U.S. DOE, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences & Biosciences (under FWP 47319). Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for the U.S. DOE by Battelle Memorial Institute under Contract No. DE-AC05-76RL01830.