Funding: Supported by the Guangdong Basic and Applied Basic Research Foundation (2020A1515110843), the Young S&T Talent Training Program of Guangdong Provincial Association for S&T (SKXRC202211), the National Natural Science Foundation of China (22402163, 22109003), the Major Science and Technology Infrastructure Project of the Material Genome Big-science Facilities Platform supported by the Municipal Development and Reform Commission of Shenzhen, the Soft Science Research Project of Guangdong Province (No. 2017B030301013), the Natural Science Foundation of Xiamen, China (3502Z202472001), the High-level Scientific Research Foundation of Hebei Province, and the Fundamental Research Funds for the Central Universities (20720240054).
Abstract: The rational design of catalyst structures tailored to target performance is an ambitious and profoundly impactful goal. Key challenges include achieving refined representations of the three-dimensional structure of active sites and imbuing models with robust physical interpretability. Herein, we developed a topology-based variational autoencoder framework (PGH-VAEs) to enable the interpretable inverse design of catalytic active sites. Using high-entropy alloys as a case study, we demonstrate that persistent GLMY homology, an advanced topological algebraic analysis tool, enables the quantification of three-dimensional structural sensitivity and establishes correlations with adsorption properties. The multi-channel PGH-VAEs illustrate how coordination and ligand effects shape the latent space and influence the adsorption energies. Building on the inverse design results from PGH-VAEs, strategies are proposed to optimize the composition and facet structures so as to maximize the proportion of optimal active sites. This interpretable inverse design framework can be extended to diverse systems, paving the way for AI-driven catalyst design.
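To make the modeling idea concrete, below is a minimal PyTorch sketch of a multi-channel variational autoencoder with an auxiliary adsorption-energy head, loosely mirroring how separate descriptor channels (e.g., coordination and ligand effects) could be coupled to a shared latent space. All layer sizes, the two input channels, and the MultiChannelVAE name are hypothetical illustrations; the paper's actual persistent-GLMY-homology featurization is not reproduced here.

```python
# Minimal sketch of a multi-channel VAE with a property-prediction head.
# Hypothetical dimensions; not the authors' actual PGH-VAEs implementation.
import torch
import torch.nn as nn

class MultiChannelVAE(nn.Module):
    def __init__(self, in_dim=64, latent_dim=8):
        super().__init__()
        # One encoder per descriptor channel (coordination vs. ligand).
        self.enc_coord = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.enc_ligand = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        # Decoder reconstructs the concatenated descriptors.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 2 * in_dim))
        # Auxiliary head ties the latent space to adsorption energy.
        self.energy_head = nn.Linear(latent_dim, 1)

    def forward(self, x_coord, x_ligand):
        h = torch.cat([self.enc_coord(x_coord),
                       self.enc_ligand(x_ligand)], dim=-1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick keeps sampling differentiable.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), self.energy_head(z), mu, logvar

def vae_loss(recon, target, e_pred, e_true, mu, logvar, beta=1.0):
    recon_loss = nn.functional.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    energy_loss = nn.functional.mse_loss(e_pred, e_true)
    return recon_loss + beta * kl + energy_loss
```

In an inverse-design loop of this kind, one would sample or optimize latent vectors toward a target adsorption energy and decode them back into structural descriptors.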
Funding: Supported by the ONR MURI project (No. N00014-16-1-2007), the DARPA XAI Award (No. N66001-17-2-4029), and NSF IIS (No. 1423305).
Abstract: This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles' heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
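As one concrete instance of the visualization methods this review surveys, the sketch below performs activation maximization: synthesizing an input that strongly excites a chosen filter of a pre-trained CNN, which reveals the pattern that filter has learned to detect. The choice of VGG-16 and the layer and filter indices are arbitrary assumptions for illustration, and torchvision >= 0.13 is assumed for the weights API.

```python
# Minimal sketch of activation maximization on a pre-trained CNN.
import torch
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the network; only the input is optimized

layer_idx, filter_idx = 10, 5  # arbitrary conv layer and filter for illustration
img = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    x = img
    for i, layer in enumerate(model):
        x = layer(x)
        if i == layer_idx:
            break
    # Gradient ascent on the mean activation of the chosen filter.
    loss = -x[0, filter_idx].mean()
    loss.backward()
    opt.step()
```

Inspecting the optimized image (after normalizing it back to the displayable range) gives a qualitative picture of the feature encoded by that filter, which is the starting point for the diagnosis and disentanglement methods discussed above.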