Journal Articles: 5 articles found
Discovering macrocycles for humid carbon capture via high-throughput computational screening
1
Authors: Yutao Guan, Aiting Kai, Ming Liu. Science China Chemistry, 2026, No. 1, pp. 1-2 (2 pages)
Since the mid-to-late 20th century, the scientific community has increasingly recognized that the rapid rise in atmospheric greenhouse gases, particularly CO₂ from human activities, is the primary driver of global warming. This escalation has led to pressing climate challenges, including sea-level rise and more frequent extreme weather events [1,2]. Among the limited strategies available to mitigate CO₂ emissions, carbon capture and storage has emerged as a key approach. To this end, various adsorbents, such as metal-organic frameworks (MOFs), zeolites, and carbon materials, have been developed for CO₂ capture [3-6].
Keywords: global warming; high-throughput computational screening; climate challenges; greenhouse gases; carbon capture and storage; macrocycles; CO₂; extreme weather events
High throughput computational screening and interpretable machine learning for iodine capture of metal-organic frameworks (Cited by: 1)
2
Authors: Haoyi Tan, Yukun Teng, Guangcun Shan. npj Computational Materials, 2025, No. 1, pp. 1263-1271 (9 pages)
The removal of leaked radioactive iodine isotopes from humid air environments holds significant importance in nuclear waste management and nuclear accident mitigation. In this study, high-throughput computational screening and machine learning were combined to reveal the iodine capture performance of 1816 metal-organic framework (MOF) materials under humid air conditions. Initially, the relationship between the structural characteristics of MOF materials (including density, surface area, and pore features) and their adsorption properties was explored, with the aim of identifying the optimal structural parameters for iodine capture. Subsequently, two machine learning regression algorithms, Random Forest and CatBoost, were employed to predict the iodine adsorption capabilities of MOF materials. In addition to 6 structural features, 25 molecular features (encompassing the types of metal and ligand atoms as well as bonding modes) and 8 chemical features (including heat of adsorption and Henry's coefficient) were incorporated to enhance the prediction accuracy of the machine learning algorithms. Feature importance was assessed to determine the relative influence of various features on iodine adsorption performance; the Henry's coefficient and the heat of adsorption to iodine were found to be the two most crucial chemical factors. Furthermore, four types of molecular fingerprints were introduced to provide comprehensive and detailed structural information on MOF materials. The 20 most significant Molecular ACCess System (MACCS) bits were picked out, revealing that the presence of six-membered ring structures and nitrogen atoms in the MOF framework were the key structural factors that enhanced iodine adsorption, followed by the presence of oxygen atoms. This work combined high-throughput computation, machine learning, and molecular fingerprints to comprehensively and systematically elucidate the multifaceted factors governing the iodine adsorption performance of MOFs in humid environments, establishing a robust guideline framework for accelerating the screening and targeted design of high-performance MOF materials.
Keywords: leaked radioactive iodine isotopes; iodine capture; high-throughput computational screening; nuclear waste management; interpretable machine learning; humid air conditions; metal-organic frameworks; machine learning
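The regression-plus-feature-importance workflow this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the data is synthetic, the feature names are stand-ins for the paper's structural and chemical descriptors, and only the Random Forest branch (via scikit-learn) is shown.

```python
# Sketch: predict iodine uptake from MOF descriptors, then rank
# features by importance, mirroring the workflow in the abstract.
# All data here is synthetic; real inputs would be the 1816 screened
# MOFs with their computed structural and chemical properties.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical descriptor names standing in for the paper's features
# (density, surface area, pore features, Henry's coefficient, ...).
features = ["density", "surface_area", "pore_diameter",
            "henry_coeff", "heat_of_adsorption", "void_fraction"]
X = rng.normal(size=(n, len(features)))
# Synthetic target dominated by the two chemical factors, mimicking
# the reported importance of Henry's coefficient and heat of adsorption.
y = 3.0 * X[:, 3] + 2.0 * X[:, 4] + 0.3 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)

# Rank descriptors by their learned importance.
ranked = sorted(zip(features, model.feature_importances_),
                key=lambda t: -t[1])
for name, imp in ranked:
    print(f"{name:20s} {imp:.3f}")
```

With the synthetic target above, the two chemical descriptors come out on top, which is the kind of signal the paper reports for its real dataset.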
High-throughput computational analysis of kinetic barriers to ring-closing depolymerization for aliphatic polycarbonates
3
Authors: Brandi Ransom, Riccardo Bosio, Dmitry Zubarev, James L. Hedrick, Nathaniel H. Park. npj Computational Materials, 2025, No. 1, pp. 3180-3188 (9 pages)
The chemical reversion of polymers via ring-closing depolymerization (RCD) to their monomeric constituents is a highly promising avenue for end-of-life recycling and reuse. However, most reported RCD systems revolve around bespoke monomer designs that facilitate facile depolymerization, and relatively few investigations exist into the influence of functional groups on the ability of a particular monomer to cleanly undergo depolymerization. Here, we perform computational investigations into the energy barriers for RCD of six-membered aliphatic carbonates in different solvents. The results corroborate trends observed in prior experimental studies, validating the utility of computational investigations toward understanding RCD. Experimental evaluation of the thermal depolymerization of two of the studied polycarbonates confirmed their ability to undergo RCD. Overall, this work highlights the advantage of high-throughput energy-barrier computations in providing meaningful insight into broad reactivity trends that would be highly laborious to access experimentally.
Keywords: chemical reversion; computational investigations; high-throughput computational analysis; kinetic barriers; monomeric constituents; aliphatic polycarbonates; ring-closing depolymerization; bespoke monomer designs
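The screening described above ranks monomers by computed RCD energy barriers. A barrier maps onto a depolymerization rate through the standard Eyring relation, k = (k_B·T/h)·exp(-ΔG‡/RT), which is why barrier heights are a useful proxy for how cleanly a monomer reverts. A minimal sketch, with an illustrative barrier value (not a number reported in the paper):

```python
# Convert a computed free-energy barrier into a rate constant via the
# Eyring equation: k = (k_B * T / h) * exp(-dG / (R * T)).
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dg_kjmol: float, temp_k: float) -> float:
    """Rate constant (1/s) from a free-energy barrier in kJ/mol."""
    return (KB * temp_k / H) * math.exp(-dg_kjmol * 1e3 / (R * temp_k))

# Illustrative: a 100 kJ/mol barrier evaluated at room temperature and
# at a typical thermal-depolymerization temperature; raising T sharply
# accelerates ring closure.
for T in (298.15, 423.15):
    print(f"T = {T:.0f} K  k = {eyring_rate(100.0, T):.3e} 1/s")
```

Comparing such rates across candidate monomers and solvents is what makes the high-throughput barrier dataset actionable.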
High-throughput computational framework for high-order anharmonic thermal transport in cubic and tetragonal crystals
4
Authors: Zhi Li, Huiju Lee, Chris Wolverton, Yi Xia. npj Computational Materials, 2025, No. 1, pp. 4656-4671 (16 pages)
Accurate first-principles prediction of lattice thermal conductivity (κ_L) remains challenging for identifying materials with extreme thermal behavior. While the harmonic approximation with three-phonon scattering (HA+3ph) is now routine, reliable κ_L prediction often requires higher-order anharmonic effects, including self-consistent phonon renormalization, three- and four-phonon scattering, and off-diagonal heat flux (SCPH+3,4ph+OD). We present a state-of-the-art high-throughput workflow that unifies these effects and apply it to 773 cubic and tetragonal crystals spanning diverse chemistries and structures. From 562 dynamically stable compounds, we assess the hierarchical impacts of higher-order anharmonicity. For around 60% of materials, HA+3ph predictions closely match those from SCPH+3,4ph+OD. SCPH generally increases κ_L, by over 8 times in extreme cases, whereas four-phonon scattering universally suppresses κ_L, sometimes to 15% of the HA+3ph value. Off-diagonal contributions are negligible in high-κ_L systems but can rival diagonal terms in highly anharmonic low-κ_L compounds. We highlight four case studies, Rb_2TlAlH_6, Cu_3VSe_4, CuBr, and KTlCl_4, that exhibit distinct extreme behaviors. This work delivers not only a robust workflow for a high-fidelity κ_L dataset but also a quantitative framework for determining when higher-order effects are essential. The hierarchy of κ_L results, from the HA+3ph to the SCPH+3,4ph+OD level, offers a scalable, interpretable route to discovering next-generation extreme thermal materials.
Keywords: thermal transport; high-throughput computational framework; extreme thermal behavior; high-order anharmonicity; harmonic approximation; three-phonon scattering; HA+3ph; lattice thermal conductivity; first-principles prediction
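The abstract's central question, when do higher-order effects matter, reduces in a screening workflow to comparing κ_L at the two levels of theory. A minimal sketch of that triage step, with hypothetical κ_L values (not numbers from the paper's dataset):

```python
# Flag compounds where the cheap HA+3ph prediction disagrees with the
# full SCPH+3,4ph+OD result by more than a chosen tolerance, so only
# those need the expensive higher-order treatment.
def needs_higher_order(kappa_ha3ph: float, kappa_full: float,
                       tol: float = 0.25) -> bool:
    """True when the two predictions differ by more than tol (relative)."""
    ratio = kappa_full / kappa_ha3ph
    return abs(ratio - 1.0) > tol

materials = {
    # name: (HA+3ph, SCPH+3,4ph+OD) in W/(m*K); illustrative values
    "A": (12.0, 11.5),  # levels agree: HA+3ph suffices (~60% of cases)
    "B": (2.0, 16.5),   # SCPH renormalization raises kappa_L ~8x
    "C": (40.0, 6.0),   # four-phonon scattering strongly suppresses kappa_L
}
for name, (k_ha, k_full) in materials.items():
    print(name, needs_higher_order(k_ha, k_full))
```

The tolerance is a screening knob: a loose value keeps most compounds at the cheap HA+3ph level, mirroring the paper's finding that the two levels agree for around 60% of materials.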
Optimized CUDA Implementation to Improve the Performance of Bundle Adjustment Algorithm on GPUs
5
Authors: Pranay R. Kommera, Suresh S. Muknahallipatna, John E. McInroy. Journal of Software Engineering and Applications, 2024, No. 4, pp. 172-201 (30 pages)
The 3D reconstruction pipeline uses the Bundle Adjustment algorithm to refine the camera and point parameters. Bundle Adjustment is a compute-intensive algorithm, and many researchers have improved its performance by implementing it on GPUs. In the previous research work, "Improving Accuracy and Computational Burden of Bundle Adjustment Algorithm using GPUs," the authors first demonstrated an algorithmic performance improvement by reducing the mean square error using an additional radial distortion parameter and explicitly computed analytical derivatives, and then reduced the computational burden of the Bundle Adjustment algorithm using GPUs. With the naïve CUDA implementation, a speedup of 10× was achieved for the largest dataset of 13,678 cameras, 4,455,747 points, and 28,975,571 projections. In this paper, we present the optimization of the Bundle Adjustment CUDA code on GPUs to achieve higher speedup. We propose a new data memory layout for the parameters in the Bundle Adjustment algorithm, resulting in contiguous memory access. We demonstrate that it improves memory throughput on the GPUs, thereby improving overall performance. We also demonstrate an increase in the computational throughput of the algorithm by optimizing the CUDA kernels to utilize the GPU resources effectively. A comparative performance study of explicitly computing an algorithm parameter versus using the Jacobians instead is presented. In the previous work, the Bundle Adjustment algorithm failed to converge for certain datasets because several block matrices of the cameras in the augmented normal equation were rank-deficient. In this work, we identify the cameras that cause rank-deficient matrices and preprocess the datasets to ensure convergence of the BA algorithm. Our optimized CUDA implementation achieves convergence of the Bundle Adjustment algorithm in around 22 seconds for the largest dataset, compared to 654 seconds for the sequential implementation, resulting in a speedup of 30×. The optimized CUDA implementation presented in this paper achieves a 3× speedup for the largest dataset compared to the previous naïve CUDA implementation.
Keywords: scene reconstruction; Bundle Adjustment; Levenberg-Marquardt; non-linear least squares; memory throughput; computational throughput; contiguous memory access; CUDA optimization