Polymer–solvent systems exhibit complex solvation behaviours encompassing a diverse range of phenomena, including swelling, gelation, and dispersion. Accurate interpretation is often hindered by subjectivity, particularly in manual rapid-screening assessments. While computer vision models hold significant promise to replace the reliance on human evaluation for inference, their adoption is limited by the lack of domain-specific datasets tailored, in our case, to polymer–solvent systems. To bridge this gap, we conducted extensive screenings of polymers with diverse physical and chemical properties across various solvents, capturing solvation characteristics through images, videos, and image–text captions. This dataset informed the development of a multi-model vision assistant integrating computer vision and vision-language approaches to autonomously detect, infer, and contextualise polymer–solvent interactions. The system combines a 2D-CNN module for static solvation-state classification, a hybrid 2D/3D-CNN module to capture temporal dynamics, and a BLIP-2-based contextualisation module that generates descriptive captions for solvation behaviours, including vial orientation, solvent discolouration, and polymer interaction states. Computationally efficient, this vision assistant provides an accurate, objective, and scalable solution for interpreting solvation behaviours, fit for autonomous platforms and high-throughput workflows in materials discovery and analysis.
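The three-module architecture described above (static 2D-CNN classification, temporal 2D/3D-CNN inference, BLIP-2-style captioning) can be sketched as a simple orchestration layer. This is a minimal illustrative sketch only: the module interfaces, label names, and report structure are assumptions for exposition, not the authors' actual API or label set.

```python
# Hypothetical sketch of the three-module vision-assistant pipeline.
# The callables stand in for the trained models; all names are illustrative.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class SolvationReport:
    """Combined output of the three modules (fields are assumed, not the authors')."""
    static_state: str    # from the 2D-CNN static solvation-state classifier
    temporal_state: str  # from the hybrid 2D/3D-CNN over a frame sequence
    caption: str         # from the BLIP-2-style contextualisation module


def analyse_vial(
    frames: Sequence[object],
    classify_frame: Callable[[object], str],
    classify_clip: Callable[[Sequence[object]], str],
    caption_frame: Callable[[object], str],
) -> SolvationReport:
    """Run the static, temporal, and contextualisation modules in sequence."""
    static_state = classify_frame(frames[-1])  # 2D-CNN on the most recent frame
    temporal_state = classify_clip(frames)     # 3D-CNN over the whole clip
    caption = caption_frame(frames[-1])        # vision-language description
    return SolvationReport(static_state, temporal_state, caption)


# Usage with stub models standing in for the trained networks:
report = analyse_vial(
    frames=["frame0", "frame1", "frame2"],
    classify_frame=lambda f: "swollen",
    classify_clip=lambda fs: "actively_dissolving",
    caption_frame=lambda f: "upright vial, pale yellow solvent, swollen polymer",
)
```

Separating per-frame classification from clip-level inference mirrors the abstract's split between static states and temporal dynamics, and lets each model be retrained or swapped independently.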
Funding: The authors thank Green Rose Chemistry for their collaboration in this research and for providing both financial and technical support through an Innovate UK grant (10097443), subcontracted to the Innovation Centre in Digital Molecular Technologies (iDMT, Yusuf Hamied Department of Chemistry, University of Cambridge). Z.J.L. acknowledges support from the Marie Skłodowska-Curie Actions Innovative Training Networks through a Marie Curie Fellowship (101072732) as part of Horizon Europe 2021, underwritten by United Kingdom Research and Innovation (UKRI EP/X034763/1). Z.E. acknowledges financial support from the Mastercard Foundation Scholarship. This work was further supported by the facilities and resources provided by the Innovation Centre in Digital Molecular Technologies. An element in Fig. 1a was created using BioRender templates and modified prior to use.