Funding: Sponsored by the Tsinghua-Toyota Joint Research Institute Interdisciplinary Program.
Abstract: Background: The electrocardiogram (ECG) is a valuable, noninvasive tool for monitoring heart-related conditions, providing critical insights. However, the interpretation of ECG data alongside patient information demands substantial medical expertise and resources. While deep learning methods help streamline this process, they often fall short in integrating patient data with ECG readings and do not provide the nuanced clinical suggestions and insights necessary for accurate diagnosis. Methods: Although recent advances in multi-modal large language models have extended their application scope beyond the natural language processing domain, their applicability to ECG processing remains largely unexplored, partly due to the lack of paired text–ECG data. To this end, we develop the ECG-Language Model (ECG-LM), the first multi-modal large language model able to process natural language and understand ECG signals. The model employs a specialized ECG encoder that transforms raw ECG signals into a high-dimensional feature space, which is then aligned with the textual feature space derived from the large language model. To address the scarcity of text–ECG data, we generate text–ECG pairs by leveraging detailed ECG pattern descriptions from medical guidelines, creating a robust dataset for pre-training ECG-LM. Additionally, we fine-tune ECG-LM on public clinical conversation datasets and build an additional supervised fine-tuning dataset from real hospital clinical data, aiming to provide a more comprehensive and customized user experience. Results: ECG-LM outperforms existing few-shot and zero-shot solutions in cardiovascular disease detection across all three tasks (diagnostic, rhythm, and form) while also demonstrating strong potential in ECG-related question answering. Conclusions: The results across various tasks demonstrate that ECG-LM effectively captures the intricate features of ECGs, showcasing its versatility in applications such as disease prediction and advanced question answering.
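The encode-then-align step described in the abstract can be sketched as follows. This is a minimal illustrative toy, not ECG-LM's actual architecture: all dimensions, weight initializations, and function names (`ecg_encoder`, `to_text_space`) are assumptions, since the abstract does not specify them.

```python
import math
import random

# Illustrative (assumed) dimensions: the ECG encoder's output size and the
# LLM's textual embedding size. Real models would use far larger values.
ECG_DIM, TEXT_DIM = 8, 16
random.seed(0)

def ecg_encoder(signal, weights):
    """Toy stand-in for the specialized ECG encoder: one linear layer + tanh."""
    return [math.tanh(sum(w * x for w, x in zip(row, signal))) for row in weights]

def to_text_space(features, projection):
    """Project ECG features into the LLM's textual feature space."""
    return [sum(p * f for p, f in zip(row, features)) for row in projection]

# Randomly initialized weights, purely for shape illustration.
enc_w = [[random.gauss(0, 0.1) for _ in range(4)] for _ in range(ECG_DIM)]
proj_w = [[random.gauss(0, 0.1) for _ in range(ECG_DIM)] for _ in range(TEXT_DIM)]

segment = [0.0, 1.2, -0.4, 0.1]  # tiny mock ECG segment (4 samples)
aligned = to_text_space(ecg_encoder(segment, enc_w), proj_w)
print(len(aligned))  # 16: the ECG segment now lives in the text feature space
```

Once ECG features share the LLM's textual space, the language model can attend over them exactly as it would over token embeddings, which is what enables the question-answering behavior the abstract reports.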
Funding: Funded by the National Key R&D Program of China (2022YFF1203002).
Abstract: Background: In real-world drug discovery, human experts typically grasp molecular knowledge of drugs and proteins from multimodal sources, including molecular structures, structured knowledge from knowledge bases, and unstructured knowledge from biomedical literature. Existing multimodal approaches in AI drug discovery integrate either structured or unstructured knowledge independently, which compromises the holistic understanding of biomolecules. Moreover, they fail to address the missing modality problem, where multimodal information is unavailable for novel drugs and proteins. Methods: In this work, we present KEDD, a unified, end-to-end deep learning framework that jointly incorporates both structured and unstructured knowledge for a wide range of AI drug discovery tasks. The framework first applies independent representation learning models to extract the underlying characteristics of each modality. It then applies a feature fusion technique to calculate the prediction results. To mitigate the missing modality problem, we leverage sparse attention and a modality masking technique to reconstruct the missing features based on the top relevant molecules. Results: Benefiting from structured and unstructured knowledge, our framework achieves a deeper understanding of biomolecules. KEDD outperforms state-of-the-art models by an average of 5.2% on drug–target interaction prediction, 2.6% on drug property prediction, 1.2% on drug–drug interaction prediction, and 4.1% on protein–protein interaction prediction. Through qualitative analysis, we reveal KEDD's promising potential in assisting real-world applications. Conclusions: By incorporating biomolecular expertise from multimodal knowledge, KEDD bears promise in accelerating drug discovery.
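The missing-modality idea above can be sketched as follows: when a drug lacks one modality (say, a knowledge-base embedding), reconstruct it as an attention-weighted combination of that modality's embeddings from the top-k most similar molecules, with similarity measured on an available modality. Everything here (function names, the softmax-over-top-k form, the toy embeddings) is an illustrative assumption, not KEDD's actual implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def reconstruct_missing(query_struct, bank_struct, bank_knowledge, k=2):
    """Rebuild a missing knowledge embedding from the k most similar molecules.

    Keeping only the top-k neighbors is the "sparse" part of the attention:
    all other molecules get zero weight.
    """
    top = sorted(range(len(bank_struct)),
                 key=lambda i: cosine(query_struct, bank_struct[i]),
                 reverse=True)[:k]
    weights = [math.exp(cosine(query_struct, bank_struct[i])) for i in top]
    z = sum(weights)
    dim = len(bank_knowledge[0])
    # Softmax-weighted average of the neighbors' knowledge embeddings.
    return [sum(w / z * bank_knowledge[i][d] for w, i in zip(weights, top))
            for d in range(dim)]

# Toy "bank" of known molecules: structure embeddings + knowledge embeddings.
bank_struct = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
bank_knowledge = [[1.0, 2.0], [1.2, 1.8], [5.0, 5.0]]

# A novel molecule with a structure embedding but no knowledge embedding.
recon = reconstruct_missing([1.0, 0.05], bank_struct, bank_knowledge)
print(len(recon))  # 2: a reconstructed knowledge embedding
```

Because the query's structure is close to the first two bank molecules, the reconstruction is a blend of their knowledge embeddings and ignores the dissimilar third molecule, which is the intended behavior when multimodal information is missing for novel drugs.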