
Multimodal Representation Learning Based on Personalized Graph-Based Fusion for Mortality Prediction Using Electronic Medical Records

Abstract: Predicting mortality risk in the Intensive Care Unit (ICU) using Electronic Medical Records (EMR) is crucial for identifying patients in need of immediate attention. However, the incompleteness and variability of EMR features for each patient make mortality prediction challenging. This study proposes a multimodal representation learning framework based on a novel personalized graph-based fusion approach to address these challenges. The approach constructs patient-specific modality aggregation graphs that capture which features are available for each patient in the incomplete multimodal data, enabling effective and explainable fusion of the incomplete features. Modality-specific encoders are employed to encode each modality separately. To handle the variability and incompleteness of input features across patients, a novel personalized graph-based fusion method fuses patient-specific multimodal feature representations based on the constructed modality aggregation graphs. Furthermore, a MultiModal Gated Contrastive Representation Learning (MMGCRL) method is proposed to capture adequate complementary information from the multimodal representations and improve model performance. We evaluate the proposed framework on the large-scale ICU dataset MIMIC-III. Experimental results demonstrate its effectiveness for mortality prediction, outperforming several state-of-the-art methods.
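The abstract describes an architecture built from modality-specific encoders, a patient-specific modality aggregation graph over the observed modalities, graph-based fusion, and a gated read-out. The PyTorch sketch below illustrates one way such a pipeline could be wired together for incomplete multimodal EMR input; all class names, dimensions, and the particular message-passing and gating rules are illustrative assumptions rather than the authors' implementation, and the contrastive MMGCRL objective is omitted.

# Hypothetical sketch of personalized graph-based fusion over observed modalities.
# Assumed names and rules; not the paper's actual code.
import torch
import torch.nn as nn

class PersonalizedGraphFusion(nn.Module):
    def __init__(self, modality_dims, hidden_dim=64):
        super().__init__()
        # One encoder per modality (e.g., vitals, labs, note embeddings).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in modality_dims]
        )
        self.gate = nn.Linear(hidden_dim, hidden_dim)   # gated read-out over modality nodes
        self.classifier = nn.Linear(hidden_dim, 1)      # mortality risk logit

    def forward(self, inputs, mask):
        # inputs: list of (batch, dim_m) tensors, one per modality (zeros if missing)
        # mask:   (batch, num_modalities), 1 where the modality is observed
        h = torch.stack([enc(x) for enc, x in zip(self.encoders, inputs)], dim=1)
        # Patient-specific aggregation graph: connect only observed modalities,
        # so missing modalities neither send nor receive messages.
        adj = mask.unsqueeze(2) * mask.unsqueeze(1)               # (batch, M, M)
        adj = adj / adj.sum(dim=2, keepdim=True).clamp(min=1)     # row-normalize
        h = torch.bmm(adj, h)                                     # one round of message passing
        # Gated pooling over the observed modality nodes.
        g = torch.sigmoid(self.gate(h)) * h
        pooled = (g * mask.unsqueeze(2)).sum(dim=1) / mask.sum(dim=1, keepdim=True).clamp(min=1)
        return self.classifier(pooled).squeeze(-1)                # (batch,) mortality logits

In this sketch the adjacency matrix plays the role of the patient-specific modality aggregation graph: patients with different observed modalities induce different graphs, so fusion adapts per patient without imputing missing modalities.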
Source: Big Data Mining and Analytics (大数据挖掘与分析), 2025, Issue 4, pp. 933-950 (18 pages).
Funding: Supported by the National Natural Science Foundation of China (No. U24A20256) and the Science and Technology Major Project of Changsha (No. kh2402004).
