Abstract
Multi-modal neuroimaging technology provides crucial technical support for the early and precise diagnosis of Alzheimer's disease (AD). However, due to the inherent heterogeneity in imaging principles and feature representations across different neuroimaging modalities, the fusion of inter-modal information poses significant challenges. To address this issue, this study proposes a multi-modal fusion network (MFN) based on a 3D ResNet architecture for the early auxiliary diagnosis of AD. The proposed method first employs a 3D ResNet to separately extract feature representations from T1- and T2-weighted magnetic resonance images. Subsequently, an innovative cross-modal feature integration module (CFIM) is designed to overcome the limitations of direct concatenation, which inflates the feature dimension and cannot adaptively adjust modality weights. CFIM adopts a hierarchical fusion strategy consisting of a global information fusion module, a local feature learning module, and a key factor module. Finally, the fused multi-modal features are fed into a fully connected neural network for classification. Compared with early concatenation (fixed-weight fusion) and late fusion (shallow aggregation), this strategy more effectively identifies disease-relevant diagnostic features. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate that the proposed method achieves higher accuracy and superior performance in AD classification tasks compared with existing approaches. Ablation studies further validate the effectiveness of each module, offering new technical insights for multi-modal neuroimaging analysis.
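The abstract contrasts CFIM's adaptive modality weighting with the fixed equal weighting of direct concatenation. A minimal NumPy sketch of that idea follows; the function name `adaptive_fusion`, the single gating layer, and all shapes are illustrative assumptions, not the paper's actual CFIM design:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fusion(f_t1, f_t2, W, b):
    """Gate each modality before concatenation (hypothetical CFIM-style step).

    f_t1, f_t2 : (C,) globally pooled features from the T1 / T2 3D ResNet branches
    W, b       : parameters of a small gating layer standing in for CFIM's
                 learned weights (the paper's exact design is not given here)
    """
    g = sigmoid(W @ np.concatenate([f_t1, f_t2]) + b)  # (2,) per-modality gates in (0, 1)
    # Scale each branch by its gate instead of the fixed equal weighting
    # implied by plain concatenation, then concatenate for the classifier.
    return np.concatenate([g[0] * f_t1, g[1] * f_t2]), g

C = 8                          # toy channel count for illustration
f_t1 = rng.standard_normal(C)  # stand-in for pooled T1 features
f_t2 = rng.standard_normal(C)  # stand-in for pooled T2 features
W = rng.standard_normal((2, 2 * C)) * 0.1
b = np.zeros(2)

fused, gates = adaptive_fusion(f_t1, f_t2, W, b)
print(fused.shape)  # (16,)
```

In the actual network the gate parameters would be learned end-to-end and the branch features would come from the two 3D ResNet encoders; random parameters here only illustrate the data flow from per-modality features to an adaptively weighted fused vector.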
Authors
ZHU Houyuan; ZHENG Lele; SHANG Hao; ZANG Xuefeng; WU Shaoqi; ZHOU Guangchao; SUN Jiande; QIAO Jianping (School of Physics and Electronics, Shandong Normal University, Jinan 250307, China; School of Information Science and Engineering, Shandong Normal University, Jinan 250307, China)
Source
《数据采集与处理》
Peking University Core Journal
2025, No. 4, pp. 912-921 (10 pages)
Journal of Data Acquisition and Processing
Funding
City-University Integration Development Strategy Project of Shandong University (JNSX2023038).
Keywords
Alzheimer’s disease(AD)
3D multi-modal fusion network
magnetic resonance images
cross-modal feature integration module
deep learning