Abstract
Retinal image registration helps doctors comprehensively understand the structure of the retina, but the lack of data with ground-truth labels increases the difficulty of registration. Traditional registration methods are inefficient and rely on fixed models and hand-crafted features, making it difficult to handle complex deformations, while deep learning methods, although efficient, are mostly single-stream structures, which can interfere with feature fusion. To address the shortcomings of existing methods in feature extraction, this paper proposes a dual-stream network that extracts global and local features through a Transformer and a CNN respectively, performs feature matching at multiple scales, and introduces vascular information to assist training. Experimental results show that the method significantly improves registration accuracy on color fundus datasets, verifying its effectiveness in deformable medical image registration.
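The abstract's dual-stream, multi-scale idea can be illustrated with a toy sketch. Everything below is hypothetical and only a stand-in for the paper's actual architecture: the Transformer (global) stream is approximated by adding image-wide mean context, and the CNN (local) stream by a 3×3 box-filter response, fused at two scales via striding.

```python
# Hypothetical sketch of a dual-stream, multi-scale feature extractor.
# NOT the paper's network: the real model uses a Transformer (global stream)
# and a CNN (local stream); here simple NumPy operations stand in for both.
import numpy as np

def global_stream(img):
    # Stand-in for Transformer global features: augment each pixel
    # with the image-wide mean (a crude form of global context).
    return img + img.mean()

def local_stream(img):
    # Stand-in for CNN local features: 3x3 box-filter response.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

def dual_stream_features(img, scales=(1, 2)):
    # Fuse both streams at several scales (downsampling by striding),
    # mimicking multi-scale feature matching.
    feats = []
    for s in scales:
        sub = img[::s, ::s]
        feats.append(np.stack([global_stream(sub), local_stream(sub)]))
    return feats

img = np.arange(16, dtype=float).reshape(4, 4)
feats = dual_stream_features(img)
print([f.shape for f in feats])  # [(2, 4, 4), (2, 2, 2)]
```

Each scale yields a 2-channel map (global + local), which a matching module could then compare between the moving and fixed images.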
Authors
Wu Shuimiao; Chen Qiang (School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China)
Source
Journal of Nanjing Normal University (Natural Science Edition)
Peking University Core Journal (北大核心)
2025, No. 4, pp. 118-127 (10 pages)
Funding
National Natural Science Foundation of China (92370109, 6217223)
Fundamental Research Funds for the Central Universities (30921013105).