Abstract
To address the feature mismatch problem caused by imaging-mechanism differences and viewpoint variations in cross-modal matching, an adaptive matching method for SAR and optical heterogeneous images is proposed. To avoid the difficulty of handcrafted feature design and the high cost of data collection and annotation, the LightGlue deep matcher is innovatively adapted to the SAR-optical matching scenario for the first time. To handle cross-modal feature representation discrepancies, a cascaded feature-alignment preprocessing pipeline is designed, incorporating non-local means denoising, Gaussian filtering, and CLAHE enhancement in Lab color space to improve feature consistency. To handle cross-viewpoint rotation variations, a two-stage dynamic-step search algorithm based on multi-metric constraints is proposed, which optimizes angular accuracy via a dynamic exponential-decay strategy and computational efficiency via a two-stage parallel strategy. Experiments on high-resolution aerial images demonstrate that the proposed method achieves a matching success rate of 84.21% and an angular estimation accuracy of ±1.875°, and reduces matching time to 15.33 s, providing a new technical paradigm for heterogeneous image registration.
Authors
ZHOU Shuangqian; YANG Zijun; MA Aitong; ZHOU Huijie; NIU Yifeng (College of Intelligence Science and Technology, National University of Defense Technology, Changsha, Hunan 410073, China)
Source
Journal of Naval Aviation University, 2026, No. 1, pp. 197-206 (10 pages)
Funding
National Natural Science Foundation of China (62576349).