Optical remote sensing object detection based on fused feature contrast of subwindows (Cited by: 7)
Abstract: A detection method for optical remote sensing targets is proposed based on the fused feature contrast of subwindows. First, a large number of sliding windows of varying sizes are generated on training images, and four feature scores are computed within each window: multi-scale saliency, affine-covariant region contrast, edge density contrast, and superpixel straddling. The threshold parameters of each feature are learned on a validation set by maximizing window overlap (localization accuracy) and posterior probability. The features are then combined in a Naive Bayes framework and a classifier is trained. In the detection stage, the multi-scale saliency score is first computed for all windows of a test image, and a subset of windows with high saliency and sizes matching the targets to be detected is selected preliminarily. The remaining three feature scores are then computed for the selected windows, and the posterior probability of each window is obtained from the trained classifier. Finally, windows with locally high scores are selected and merged to produce the final detection result. Detection experiments on three types of remote sensing targets (planes, oil tanks, and ships) show that the four features perform differently when describing the three target classes individually, with the highest single-feature accuracy ranging from 74.21% to 80.32%; the fusion scheme accounts for the characteristics of each target and raises the accuracy to 80.78%-87.30%. Compared with the fixed-number sliding-window method, the accuracy improves from about 80% to about 85%, and the false alarm rate drops from about 20% to about 3%. Owing to the intermediate selection step, the number of final high-score regions is reduced by about 90% and the test time by about 25%. The results show that the method greatly improves both detection accuracy and algorithm efficiency.
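The Naive Bayes fusion step described in the abstract can be sketched as follows. This is a minimal illustration only: the binarization thresholds and the per-cue likelihood tables below are made-up placeholder values, not the learned parameters from the paper, and the four cues (saliency, region contrast, edge density, superpixel straddling) are represented abstractly as raw scores per window.

```python
import numpy as np

def naive_bayes_posterior(scores, thresholds, likelihoods, prior_obj=0.5):
    """Fuse binarized per-window feature scores under a Naive Bayes model.

    scores:      (n_windows, n_features) raw feature scores per window
    thresholds:  (n_features,) learned cutoffs that binarize each cue
    likelihoods: dict with "obj" = P(cue fires | object) and
                 "bg"  = P(cue fires | background), each (n_features,)
    Returns P(object | observed cues) for every window.
    """
    # Binarize each cue: does its score exceed the learned threshold?
    cues = scores >= thresholds

    p_obj = likelihoods["obj"]
    p_bg = likelihoods["bg"]

    # Naive Bayes: cues are conditionally independent given the class,
    # so the class likelihood is a product over per-cue terms.
    like_obj = np.prod(np.where(cues, p_obj, 1.0 - p_obj), axis=1)
    like_bg = np.prod(np.where(cues, p_bg, 1.0 - p_bg), axis=1)

    # Bayes' rule for the posterior probability of "object".
    evidence = prior_obj * like_obj + (1.0 - prior_obj) * like_bg
    return prior_obj * like_obj / evidence

if __name__ == "__main__":
    # Two toy windows: one where all four cues fire strongly, one where none do.
    scores = np.array([[0.9, 0.9, 0.9, 0.9],
                       [0.1, 0.1, 0.1, 0.1]])
    thresholds = np.full(4, 0.5)                       # placeholder thresholds
    likelihoods = {"obj": np.full(4, 0.8),             # placeholder likelihoods
                   "bg": np.full(4, 0.2)}
    print(naive_bayes_posterior(scores, thresholds, likelihoods))
```

With these placeholder likelihoods, the window where every cue fires receives a posterior near 1, while the window where no cue fires receives one near 0, mirroring how the fused score separates target candidates from background windows before the local-maximum selection and merging step.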
Source: Optics and Precision Engineering (EI, CAS, CSCD, Peking University Core), 2016, Issue 8, pp. 2067-2077 (11 pages).
Funding: National Natural Science Foundation of China (No. 41301480, No. 41301382); Natural Science Basic Research Plan of Shaanxi Province (No. 2014JQ5181); Special Scientific Research Program of the Shaanxi Provincial Education Department (No. 14JK1573).
Keywords: optical remote sensing; object detection; fused feature contrast; subwindow; saliency; affine covariant; edge density

相关作者

内容加载中请稍等...

相关机构

内容加载中请稍等...

相关主题

内容加载中请稍等...

浏览历史

内容加载中请稍等...
;
使用帮助 返回顶部