
Deep relative metric learning for visual tracking (cited by: 8)
Abstract: While traditional tracking-by-detection methods are fairly robust, their simple classification of target versus background cannot model the relative structural relationship between the target and the background, and it is this lack of relative structural discriminative information that often causes a tracker to drift. To alleviate the drifting problem, we propose a new visual tracking approach based on deep relative metric learning. We design a deep relative metric learning model with a symmetric, shared-weight deep convolutional neural network. Through this network, we explore the relative structural relationship between the target and the background in large-scale image patches. The highest relative-metric score is then used to locate the tracked object within a Bayesian tracking framework. The whole tracking algorithm is simple and effective. Experimental results on the tracking benchmark show that the proposed algorithm achieves better precision and success rates than other state-of-the-art trackers.
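The symmetric, shared-weight scoring scheme the abstract describes can be sketched roughly as follows. This is an illustrative stand-in, not the authors' implementation: a single random projection plays the role of the deep CNN branch, cosine similarity stands in for the learned relative metric, and the candidate patches stand in for the hypotheses of the Bayesian tracking framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared weights: both branches of the symmetric network use
# the *same* parameters, so target and candidates land in one embedding space.
W = rng.standard_normal((64, 16))

def embed(patch):
    """Shared-weight branch (stand-in for the deep CNN): project a
    flattened 64-dim patch into a 16-dim L2-normalised embedding."""
    v = np.tanh(patch @ W)
    return v / np.linalg.norm(v)

def relative_metric(target, candidate):
    """Relative metric between the two embeddings; cosine similarity is
    used here as one plausible choice, not the paper's learned metric."""
    return float(embed(target) @ embed(candidate))

# Toy data: a target template plus candidate patches sampled around it
# (playing the role of hypotheses in the Bayesian tracking framework),
# and one unrelated "background" patch.
target = rng.standard_normal(64)
candidates = [target + 0.1 * rng.standard_normal(64) for _ in range(5)]
candidates.append(rng.standard_normal(64))

scores = [relative_metric(target, c) for c in candidates]
best = int(np.argmax(scores))  # highest relative-metric score wins
```

As in the abstract, localisation reduces to taking the candidate with the maximum relative-metric score at each frame; the learned metric is what separates structurally similar patches from background clutter.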
Source: Scientia Sinica Informationis (《中国科学:信息科学》), CSCD, PKU Core, 2018, No. 1, pp. 60-78 (19 pages)
Funding: National Natural Science Foundation of China (Grant Nos. 61572296, 61603372, 61572498, 614722227, 61303086, 61307041, 61672327); National Basic Research Program of China (973 Program) (Grant No. 2012CB316304); Natural Science Foundation of Shandong Province (Grant No. ZR2015FL020); Open Project of the National Laboratory of Pattern Recognition (Grant No. 201600024)
Keywords: relative attribute, metric learning, convolutional neural network (CNN), visual tracking


