Journal Articles
2 articles found
1. Long-time target tracking algorithm based on re-detection multi-feature fusion (Cited by: 1)
Authors: Junsuo Qu, Chenxue Tang, Yuan Zhang, Kai Zhou, Abolfazl Razi. IET Cyber-Systems and Robotics, EI, 2022, No. 1, pp. 38-50 (13 pages).
Abstract: This paper considers the problem of long-term target tracking in complex scenes, where tracking failures are unavoidable due to illumination change, target deformation, scale change, motion blur, and other factors. Specifically, a target tracking algorithm called re-detection multi-feature fusion is proposed, based on the fusion of scale-adaptive kernel correlation filtering and re-detection. The algorithm trains three kernel correlation filters on histogram of oriented gradients (HOG), colour name, and local binary pattern features, obtains fusion weights for the response maps of the different features from the average peak-to-correlation energy (APCE) criterion, and uses a weighted average to estimate the position of the tracked target. To handle the target being occluded or disappearing during tracking, a random fern classifier is trained to perform re-detection when occlusion occurs. Experimental results on the OTB-50 target tracking dataset show that the proposed tracker tracks targets well in occlusion-attribute video sequences of the OTB-100 test dataset and improves tracking accuracy and success rate over traditional correlation filter trackers.
Keywords: machine learning, pedestrian identification, robustness, visual surveillance, visual tracking
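The APCE-weighted response fusion described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy 32x32 response maps stand in for the real HOG/colour-name/LBP correlation responses, and the function names are hypothetical.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of one filter's response map."""
    peak, low = response.max(), response.min()
    return (peak - low) ** 2 / np.mean((response - low) ** 2)

def fuse_responses(responses):
    """Weight each feature's response map by its normalised APCE and sum."""
    scores = np.array([apce(r) for r in responses])
    weights = scores / scores.sum()
    return sum(w * r for w, r in zip(weights, responses))

# Toy stand-ins for the HOG / colour-name / LBP response maps.
rng = np.random.default_rng(0)
maps = [rng.random((32, 32)) for _ in range(3)]

fused = fuse_responses(maps)
row, col = np.unravel_index(fused.argmax(), fused.shape)  # estimated target position
```

Because the weights are normalised to sum to one, the fused map is a convex combination of the individual responses, so a single unreliable feature (low APCE) contributes little to the position estimate.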
2. Novel vision-LiDAR fusion framework for human action recognition based on dynamic lateral connection (Cited by: 1)
Authors: Fei Yan, Guangyao Jin, Zheng Mu, Shouxing Zhang, Yinghao Cai, Tao Lu, Yan Zhuang. IET Cyber-Systems and Robotics, 2024, No. 4, pp. 21-31 (11 pages).
Abstract: In the past decades, substantial progress has been made in human action recognition. However, most existing studies and datasets for human action recognition use still images or videos as the primary modality, and image-based approaches are easily impacted by adverse environmental conditions. In this paper, the authors propose combining RGB images and point clouds from LiDAR sensors for human action recognition. A dynamic lateral convolutional network (DLCN) is proposed to fuse features from the two modalities. In the DLCN, the RGB features and the geometric information from the point clouds interact closely and are complementary for action recognition. Experimental results on the JRDB-Act dataset demonstrate that the proposed DLCN outperforms state-of-the-art human action recognition approaches, and the authors show its potential in various complex scenarios, which is highly valuable in real-world applications.
Keywords: neural network-based, pedestrian identification, sensor fusion
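The abstract does not detail the DLCN architecture, but the idea of an input-dependent ("dynamic") lateral connection between an RGB feature map and a LiDAR feature map projected onto the same grid can be sketched with a simple elementwise gate. This is an assumed simplification for illustration only: the sigmoid gate stands in for the learned lateral connection, and all names and shapes are hypothetical.

```python
import numpy as np

def dynamic_lateral_fuse(img_feat, lidar_feat):
    """Fuse an RGB feature map with an aligned LiDAR feature map via an
    input-dependent sigmoid gate (a simplified stand-in for a learned
    dynamic lateral connection)."""
    gate = 1.0 / (1.0 + np.exp(-(img_feat + lidar_feat)))  # elementwise in (0, 1)
    return gate * img_feat + (1.0 - gate) * lidar_feat

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 16, 16))  # C x H x W image features (toy)
pts = rng.standard_normal((8, 16, 16))  # LiDAR features projected to the image grid (toy)

fused = dynamic_lateral_fuse(img, pts)
```

Since the gate lies in (0, 1), each fused value is a convex combination of the two modalities, so the fusion can lean on geometry when the image branch is weak (e.g. in poor lighting) and vice versa.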