Journal Articles
59 articles found
1. Effective convolution mixed Transformer Siamese network for robust visual tracking
Authors: Lin Chen, Yungang Liu, Yuan Wang. Control Theory and Technology, 2025, No. 2, pp. 221-236 (16 pages)
Siamese tracking algorithms usually take convolutional neural networks (CNNs) as feature extractors owing to their capability of extracting deep discriminative features. However, the convolution kernels in CNNs have limited receptive fields, making it difficult to capture global feature dependencies, which is important for object detection, especially when the target undergoes large-scale variations or movement. In view of this, we develop a novel network called effective convolution mixed Transformer Siamese network (SiamCMT) for visual tracking, which integrates CNN-based and Transformer-based architectures to capture both local information and long-range dependencies. Specifically, we design a Transformer-based module named lightweight multi-head attention (LWMHA), which can be flexibly embedded into stage-wise CNNs and improve the network's representation ability. Additionally, we introduce a stage-wise feature aggregation mechanism which integrates features learned from multiple stages. By leveraging both location and semantic information, this mechanism helps the SiamCMT to better locate and find the target. Moreover, to distinguish the contribution of different channels, a channel-wise attention mechanism is introduced to enhance the important channels and suppress the others. Extensive experiments on seven challenging benchmarks, i.e., OTB2015, UAV123, GOT10K, LaSOT, DTB70, UAVTrack112_L, and VOT2018, demonstrate the effectiveness of the proposed algorithm. Notably, the proposed method outperforms the baseline by 3.5% and 3.1% in terms of precision and success rates with a real-time speed of 59.77 FPS on UAV123.
Keywords: visual tracking; Siamese network; Transformer; feature aggregation; channel-wise attention
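The channel-wise attention the abstract above describes is not given in equations there; this is a minimal numpy sketch of the standard squeeze-and-excitation form of channel attention it references. The weights `w1` and `w2` are random stand-ins for learned parameters, and the reduction ratio is an assumption.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention: weight each channel of a C x H x W
    feature map by a learned gate in (0, 1), enhancing important
    channels and suppressing the others."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)      # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)          # FC + ReLU, reduced dimension
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # FC + sigmoid -> per-channel gate
    return feat * gate[:, None, None]               # rescale each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((2, 8))    # hypothetical reduction ratio of 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 5, 5)
```

Because the gate lies in (0, 1), every channel is attenuated in proportion to its estimated importance; the feature map's shape is unchanged, so the module can be dropped between CNN stages.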
2. Robust visual tracking using temporal regularization correlation filter with high-confidence strategy
Authors: Xiao-Gang Dong, Ke-Xuan Li, Hong-Xia Mao, Chen Hu, Tian Pu. Journal of Electronic Science and Technology, 2025, No. 2, pp. 81-96 (16 pages)
Target tracking is an essential task in contemporary computer vision applications. However, its effectiveness is susceptible to model drift caused by changes in target appearance, which often compromises tracking robustness and precision. In this paper, a universally applicable method based on correlation filters is introduced to mitigate model drift in complex scenarios. It employs temporal-confidence samples as a prior to guide the model update process and ensure its precision and consistency over a long period. An improved update mechanism based on the peak side-lobe to peak correlation energy (PSPCE) criterion is proposed, which selects high-confidence samples along the temporal dimension to update the temporal-confidence samples. Extensive experiments on various benchmarks demonstrate that the proposed method achieves competitive performance compared with state-of-the-art methods. Especially when the target appearance changes significantly, our method is more robust and achieves a balance between precision and speed. Specifically, on the object tracking benchmark (OTB-100) dataset, compared to the baseline, the tracking precision of our model improves by 8.8%, 8.8%, 5.1%, 5.6%, and 6.9% for background clutter, deformation, occlusion, rotation, and illumination variation, respectively. The results indicate that the proposed method can significantly enhance the robustness and precision of target tracking in dynamic and challenging environments, offering a reliable solution for applications such as real-time monitoring, autonomous driving, and precision guidance.
Keywords: appearance changes; correlation filter; high-confidence strategy; temporal regularization; visual tracking
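The abstract does not define the PSPCE criterion; as an illustration of how such high-confidence response-map measures work, here is a sketch of the closely related average peak-to-correlation energy (APCE), which likewise separates sharp, confident detections from ambiguous ones. The exact formula is an assumption, not the paper's.

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a correlation response map.
    A high value means one sharp peak dominates (confident detection);
    a low value means a flat or noisy response (uncertain detection)."""
    peak = response.max()
    trough = response.min()
    energy = np.mean((response - trough) ** 2)
    return (peak - trough) ** 2 / (energy + 1e-12)

rng = np.random.default_rng(0)
sharp = np.zeros((31, 31)); sharp[15, 15] = 1.0   # single clean peak
noisy = rng.random((31, 31))                      # ambiguous, noise-like response
print(apce(sharp) > apce(noisy))  # True
```

A tracker following the paper's strategy would update its filter only on frames whose confidence score exceeds a running threshold, so corrupted frames never contaminate the model.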
3. Teacher-student learning of generative adversarial network-guided diffractive neural networks for visual tracking and imaging
Authors: Hang Su, Yanping He, Baoli Li, Haitao Luan, Min Gu, Xinyuan Fang. Advanced Photonics Nexus, 2024, No. 6, pp. 87-97 (11 pages)
Efficiently tracking and imaging moving targets of interest is crucial across various applications, from autonomous systems to surveillance. However, persistent challenges remain in various fields, including environmental intricacies, limitations in perceptual technologies, and privacy considerations. We present a teacher-student learning model, the generative adversarial network (GAN)-guided diffractive neural network (DNN), which performs visual tracking and imaging of a moving target of interest. The GAN, as a teacher model, enables efficient acquisition of the skill of differentiating the specific target of interest in visual tracking and imaging. The DNN-based student model learns this differentiation skill from the GAN. The process of obtaining a GAN-guided DNN starts with capturing moving objects effectively using an event camera with high temporal resolution and low latency. Then, the generative power of the GAN is utilized to generate data with position-tracking capability for the moving target of interest, which subsequently serves as labels for training the DNN. The DNN learns to image the target during training while retaining the target's positional information. Our experimental demonstration highlights the efficacy of the GAN-guided DNN in visual tracking and imaging of the moving target of interest. We expect the GAN-guided DNN to significantly enhance autonomous systems and surveillance.
Keywords: visual tracking; diffractive neural network; generative adversarial network; teacher-student learning; event-based camera; optical machine learning
4. 3-D visual tracking based on CMAC neural network and Kalman filter (cited 3 times)
Authors: 王化明, 罗翔, 朱剑英. Journal of Southeast University (English Edition), EI CAS, 2003, No. 1, pp. 58-63 (6 pages)
In this paper, the Kalman filter is used to predict the image feature position, around which an image-processing window is then established to diminish the feature-searching area and increase the image-processing speed. According to the fundamentals of image-based visual servoing (IBVS), the cerebellar model articulation controller (CMAC) neural network is inserted into the visual servo control loop to implement the nonlinear mapping from the error signal in the image space to the control signal in the input space, instead of the iterative adjustment and complicated inverse solution of the image Jacobian. Simulation results show that the feature point can be predicted efficiently using the Kalman filter, on-line supervised learning can be realized using the CMAC neural network, and the end-effector can track the target object very well.
Keywords: visual tracking; CMAC neural network; Kalman filter
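The Kalman-filter prediction step the abstract describes (predict the feature position, center a small search window there, then correct with the measurement) can be sketched for a 2D image feature under a constant-velocity model. The noise covariances and measurements below are illustrative values, not the paper's.

```python
import numpy as np

# Constant-velocity Kalman filter over state (x, y, vx, vy) for a 2D
# image feature; the predicted (x, y) centers the image-processing window.
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # state transition, dt = 1 frame
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # only pixel position is measured
Q = np.eye(4) * 0.01                  # process noise (assumed)
R = np.eye(2) * 1.0                   # measurement noise (assumed)

x = np.array([0.0, 0.0, 2.0, 1.0])   # initial state: 2 px/frame right, 1 px/frame down
P = np.eye(4)

for z in [np.array([2.1, 0.9]), np.array([4.0, 2.2]), np.array([5.8, 3.1])]:
    x, P = F @ x, F @ P @ F.T + Q            # predict: search window goes here
    innov = z - H @ x                        # innovation (measurement residual)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ innov                        # correct with the detected feature
    P = (np.eye(4) - K @ H) @ P

print(np.round(x[:2], 1))  # position estimate, close to the last measurement
```

Restricting feature search to the predicted window is what buys the speed-up claimed in the abstract: the detector scans a few hundred pixels instead of the whole frame.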
5. Advances in Deep Learning Methods for Visual Tracking: Literature Review and Fundamentals (cited 5 times)
Authors: Xiao-Qin Zhang, Run-Hua Jiang, Chen-Xiang Fan, Tian-Yu Tong, Tao Wang, Peng-Cheng Huang. International Journal of Automation and Computing, EI CSCD, 2021, No. 3, pp. 311-333 (23 pages)
Recently, deep learning has achieved great success in visual tracking tasks, particularly in single-object tracking. This paper provides a comprehensive review of state-of-the-art single-object tracking algorithms based on deep learning. First, we introduce basic knowledge of deep visual tracking, including fundamental concepts, existing algorithms, and previous reviews. Second, we briefly review existing deep learning methods by categorizing them into data-invariant and data-adaptive methods based on whether they can dynamically change their model parameters or architectures. Then, we summarize the general components of deep trackers. In this way, we systematically analyze the novelties of several recently proposed deep trackers. Thereafter, popular datasets such as the Object Tracking Benchmark (OTB) and Visual Object Tracking (VOT) are discussed, along with the performances of several deep trackers. Finally, based on observations and experimental results, we discuss three different characteristics of deep trackers, i.e., the relationships between their general components, the exploration of more effective tracking frameworks, and the interpretability of their motion estimation components.
Keywords: deep learning; visual tracking; data-invariant; data-adaptive; general components
6. Optimized Meanshift Target Reference Model Based on Improved Pixel Weighting in Visual Tracking (cited 4 times)
Authors: Chen Ken, Song Kangkang, Kyoungho Choi, Guo Yunyan. Journal of Electronics (China), 2013, No. 3, pp. 283-289 (7 pages)
The generic Meanshift is susceptible to interference of background pixels with the target pixels in the kernel of the reference model, which compromises tracking performance. In this paper, we enhance the target color feature by attenuating the background color within the kernel through enlarging the weightings of the pixels that map onto the target. This way, background pixel interference is largely suppressed in the color histogram in the course of constructing the target reference model. In addition, the proposed method reduces the number of Meanshift iterations, which speeds up algorithmic convergence. Two tests validate the proposed approach, showing improved tracking robustness on real-world video sequences.
Keywords: visual tracking; Meanshift; color feature histogram; pixel weighting; tracking robustness
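The pixel-weighted reference histogram the abstract describes can be sketched as follows: each pixel votes into a color bin with a weight, and weighting on-target pixels more heavily attenuates background colors in the model. The kernel shape and single grayscale channel here are illustrative assumptions; the paper's weighting scheme is its own contribution.

```python
import numpy as np

def weighted_histogram(patch, weights, bins=8):
    """Color histogram of a grayscale patch in [0, 1) where each pixel's
    vote is scaled by a weight; enlarging on-target weights suppresses
    background colors in the reference model."""
    idx = np.clip((patch * bins).astype(int), 0, bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), weights.ravel())   # weighted, unbuffered accumulation
    return hist / (hist.sum() + 1e-12)              # normalize to a distribution

rng = np.random.default_rng(1)
patch = rng.random((16, 16))
# Hypothetical weighting: an Epanechnikov-like kernel favoring central (target) pixels,
# so border (background) pixels contribute little to the histogram.
yy, xx = np.mgrid[-1:1:16j, -1:1:16j]
weights = np.maximum(1.0 - (xx ** 2 + yy ** 2), 0.0)
hist = weighted_histogram(patch, weights)
print(round(hist.sum(), 6))  # 1.0 - a normalized reference model
```

With a real color patch, the same function is applied per channel or over a joint color index; the Meanshift iterations then compare candidate histograms against this reference.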
7. Visual tracking based on transfer learning of deep salience information (cited 3 times)
Authors: Haorui Zuo, Zhiyong Xu, Jianlin Zhang, Ge Jia. Opto-Electronic Advances, 2020, No. 9, pp. 30-40 (11 pages)
In this paper, we propose a new visual tracking method based on salience information and deep learning. Salience detection is used to exploit features carrying salient information of the image. Complicated representations of image features can be gained by the function of every layer in a convolutional neural network (CNN). The attention-based salience characteristic of biological vision is similar to the feature hierarchy of convolutional neural networks, which motivates us to improve the representation ability of CNNs with salience detection. We adopt fully convolutional networks (FCNs) to perform salience detection and take parts of the network structure to perform salience extraction, which promotes the classification ability of the model. The proposed network shows strong tracking performance with the salient information. Compared with other excellent algorithms, our algorithm tracks the target better on open tracking datasets. We achieve an accuracy of 0.5592 on the Visual Object Tracking 2015 (VOT15) dataset. On the unmanned aerial vehicle 123 (UAV123) dataset, the precision and success rate of our tracker are 0.710 and 0.429, respectively.
Keywords: convolutional neural network; transfer learning; salience detection; visual tracking
8. Sensor planning method for visual tracking in 3D camera networks (cited 1 time)
Authors: Anlong Ming, Xin Chen. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2014, No. 6, pp. 1107-1116 (10 pages)
Most sensors or cameras discussed in the sensor network community are usually treated as 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous. Meanwhile, the observed objects of camera networks are usually simplified as 2D points in previous literature. However, in actual application scenes, not only are cameras heterogeneous, with different heights and action radiuses, but the observed objects also have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency enhancement of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The problem of sensor planning consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank the visibility, which ensures that the object of interest is visible in a camera's field of view; (iii) how to reconfigure the 3D viewing orientations of the cameras. This paper studies the geometric properties of 3D heterogeneous camera networks and addresses an evaluation formulation to rank the visibility of observed objects. Then a sensor planning method is proposed to improve the efficiency of visual tracking. Finally, numerical results show that the proposed method can improve the tracking performance of the system compared to conventional strategies.
Keywords: camera model; sensor planning; camera network; visual tracking
9. Real-Time Visual Tracking with Compact Shape and Color Feature (cited 1 time)
Authors: Zhenguo Gao, Shixiong Xia, Yikun Zhang, Rui Yao, Jiaqi Zhao, Qiang Niu, Haifeng Jiang. Computers, Materials & Continua, SCIE EI, 2018, No. 6, pp. 509-521 (13 pages)
The colour feature is often used in object tracking. Existing tracking methods extract the colour features of the object and the background and distinguish them with a classifier. However, these methods simply use the colour information of the target pixels and do not consider the shape feature of the target, so the descriptive capability of the feature is weak. Moreover, incorporating shape information often leads to a large feature dimension, which is not conducive to real-time object tracking. Recently, the emergence of visual tracking methods based on deep learning has also greatly increased the demand for computing resources. In this paper, we propose a real-time visual tracking method with a compact shape and colour feature, which forms a low-dimensional compact feature by fusing the shape and colour characteristics of the candidate object region and reduces the dimensionality of the combined feature through a hash function. A structural classification function is trained and updated online with dynamic data flow to adapt to new frames. Further, classification and prediction of the object are carried out with the structured classification function. The experimental results demonstrate that the proposed tracker performs superiorly against several state-of-the-art algorithms on the challenging benchmark datasets OTB-100 and OTB-13.
Keywords: visual tracking; compact feature; colour feature; structural learning
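The abstract's hash-based dimensionality reduction is not specified further; a common realization of the idea is feature hashing, where each input dimension is assigned a random bucket and sign. This sketch is an illustrative stand-in under that assumption, not the paper's actual hash function.

```python
import numpy as np

def hash_compress(feature, out_dim=64, seed=7):
    """Compress a long fused shape+colour feature into a compact vector
    by signed feature hashing: each input dimension adds (+/-) its value
    into one of out_dim buckets."""
    rng = np.random.default_rng(seed)                  # fixed seed = fixed hash
    idx = rng.integers(0, out_dim, size=feature.size)  # bucket per input dimension
    sign = rng.choice([-1.0, 1.0], size=feature.size)  # signs reduce collision bias
    compact = np.zeros(out_dim)
    np.add.at(compact, idx, sign * feature)
    return compact

long_feat = np.random.default_rng(2).random(4096)      # hypothetical fused feature
compact = hash_compress(long_feat)
print(compact.shape)  # (64,)
```

Because the bucket and sign assignments are deterministic for a given seed, the same compression is applied consistently across frames, which the online classifier requires.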
10. Robust visual tracking algorithm based on Monte Carlo approach with integrated attributes (cited 1 time)
Authors: 席涛, 张胜修, 颜诗源. Journal of Harbin Institute of Technology (New Series), EI CAS, 2010, No. 6, pp. 771-775 (5 pages)
To improve the reliability and accuracy of a visual tracker, a robust visual tracking algorithm based on multi-cue fusion under a Bayesian framework is proposed. Weighted color and texture cues are applied to describe the moving object. An adjustable observation model is incorporated into particle filtering, which exploits the particle filter's ability to cope with non-linear, non-Gaussian assumptions and to predict the position of the moving object in a cluttered environment. Two complementary attributes are employed to estimate the matching similarity dynamically in terms of likelihood ratio factors; furthermore, the algorithm tunes the weight values on-line and adaptively according to the confidence maps of the color and texture features, to reconfigure the optimal observation likelihood model. This ensures the maximum likelihood ratio in the tracking scenario, even when the object is occluded or when illumination, pose, and scale are time-variant. The experimental results show that the algorithm can track a moving object accurately, and the reliability of tracking in challenging cases is validated in the experiments.
Keywords: visual tracking; particle filter; Gabor wavelet; Monte Carlo approach; multi-cue fusion
11. A creative design of robotic visual tracking system in tailored welded blanks based on TRIZ (cited 1 time)
Authors: 张雷, 赵明扬, 邹媛媛, 赵立华. China Welding, EI CAS, 2006, No. 4, pp. 23-25 (3 pages)
Based on the main tools of TRIZ, the theory of inventive problem solving, a new flowchart of the product conceptual design process for resolving contradictions in TRIZ is proposed. In order to realize autonomous moving and automatic weld seam tracking for a welding robot working on tailored welded blanks, a creative design of a robotic visual tracking system based on CMOS has been developed using the flowchart. The new system is used not only to inspect the workpiece ahead of the welding torch and measure the joint orientation and lateral deviation caused by curvature or discontinuity in the joint part, but also to record and measure the image size of the weld pool. Moreover, the hardware and software components are discussed in brief.
Keywords: visual tracking; creative design; TRIZ
12. Multi-Target Visual Tracking and Occlusion Detection by Combining Bhattacharyya Coefficient and Kalman Filter Innovation (cited 1 time)
Authors: Chen Ken, Chul Gyu Jhun. Journal of Electronics (China), 2013, No. 3, pp. 275-282 (8 pages)
This paper introduces an approach for visual tracking of multiple targets with occlusion occurrence. Based on the authors' previous work, in which the Overlap Coefficient (OC) is used to detect occlusion, this paper proposes a method combining the Bhattacharyya Coefficient (BC) and the Kalman filter innovation term as the criterion for jointly detecting occlusion occurrence. Fragmentation of the target is introduced in order to closely monitor the development of the occlusion. In the course of occlusion, the Kalman predictor is applied to determine the location of the occluded target, and a criterion for checking the re-appearance of the occluded target is also presented. The proposed approach is put to the test on a standard video sequence, suggesting satisfactory performance in multi-target tracking.
Keywords: visual tracking; multi-target occlusion; Bhattacharyya Coefficient (BC); Kalman filter
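The Bhattacharyya Coefficient half of the occlusion criterion above has a standard closed form over normalized histograms, sketched here with hypothetical 4-bin color histograms. A drop in BC at the predicted location, together with a large Kalman innovation, is what the paper uses to flag occlusion.

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya Coefficient between two normalized histograms:
    1.0 for identical distributions, near 0 when they do not overlap."""
    return float(np.sum(np.sqrt(p * q)))

target = np.array([0.5, 0.3, 0.2, 0.0])      # reference model of the tracked target
visible = np.array([0.45, 0.35, 0.2, 0.0])   # candidate region, target still visible
occluded = np.array([0.0, 0.1, 0.2, 0.7])    # occluder with a different color makeup
print(bhattacharyya(target, visible) > bhattacharyya(target, occluded))  # True
```

While BC stays low, the tracker coasts on the Kalman prediction; when BC at the predicted location recovers, the re-appearance criterion hands control back to measurement-driven tracking.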
13. Hierarchical Template Matching for Robust Visual Tracking with Severe Occlusions (cited 1 time)
Authors: Lizuo Jin, Tirui Wu, Feng Liu, Gang Zeng. ZTE Communications, 2012, No. 4, pp. 54-59 (6 pages)
To tackle the problem of severe occlusions in visual tracking, we propose a hierarchical template-matching method based on a layered appearance model. This model integrates holistic- and part-region matching in order to locate an object in a coarse-to-fine manner. Furthermore, in order to reduce ambiguity in object localization, only the discriminative parts of an object's appearance template are chosen for similarity computation, with respect to their cornerness measurements. The similarity between parts is computed in a layer-wise manner, from which occlusions can be evaluated. When the object is partly occluded, it can be located accurately by matching candidate regions with the appearance template. When it is completely occluded, its location can be predicted from its historical motion information using a Kalman filter. The proposed tracker is tested on several practical image sequences, and the experimental results show that it can consistently provide accurate object locations for stable tracking, even under severe occlusions.
Keywords: visual tracking; hierarchical template matching; layered appearance model; occlusion analysis
14. Hybrid Efficient Convolution Operators for Visual Tracking (cited 1 time)
Authors: Yu Wang. Journal on Artificial Intelligence, 2021, No. 2, pp. 63-72 (10 pages)
Visual tracking is a classical computer vision problem with many applications. Efficient convolution operators (ECO) is one of the most outstanding visual tracking algorithms of recent years; it has shown great performance using a discriminative correlation filter (DCF) together with HOG, color maps, and VGGNet features. Inspired by new deep learning models, this paper proposes hybrid efficient convolution operators integrating a fully convolutional network (FCN) and a residual network (ResNet) for visual tracking, where the FCN and ResNet are introduced to segment the objects from backgrounds and to extract hierarchical feature maps of objects, respectively. Compared with the traditional VGGNet, our approach has higher accuracy in dealing with the issues of segmentation and image size. The experiments show that our approach obtains better performance than ECO in terms of the precision plot and success rate plot on the OTB-2013 and UAV123 datasets.
Keywords: visual tracking; deep learning; convolutional neural network; hybrid convolution operator
15. Robust visual tracking for manipulators with unknown intrinsic and extrinsic parameters
Authors: Chaoli Wang, Xueming Ding. Control Theory and Applications (English Edition), EI, 2007, No. 4, pp. 420-426 (7 pages)
This paper addresses the robust visual tracking of multi-feature points for a 3D manipulator with unknown intrinsic and extrinsic parameters of the vision system. This class of control systems is highly nonlinear, characterized by time-varying behavior and strong coupling in states and unknown parameters. It is first pointed out that not only is the image Jacobian matrix nonsingular, but its minimum singular value also has a positive lower limit. This provides the foundation for kinematics and dynamics control of manipulators with visual feedback. Second, the Euler-angle-expressed rotation transformation is employed to estimate a subspace of the parameter space of the vision system. Based on these two results, and with arbitrarily chosen parameters in this subspace, tracking controllers are proposed so that the image errors can be made as small as desired as long as the control gain is allowed to be large. The controller does not use visual velocity, achieving high and robust performance with a low sampling rate of the vision system. The results are proved by the Lyapunov direct method. Experiments are included to demonstrate the effectiveness of the proposed controller.
Keywords: robust visual tracking; manipulator; camera; intrinsic and extrinsic parameters
16. An Adaptive Padding Correlation Filter With Group Feature Fusion for Robust Visual Tracking
Authors: Zihang Feng, Liping Yan, Yuanqing Xia, Bo Xiao. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2022, No. 10, pp. 1845-1860 (16 pages)
In recent visual tracking research, correlation filter (CF) based trackers have become popular because of their high speed and considerable accuracy. Previous methods mainly work on the extension of features and the solution of the boundary effect to learn a better correlation filter, but the related studies are insufficient. By exploring the potential of trackers in these two aspects, a novel adaptive padding correlation filter (APCF) with feature group fusion is proposed for robust visual tracking in this paper, based on the popular context-aware tracking framework. In the tracker, three feature groups are fused by means of the weighted sum of the normalized response maps, to alleviate the risk of drift caused by the extreme change of a single feature. Moreover, to improve the adaptive ability of padding for the filter training of different object shapes, the best padding is selected from a preset pool according to tracking precision over the whole video, where tracking precision is predicted by a model trained on the sequence features of the first several frames. The sequence features include three traditional features and eight newly constructed features. Extensive experiments demonstrate that the proposed tracker is superior to most state-of-the-art correlation filter based trackers and shows a stable improvement over the basic trackers.
Keywords: adaptive padding; context information; correlation filter (CF); feature group fusion; robust visual tracking
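The "weighted sum of the normalized response maps" that fuses the three feature groups above can be sketched directly; normalizing each map first keeps any single feature group from dominating the fused peak. The three feature groups and the fusion weights below are illustrative placeholders, not the paper's learned values.

```python
import numpy as np

def fuse_responses(maps, weights):
    """Weighted sum of per-feature-group correlation response maps, each
    min-max normalized to [0, 1] first; the fused map's peak gives the
    estimated target location."""
    fused = np.zeros_like(maps[0])
    for m, w in zip(maps, weights):
        span = m.max() - m.min()
        fused += w * (m - m.min()) / (span + 1e-12)
    return fused

rng = np.random.default_rng(3)
hog, cn, gray = (rng.random((21, 21)) for _ in range(3))  # hypothetical response maps
fused = fuse_responses([hog, cn, gray], [0.5, 0.3, 0.2])  # weights sum to 1
row, col = np.unravel_index(fused.argmax(), fused.shape)  # predicted target position
print(fused.shape)  # (21, 21)
```

If one feature group degrades sharply on a frame (the drift risk the abstract mentions), its normalized map is flat and low-weighted, so the other two groups still carry a usable peak.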
17. Robust Visual Tracking with Hierarchical Deep Features Weighted Fusion
Authors: Dianwei Wang, Chunxiang Xu, Daxiang Li, Ying Liu, Zhijie Xu, Jing Wang. Journal of Beijing Institute of Technology, EI CAS, 2019, No. 4, pp. 770-776 (7 pages)
To solve the problem of the low robustness of trackers under significant appearance changes in complex backgrounds, a novel moving-target tracking method based on hierarchical deep features weighted fusion and a correlation filter is proposed. First, multi-layer features are extracted by a deep model pre-trained on massive object recognition datasets; the linearly separable features of the Relu3-1, Relu4-1, and Relu5-4 layers of VGG-Net-19 are especially suitable for target tracking. Then, correlation filters over the hierarchical convolutional features are learned to generate their correlation response maps. Finally, a novel weight-adjustment approach is presented to fuse the response maps; the maximum value of the final response map gives the location of the target. Extensive experiments on the object tracking benchmark datasets demonstrate high robustness and recognition precision compared with several state-of-the-art trackers under different conditions.
Keywords: visual tracking; convolutional neural network; correlation filter; feature fusion
18. 3D Object Visual Tracking for the 220 kV/330 kV High-Voltage Live-Line Insulator Cleaning Robot
Authors: 张健, 杨汝清. Journal of Donghua University (English Edition), EI CAS, 2009, No. 3, pp. 264-269 (6 pages)
The 3D object visual tracking problem is studied for the robot vision system of the 220 kV/330 kV high-voltage live-line insulator cleaning robot. SUSAN Edge based Scale Invariant Feature (SESIF) based 3D object visual tracking is achieved in three stages: the first-frame stage, the tracking stage, and the recovering stage. An SESIF-based object recognition algorithm is proposed to find the initial location at both the first-frame stage and the recovering stage. An SESIF and Lie group based visual tracking algorithm is used to track the 3D object. Experiments verify the algorithm's robustness. This algorithm will be used in the second generation of the 220 kV/330 kV high-voltage live-line insulator cleaning robot.
Keywords: high-voltage live-line robotics; SUSAN Edge based Scale Invariant Feature (SESIF); object recognition; visual tracking; Lie group
19. Visual Tracking System for Welding Seams
Authors: 赵增顺, 王继贞, 程学珍. Journal of Measurement Science and Instrumentation, CAS, 2010, No. 3, pp. 242-246 (5 pages)
To track the narrow butt welding seams in container manufacturing, a visual tracking system based on a smart camera is proposed in this paper. A smart camera is used as the sensor to detect the welding seam. The feature extraction algorithm is designed with consideration of the characteristics of the smart camera and is used to compute the error between the welding torch and the welding seam. A visual control system based on the image is presented, which employs a programmable controller to drive a stepper motor to eliminate the tracking error detected by the smart camera. Experiments are conducted to demonstrate the effectiveness of the vision system.
Keywords: visual tracking; smart camera; welding seams; tracking error
20. 2D Part-Based Visual Tracking of Hydraulic Excavators
Authors: Bo Xiao, Ruiqi Chen, Zhenhua Zhu. World Journal of Engineering and Technology, 2016, No. 3, pp. 101-111 (11 pages)
Visual tracking has been widely applied in the construction industry and has attracted significant interest recently. Many research studies have adopted visual tracking techniques for the surveillance of the construction workforce, project productivity, and construction safety. Until now, visual tracking algorithms have achieved promising performance when tracking unarticulated equipment on construction sites. However, state-of-the-art tracking algorithms show no guaranteed performance in tracking articulated equipment, such as backhoes and excavators; the stretching buckets and booms are the main obstacles to successfully tracking articulated equipment. To fill this knowledge gap, part-based tracking algorithms are introduced in this paper for tracking articulated equipment on construction sites. Part-based tracking is able to track different parts of the target equipment while using multiple tracking algorithms on the same sequence. Some existing tracking methods were chosen according to their outstanding performance in the computer vision community. Then, the part-based algorithms were created on the basis of the selected visual tracking methods and tested on real construction sequences. In this way, the tracking performance was evaluated in terms of effectiveness and robustness. Quantitative analysis shows that the tracking performance on articulated equipment is much improved by using the part-based tracking algorithms.
Keywords: visual tracking; hydraulic excavators; construction safety; part-based tracking