Funding: supported by the National Natural Science Foundation of China (61100207), the National Key Technology Research and Development Program of the Ministry of Science and Technology of China (2014BAK14B03), and the Fundamental Research Funds for the Central Universities (2013PT13, 2013XZ12)
Abstract: Most sensors or cameras discussed in the sensor network community are treated as 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous. Meanwhile, the observed objects of camera networks are usually simplified as 2D points in the previous literature. In actual application scenes, however, not only are the cameras heterogeneous, with different heights and action radii, but the observed objects also have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency enhancement of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The sensor planning problem consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank visibility, which ensures that the object of interest is visible in a camera's field of view; and (iii) how to reconfigure the 3D viewing orientations of the cameras. This paper studies the geometric properties of 3D heterogeneous camera networks and proposes an evaluation formulation to rank the visibility of observed objects. A sensor planning method is then proposed to improve the efficiency of visual tracking. Finally, numerical results show that the proposed method improves the tracking performance of the system compared with conventional strategies.
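The visibility ranking described above depends on each camera's height, pan direction, field of view, and action radius. A minimal sketch of such a ranking follows, with purely illustrative parameters and function names; the paper's actual formulation is not reproduced here:

```python
import math

def visibility_score(cam, target, half_fov_deg=30.0):
    """Rank how well a camera sees a 3D target (illustrative only).

    cam: (x, y, height, pan_deg, radius) -- heterogeneous per camera
    target: (x, y, height)
    Returns 0.0 if the target is outside the action radius or the
    horizontal/vertical field of view; otherwise a score in (0, 1]
    that decays with distance, so nearer targets rank higher.
    """
    cx, cy, ch, pan_deg, radius = cam
    tx, ty, th = target
    dist = math.hypot(tx - cx, ty - cy)
    if dist == 0.0 or dist > radius:
        return 0.0
    # Horizontal check: bearing offset from the camera's pan direction.
    bearing = math.degrees(math.atan2(ty - cy, tx - cx))
    offset = abs((bearing - pan_deg + 180.0) % 360.0 - 180.0)
    if offset > half_fov_deg:
        return 0.0
    # Vertical check: the angle from the camera to the target's top must
    # fall within a (symmetric, illustrative) vertical half-FOV.
    elev = math.degrees(math.atan2(th - ch, dist))
    if abs(elev) > half_fov_deg:
        return 0.0
    return 1.0 - dist / radius
```

A camera at the origin, 3 m high, facing along +x with a 10 m action radius, scores a 1.7 m person 5 m away at 0.5, while a person behind it or beyond its radius scores 0.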
Funding: supported by the National Key Research and Development Program of China (No. 2017YFB1300102) and the National Natural Science Foundation of China (No. 61803025)
Abstract: Three-dimensional (3D) visual tracking of a multicopter (where the camera is fixed while the multicopter is moving) means continuously recovering the six-degree-of-freedom pose of the multicopter relative to the camera. It can be used in many applications, such as precision terminal guidance and control algorithm validation for multicopters. However, it is difficult for many researchers to build a 3D visual tracking system for multicopters (VTSM) using cheap, off-the-shelf cameras. This paper first gives an overview of the three key technologies of a 3D VTSM: multi-camera placement, multi-camera calibration, and pose estimation for multicopters. Then, some representative 3D visual tracking systems for multicopters are introduced. Finally, the future development of 3D VTSMs is analyzed and summarized.
Funding: supported by the Natural Science Foundation of China (No. 61773374) and the Key Research and Development Program of China (No. 2017YFB1300104)
Abstract: In this paper, we propose a method to select the observation position in visual servoing with an eye-in-vehicle configuration for a manipulator. In traditional visual servoing, the images taken by the camera may suffer from various problems, including the object being out of view, large perspective distortion, an improper projection area of the object in the image, and so on. We propose a method to determine the observation position that solves these problems. A mobile robot system with a pan-tilt camera is designed, which calculates the observation position based on an observation and then moves there. Both simulation and experimental results are provided to validate the effectiveness of the proposed method.
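One of the listed problems, an improper projection area of the object, reduces under a pinhole model to choosing a standoff distance. The following is a hedged illustration of that idea only, with generic symbols that are not taken from the paper:

```python
def observation_distance(focal_px, object_size_m, desired_px):
    """Pinhole relation s_px = f * S / Z, solved for the camera-to-object
    distance Z at which an object of physical size S projects to the
    desired image extent. An illustrative heuristic, not the paper's
    observation-position method."""
    return focal_px * object_size_m / desired_px
```

For example, with an 800 px focal length, a 0.5 m object spans 200 px when observed from 2 m.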
Funding: supported by the National Natural Science Foundation of China (60873032, 61105095, 61203361), the Doctoral Program Foundation of the Ministry of Education of China (20100142110020), the Specialized Research Fund for the Doctoral Program of Higher Education of China (20100073120020), the Postdoctoral Science Foundation of China (2012M511095), the Shanghai Municipal Natural Science Foundation (11ZR1418400), and the Shanghai Postdoctoral Science Foundation (12R21414200)
Funding: this work was supported by the National High Technology Research and Development Program of China under Grant 2002AA422160, and by the National Key Fundamental Research and Development Project of China (973) under Grant 2002CB312200.
Abstract: A real-time arc welding robot visual control system based on a local network with a multi-level hierarchy is developed in this paper. It consists of an intelligence and human-machine interface level, a motion planning level, a motion control level, and a servo control level. The last three levels form a local real-time open robot controller, which realizes motion planning and motion control of the robot. A camera calibration method based on the relative movement of the end-effector connected to the robot is proposed, and a method for tracking the weld seam based on structured-light stereovision is provided. Combining the parameters of the cameras and the laser plane, three groups of position values in Cartesian space are obtained for each feature point in a stripe projected on the weld seam. The accurate three-dimensional position of the edge points of the weld seam can be calculated from the obtained parameters with an information fusion algorithm. By calculating the weld seam parameters from position and image data, the movement parameters of the robot used for tracking can be determined. A swing welding experiment on a V-groove weld is successfully conducted; the results show that the system performs high-resolution seam tracking in real time and works stably and efficiently.
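The structured-light step, recovering a 3D point on the stripe from one camera plus the calibrated laser plane, amounts to intersecting a back-projected pixel ray with that plane. A sketch under a standard pinhole model follows; the intrinsics and plane coefficients are assumed known from calibration, and the names are illustrative rather than the paper's:

```python
def ray_plane_point(pixel, fx, fy, cx, cy, plane):
    """Back-project an image pixel onto the laser plane (camera frame).

    pixel: (u, v); pinhole intrinsics fx, fy, cx, cy;
    plane: (a, b, c, d) with a*X + b*Y + c*Z + d = 0 in the camera frame.
    The pixel defines the ray (X, Y, Z) = t * (x_n, y_n, 1); substituting
    into the plane equation gives t, hence the 3D point on the stripe.
    """
    u, v = pixel
    xn = (u - cx) / fx          # normalized image coordinates
    yn = (v - cy) / fy
    a, b, c, d = plane
    denom = a * xn + b * yn + c
    if abs(denom) < 1e-12:
        raise ValueError("ray parallel to laser plane")
    t = -d / denom
    return (t * xn, t * yn, t)
```

For instance, a pixel at the principal point with plane Z = 1.5 recovers the point (0, 0, 1.5).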
Funding: this work was supported by the National Science Foundation (No. 60474009), the Shu Guang Program (No. 05SG48), and the Scientific Program of the Shanghai Education Committee (No. 07zz90).
Abstract: This paper addresses the robust visual tracking of multiple feature points for a 3D manipulator with unknown intrinsic and extrinsic parameters of the vision system. This class of control systems is highly nonlinear, time-varying, and strongly coupled in its states and unknown parameters. It is first pointed out that not only is the image Jacobian matrix nonsingular, but its minimum singular value also has a positive lower bound. This provides the foundation for kinematic and dynamic control of manipulators with visual feedback. Second, the Euler-angle representation of the rotation transformation is employed to estimate a subspace of the parameter space of the vision system. Based on these two results, and with arbitrarily chosen parameters in this subspace, tracking controllers are proposed such that the image errors can be made as small as desired as long as the control gain is allowed to be large. The controller does not use visual velocity, achieving high and robust performance at the low sampling rate of the vision system. The results are proved by the Lyapunov direct method. Experiments are included to demonstrate the effectiveness of the proposed controller.
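The image Jacobian of a point feature, whose nonsingularity and minimum singular value the abstract refers to, has a well-known closed form. The sketch below uses one common sign convention (conventions differ across texts) and is generic, not the paper's derivation:

```python
def point_interaction_matrix(x, y, Z, f=1.0):
    """2x6 image Jacobian (interaction matrix) L of a point feature under
    one standard convention: s_dot = L * v_c, where v_c is the camera's
    spatial velocity (3 translational + 3 rotational components), (x, y)
    are image coordinates, Z is the point's depth, f the focal length."""
    return [
        [-f / Z, 0.0, x / Z, x * y / f, -(f + x * x / f), y],
        [0.0, -f / Z, y / Z, f + y * y / f, -x * y / f, -x],
    ]
```

Stacking these 2x6 blocks for several feature points gives the full Jacobian whose minimum singular value must stay bounded away from zero for the control results above.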
Abstract: First, the constitution of a traditional visual sensor is presented. The linear camera model is introduced, and the transform matrix between the image coordinate system and the world coordinate system is established. The basic principle of camera calibration is explained based on the linear camera model. On the basis of a detailed analysis of the camera model, a new-style visual sensor for measurement is proposed. It realizes real-time control of the camera lens zoom by a stepper motor according to the size of the object. Moreover, recalibration can be avoided, since the transform matrix can be obtained by calculation, which greatly simplifies the camera calibration process and saves time. Clearer images are obtained, so the precision of the measurement system can be greatly improved. The basic structure of the visual sensor's zoom lens is introduced, including the constitution and movement rules of the fixed front group, zoom group, compensating group, and fixed rear group, along with the realization of zoom control by the stepper motor. Finally, the constitution of the new-style visual sensor is introduced, including hardware and software. The hardware system is composed of a manual zoom lens, CCD camera, image-capture card, gearing, stepper motor, stepper motor driver, and computer. The software is described in terms of its modules and the workflow of the measurement system, presented as a structured block diagram.
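The linear camera model's transform from world to image coordinates can be sketched as p ~ K[R|t]P_w. A minimal illustration follows; the matrices are example values, not calibrated ones:

```python
def project_point(K, R, t, Pw):
    """Linear (pinhole) camera model: project a world point Pw to image
    coordinates via p ~ K [R|t] Pw.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: translation (3-vector),
    all as nested lists; Pw: world point (3-vector).
    """
    # World -> camera frame: Pc = R * Pw + t
    Pc = [sum(R[i][j] * Pw[j] for j in range(3)) + t[i] for i in range(3)]
    # Perspective division and intrinsics: u = fx * X/Z + cx, v = fy * Y/Z + cy
    u = K[0][0] * Pc[0] / Pc[2] + K[0][2]
    v = K[1][1] * Pc[1] / Pc[2] + K[1][2]
    return (u, v)
```

With identity extrinsics, a point on the optical axis projects to the principal point, as expected from the model.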
Abstract: In this paper, we propose an optimized real-time hybrid cooperative multi-camera tracking system for large-scale automated surveillance, based on embedded smart cameras, including stationary cameras and moving pan/tilt/zoom (PTZ) cameras, embedded with a TI DSP TMS320DM6446 for intelligent visual analysis. First, the overlapping areas and projection relations between adjacent cameras' fields of view (FOV) are calculated. Based on the obtained FOV relations and the tracking information of each single camera, a homography-based target handover procedure is performed for long-term multi-camera tracking. After that, we fully implemented the tracking system on the embedded platform developed by our group. Finally, to reduce the huge computational complexity, a novel hierarchical optimization method is proposed. Experimental results demonstrate robustness and real-time efficiency in dynamic real-world environments, with the computational burden significantly reduced by 98.84%. Our results demonstrate that the proposed system can track targets effectively and achieve large-scale surveillance with clear, detailed close-up capture and recording of visual features in dynamic real-life environments.
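The homography-based handover maps a ground-plane point seen in one camera into the adjacent camera's image via p' ~ H p. A minimal sketch follows; the H used here is illustrative, whereas in the system it would come from the computed FOV relations:

```python
def apply_homography(H, pt):
    """Map a ground-plane image point from camera A to camera B via a
    3x3 homography H (nested lists), using homogeneous coordinates:
    [x', y', w]^T = H [x, y, 1]^T, then divide by w.
    """
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

During handover, the tracked target's footprint in the leaving camera is mapped this way into the entering camera to initialize its tracker.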
Abstract: To reduce waterborne traffic accidents such as grounding or collisions with navigation buoys or bridge piers caused by ships deviating from the channel, this paper proposes a warning method for abnormally navigating ships in bridge areas based on automatic channel recognition with multiple cameras. Based on the YOLOv5 (You Only Look Once version 5) object detection algorithm, linked zoom and fixed-focus cameras detect and locate navigation buoys and ships, track and record ship trajectory points, compute ship speed and course, and dead-reckon ship positions. A density-clustering method over video-derived ship trajectory points is proposed to identify the buoys on both sides of the channel, enabling adaptive channel visualization. Ships in abnormal navigation states are identified and warned based on the dead-reckoned positions. Experimental results show detection accuracies of 84.8% for buoys and 90.3% for ships, improvements of 32.1% and 5.5% respectively over a single-camera detection model; the system can adaptively visualize the channel and identify and warn of abnormally navigating ships.
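The dead-reckoning step, advancing a ship's position from its estimated speed and course, can be sketched in a local planar frame. This is a simplification with illustrative names; the paper's geodetic details are not reproduced:

```python
import math

def dead_reckon(north_m, east_m, speed_mps, course_deg, dt_s):
    """Advance a ship's position (metres, local north/east frame) by dt
    seconds, given speed over ground and course measured in degrees
    clockwise from north. A planar simplification of dead reckoning."""
    rad = math.radians(course_deg)
    return (north_m + speed_mps * dt_s * math.cos(rad),  # northing
            east_m + speed_mps * dt_s * math.sin(rad))   # easting
```

A ship heading due east (course 90 degrees) at 5 m/s moves about 50 m east in 10 s; comparing such predicted positions against the clustered channel boundary is what triggers the deviation warning.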