Funding: the Natural Science Foundation of Jiangsu Province (No. BK2012389), the National Natural Science Foundation of China (Nos. 71303110, 91024024), and the Foundation of the Graduate Innovation Center in NUAA (Nos. kfjj201471, kfjj201473).
Abstract: To track humans across non-overlapping cameras at depression angles, for applications such as multi-airplane visual human tracking and urban multi-camera surveillance, an adaptive human tracking method is proposed, focusing on both feature representation and the tracking mechanism. The feature representation describes an individual using both improved local appearance descriptors and statistical geometric parameters; the improved descriptors can be extracted quickly and make the human features more discriminative. The adaptive tracking mechanism builds on this representation and arranges the human image blobs in the field of view into a matrix. Primary appearance models are created to capture the maximum inter-camera appearance information observed from different visual angles. Persons appearing in a camera's view are first filtered by the statistical geometric parameters, and the filtered person with the maximum matching score against the primary models is determined to be the target. The image blobs of the target person are then used to update and generate new primary appearance models for the next camera, making the method robust to changes in visual angle. Experimental results demonstrate the discriminative power of the feature representation and show the good generalization capability of the tracking mechanism as well as its robustness to varying conditions.
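The filter-then-match step described above can be sketched as follows. This is a minimal illustration, not the paper's method: the height-range geometric filter, the cosine-similarity matching score, and the `pick_target` name are all assumptions made for the example.

```python
import numpy as np

def pick_target(candidates, primary_models, height_range=(1.4, 2.1)):
    """Filter candidates by a geometric parameter (here, estimated height in
    meters), then pick the one whose appearance histogram best matches any
    of the primary appearance models (maximum cosine similarity)."""
    best_id, best_score = None, -1.0
    for pid, (height, hist) in candidates.items():
        if not (height_range[0] <= height <= height_range[1]):
            continue  # rejected by the geometric filter
        hist = np.asarray(hist, dtype=float)
        # matching score = best cosine similarity against any primary model
        score = max(
            float(np.dot(hist, m) /
                  (np.linalg.norm(hist) * np.linalg.norm(m) + 1e-12))
            for m in (np.asarray(m, dtype=float) for m in primary_models)
        )
        if score > best_score:
            best_id, best_score = pid, score
    return best_id, best_score
```

In this sketch, the winning candidate's blobs would then be folded back into `primary_models` before the next camera is processed.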
Abstract: This paper proposes a self-position estimation algorithm for multiple mobile robots, where each robot uses two omnidirectional cameras and an accelerometer. In recent years, large-scale disasters such as the Great East Japan Earthquake have occurred frequently in Japan, so the development of searching robots that support rescue teams in relief activities after a large-scale disaster is indispensable. This research has therefore developed a searching-robot group system with two or more mobile robots, each equipped with two omnidirectional cameras and an accelerometer. To perform distance measurement with the two omnidirectional cameras, the parameters of each camera and the relative position and posture between the two cameras must be calibrated in advance. With few mobile robots, the per-camera calibration time does not pose a problem; however, if calibration is performed separately for each of many robots at a disaster site, it takes a huge amount of time. This paper therefore proposes an algorithm that simultaneously estimates a mobile robot's position and the relative position and posture between its two omnidirectional cameras. The proposed algorithm extends the Nonlinear Transformation (NLT) method. Simulation experiments were conducted to check the validity of the proposed algorithm: one mobile robot moves around and observes another mobile robot that has stopped at a fixed place, and we verify whether the moving robot can estimate its position from the measurements after ten observations taken at intervals of π/18. The simulation results show the effectiveness of the algorithm.
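The position estimate at the core of such range-based schemes can be illustrated with a standard iterative linearization. This is a simplified stand-in, not the NLT extension itself: the Gauss-Newton update, the 2-D setting, and the `estimate_position` name are assumptions for the example.

```python
import numpy as np

def estimate_position(anchors, ranges, x0, iters=20):
    """Gauss-Newton refinement of a 2-D position from range measurements
    to known anchor positions. Each iteration linearizes the range model
    h(x) = ||x - a|| and solves the resulting least-squares step."""
    x = np.asarray(x0, dtype=float)
    A = np.asarray(anchors, dtype=float)
    z = np.asarray(ranges, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(A - x, axis=1)   # predicted ranges
        J = (x - A) / d[:, None]            # Jacobian of h at x
        r = z - d                           # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x
```

With three or more non-collinear anchors and noiseless ranges, this converges to the true position from a reasonable initial guess.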
Funding: the National Natural Science Foundation of China (No. 60972001) and the Science and Technology Plan of Suzhou City (No. SG201076).
Abstract: An adaptive human tracking method across spatially separated surveillance cameras with non-overlapping fields of view (FOVs) is proposed. The method relies on two cues: a human appearance model and spatio-temporal information between cameras. For the appearance model, an HSV color histogram is extracted from different human body parts (head, torso, and legs), and a weighted algorithm is then used to compute the similarity distance between two people. Finally, a similarity sorting algorithm with two thresholds is exploited to find the correspondence. The spatio-temporal information is established in a learning phase and is updated incrementally according to the latest correspondence. The experimental results show that the proposed tracking method is effective without requiring camera calibration, and that it becomes more accurate over time as new observations are accumulated.
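The part-based HSV model with a weighted similarity can be sketched as below. This is a minimal version under stated assumptions: the 0.2/0.6/0.2 vertical split, the hue-only 16-bin histograms, the part weights, and a Bhattacharyya-coefficient similarity are all choices made for the example, not details from the paper.

```python
import numpy as np

def part_histograms(hsv_img):
    """Split a person image into head / torso / legs bands and return a
    normalized hue histogram per part (hue channel assumed in [0, 180))."""
    h = hsv_img.shape[0]
    bands = [hsv_img[:int(0.2 * h)],
             hsv_img[int(0.2 * h):int(0.8 * h)],
             hsv_img[int(0.8 * h):]]
    hists = []
    for band in bands:
        hist, _ = np.histogram(band[..., 0], bins=16, range=(0, 180))
        hists.append(hist / max(hist.sum(), 1))  # normalize to sum 1
    return hists

def similarity(parts_a, parts_b, weights=(0.2, 0.5, 0.3)):
    """Weighted Bhattacharyya coefficient over the three body parts;
    1.0 means identical part histograms."""
    return sum(w * np.sum(np.sqrt(pa * pb))
               for w, pa, pb in zip(weights, parts_a, parts_b))
```

Two detections of the same person should score close to 1.0, and the similarity-sorting step would then threshold these scores to decide correspondences.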
Funding: supported by the Department of Science and Technology of Jilin Province (No. 20200401122GX).
Abstract: The performance of the decoding algorithm is one of the important factors determining the communication quality of an optical camera communication (OCC) system. In this paper, we first propose a decoding algorithm with adaptive thresholding based on the captured pixel values under an ideal environment, and then propose a decoding algorithm with multiple features, which is better suited to scenarios with interfering light sources. The multi-feature algorithm first determines the light-emitting diode (LED) array profile by removing interfering light sources through geometric features, then identifies the LED states by calculating two grayscale features, the average gray ratio (AGR) and the gradient radial inwardness (GRI) of the LEDs, and finally obtains the LED state matrix. Experimental results show that the bit error ratio (BER) of the multi-feature decoding algorithm decreases from 1×10^(-2) to 5×10^(-4) at 80 m.
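The adaptive-thresholding idea can be illustrated with a minimal sketch: instead of a fixed constant, the on/off threshold is derived from the captured pixel values themselves. This simplification covers only the thresholding step (the AGR and GRI features are omitted), and the one-step class-mean refinement is an assumption for the example.

```python
import numpy as np

def decode_led_states(gray_values):
    """Classify LED regions as on (1) or off (0) with a data-driven
    threshold: start from the overall mean, then refine once to the
    midpoint between the bright-class and dark-class means."""
    v = np.asarray(gray_values, dtype=float)
    t = v.mean()                                   # initial split
    t = (v[v >= t].mean() + v[v < t].mean()) / 2.0  # one refinement step
    return (v >= t).astype(int)
```

The resulting 0/1 vector would be reshaped into the LED state matrix according to the detected array profile.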
Abstract: This paper proposes an algorithm for simultaneous position estimation and calibration of omnidirectional camera parameters for a group of multiple mobile robots, aimed at developing an exploration and information-gathering robotic system for unknown environments. Here, each mobile robot cannot know its own position directly; it can only estimate it from measurements, corrupted by white noise, acquired by the two omnidirectional cameras mounted on it. Each robot obtains the distances to the robots observed in the images of its two omnidirectional cameras, performing calibration while moving rather than in advance. A simulation of three robots moving in straight lines shows the effectiveness of the proposed algorithm.
Abstract: This paper proposes cooperative position estimation for a group of mobile robots that performs disaster-relief tasks over a wide area. When searching a wide area, it is important for each robot to know its position correctly; however, no robot can know its own position exactly, so each robot estimates its position from the data of the sensors mounted on it. In general, sensor data are inaccurate because of sensor noise and similar effects. This research considers two types of error in the data from the omnidirectional cameras. One is white noise in the images captured by the omnidirectional cameras; the other is error in the position and posture between the two omnidirectional cameras. To address the latter, we previously proposed a self-position estimation algorithm for multiple mobile robots using two omnidirectional cameras and an accelerometer. To address the former, this paper proposes a cooperative position estimation algorithm for multiple mobile robots. In this algorithm, each robot uses its two omnidirectional cameras to observe the surrounding robots and obtain the relative positions between robots, and it estimates its own position using only these mutual measurements. The algorithm is based on Bayesian filtering. Simulations of the proposed cooperative position estimation algorithm are performed, and the results show that position estimation is possible using only measurements of the other robots.
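The flavor of such a Bayesian update can be shown with a minimal Kalman-style fusion step. This is a stand-in under strong simplifying assumptions (isotropic scalar covariances, each relative observation converted into an absolute position fix, the `fuse_relative_observations` name invented here), not the paper's filter.

```python
import numpy as np

def fuse_relative_observations(mu, P, observations, R):
    """Fuse a prior position estimate (mean mu, scalar variance P) with
    position fixes implied by other robots' relative observations, each
    with measurement variance R, via sequential Kalman updates."""
    mu = np.asarray(mu, dtype=float)
    for z in observations:
        z = np.asarray(z, dtype=float)
        K = P / (P + R)           # Kalman gain (scalar, isotropic case)
        mu = mu + K * (z - mu)    # pull the estimate toward the fix
        P = (1.0 - K) * P         # uncertainty shrinks with each fusion
    return mu, P
```

Note how the variance decreases with every fused observation: the more robots observe each other, the tighter each robot's position estimate becomes.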
Funding: the National Natural Science Foundation of China (No. 41801379).
Abstract: The RGB-D camera is a new type of sensor that can simultaneously obtain the depth and texture information of an unknown 3D scene, and such cameras have been widely applied in many fields. In practice, an RGB-D camera must be calibrated before being used in such applications. To the best of our knowledge, there is at present no systematic summary of RGB-D camera calibration methods; therefore, a systematic review of RGB-D camera calibration is presented here. First, the measurement mechanism and the principles underlying RGB-D camera calibration methods are presented. Subsequently, since some applications need to fuse depth and color information, calibration methods for the relative pose between the depth camera and the RGB camera are introduced in Section 2. The depth correction models for RGB-D cameras are then summarized and compared in Section 3. Third, considering that the field of view of an RGB-D camera is small and limits some applications, calibration models for the relative pose among multiple RGB-D cameras are discussed in Section 4. Finally, directions and trends in RGB-D camera calibration are discussed.
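Why the depth-to-RGB relative pose matters for fusion can be shown with the standard pinhole reprojection chain: back-project a depth pixel, transform it into the RGB camera frame, and project again. This is a textbook sketch, not a method from the review; the function name and the pose convention (R, t mapping depth-frame points into the RGB frame) are assumptions.

```python
import numpy as np

def depth_pixel_to_rgb(u, v, z, K_d, K_rgb, R, t):
    """Map a depth-camera pixel (u, v) with depth z into RGB image
    coordinates, given calibrated intrinsics K_d, K_rgb and the
    depth-to-RGB relative pose (R, t)."""
    p_d = z * np.linalg.inv(K_d) @ np.array([u, v, 1.0])  # back-project
    p_rgb = R @ p_d + t                                   # change frames
    uvw = K_rgb @ p_rgb                                   # project
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```

With an identity pose and identical intrinsics the pixel maps to itself, which is a useful sanity check after running any of the relative-pose calibration methods surveyed above.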