In various imaging applications such as autonomous vehicles and drones, autofocus lenses are indispensable for capturing clear images. However, conventional camera calibration methods typically rely either on processing multiple images at a fixed focal length or on detecting multi-plane markers in a single image and then applying multi-image calibration models. This paper proposes a flexible and accurate calibration approach that extracts subpixel saddle points from a single image containing three non-coplanar calibration boards. To compute accurate homography matrices for the three boards, outliers are removed by eliminating chessboard points that deviate from the grid lines fitted to their row and column positions. Initial estimates of the intrinsic parameters and the poses of the three planar chessboards are obtained from the three homography matrices using Zhang's calibration method. During parameter refinement, a multi-objective optimization function is constructed that incorporates three error terms: (1) the reprojection error of the inlier grid points; (2) a mechanism-driven error derived from the relationship between the homography matrices and the camera parameters; and (3) a cross-planar linearity constraint error, which preserves the pre-imaging collinearity of any five points across different planes after projection. For weight selection in the optimization, the confidence intervals of the detected grid points are analyzed by horizontally rotating the reprojection lines to reduce the bias introduced by line slope. The optimal weights are determined by minimizing the number of points whose confidence intervals do not intersect the reprojected lines. When multiple candidates yield similar reprojection performance, the parameter set with the smallest reprojection error is selected as the final result. The method efficiently estimates both intrinsic and extrinsic camera parameters. Simulations and real-world experiments validate its high precision and effectiveness. The technique is straightforward, practical, and of significant theoretical and practical value for rapid and reliable camera calibration.
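The intrinsic initialization step described above (plane-to-image homographies combined with Zhang's method) can be sketched numerically. This is a noise-free sketch of the standard closed-form step, assuming three homographies in general position; the function names are illustrative, not from the paper:

```python
# Zhang-style intrinsic initialization: each planar homography H = [h1 h2 h3]
# yields two linear constraints on the image of the absolute conic
# B = K^{-T} K^{-1}; the null vector of the stacked constraints gives B,
# from which K follows in closed form.
import numpy as np

def v_ij(H, i, j):
    """Constraint row built from columns i and j of homography H."""
    h_i, h_j = H[:, i], H[:, j]
    return np.array([
        h_i[0] * h_j[0],
        h_i[0] * h_j[1] + h_i[1] * h_j[0],
        h_i[1] * h_j[1],
        h_i[2] * h_j[0] + h_i[0] * h_j[2],
        h_i[2] * h_j[1] + h_i[1] * h_j[2],
        h_i[2] * h_j[2],
    ])

def intrinsics_from_homographies(Hs):
    """Recover K from >= 3 plane-to-image homographies (noise-free sketch)."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                  # h1^T B h2 = 0
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # h1^T B h1 = h2^T B h2
    _, _, Vt = np.linalg.svd(np.asarray(V))
    b11, b12, b22, b13, b23, b33 = Vt[-1]        # B up to scale
    # Closed-form extraction of the intrinsics from B.
    v0 = (b12 * b13 - b11 * b23) / (b11 * b22 - b12 ** 2)
    lam = b33 - (b13 ** 2 + v0 * (b12 * b13 - b11 * b23)) / b11
    alpha = np.sqrt(lam / b11)
    beta = np.sqrt(lam * b11 / (b11 * b22 - b12 ** 2))
    gamma = -b12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - b13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])
```

With noisy detections this closed-form estimate would only serve as the starting point for the nonlinear, multi-objective refinement the abstract describes.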
A novel algorithm for vehicle average velocity detection through automatic and dynamic camera calibration, based on the dark channel prior in homogeneous fog, is presented in this paper. A camera fixed in the middle of the road is calibrated once in homogeneous fog and can then be used in any weather condition. Unlike other work on velocity estimation, our traffic model includes only the road plane and the vehicles in motion. Painted lane markings in the scene image are ignored, since lanes are sometimes absent, especially in unstructured traffic scenes. Once the camera is calibrated, scene distances are available for computing average vehicle velocity. The algorithm comprises three major steps. First, the current video frame is classified to determine the weather condition using an area search method (ASM): in homogeneous fog, the average pixel value from top to bottom of the selected area varies in the form of an edge spread function (ESF). Second, the road surface plane is found from an activity map created by computing the expected value of the absolute intensity difference between two adjacent frames. Finally, the scene transmission image is obtained via the dark channel prior, and the camera's intrinsic and extrinsic parameters are computed from a calibration formula derived from the monocular model and the transmission image. In this step, several key points on the road surface with particular transmission values are selected to generate the necessary calibration equations. Vehicle pixel coordinates are then transformed into camera coordinates, the distance between each vehicle and the camera is computed, and the average velocity of each vehicle is obtained. Calibration results and velocity data for nine vehicles in different weather conditions are reported, and comparison with other algorithms verifies the effectiveness of our algorithm.
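The dark channel that the calibration step relies on reduces to a simple per-pixel operation: a minimum over the colour channels followed by a local minimum filter. A minimal numpy sketch (the patch size and the naive double loop are illustrative choices, not the paper's implementation):

```python
# Dark channel of an RGB image: per-pixel channel minimum, then a square
# minimum filter. In haze-free regions the dark channel tends toward zero;
# in fog it tracks the airlight, which is what the transmission estimate uses.
import numpy as np

def dark_channel(img, patch=15):
    """img: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
    min_rgb = img.min(axis=2)                  # per-pixel channel minimum
    h, w = min_rgb.shape
    r = patch // 2
    padded = np.pad(min_rgb, r, mode="edge")   # replicate borders
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```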
Camera calibration is a critical process in photogrammetry and a necessary step in acquiring 3D information from a 2D image. In this paper, a flexible approach to CCD camera calibration using the 2D direct linear transformation (DLT) and bundle adjustment is proposed. The approach assumes that the camera's interior orientation elements are known and derives a new closed-form solution in planar object space based on homogeneous coordinate representation and matrix factorization. The homogeneous coordinate representation offers a direct matrix correspondence between the parameters of the 2D DLT and the collinearity equation. The matrix factorization first recovers the elements of the rotation matrix and then solves for the camera position from the collinearity equation. High-precision calibration is then achieved by bundle adjustment using these initial values of the camera orientation elements. The results show that the calibration precision of the principal point and focal length is about 0.2 and 0.3 pixels respectively, which meets the requirements of high-accuracy close-range photogrammetry.
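The 2D DLT step mentioned above amounts to a linear estimate of the plane-to-image homography. A minimal sketch under the usual formulation (point normalization, which matters for noisy data, is omitted for brevity; names are illustrative):

```python
# 2D DLT: each correspondence (x, y) -> (u, v) gives two linear equations in
# the 9 homography entries; the SVD null vector of the stacked system is the
# least-squares solution up to scale.
import numpy as np

def dlt_homography(src, dst):
    """src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the arbitrary scale
```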
A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To handle the large distortion of off-the-shelf cameras, a new camera distortion rectification technique based on line rectification is proposed. A full camera distortion model is introduced, and a linear algorithm is provided to obtain its solution. After distortion rectification, the intrinsic and extrinsic parameters are obtained from the relationship between the homography and the absolute conic. The technique requires neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that the method is effective and robust.
To address the eccentricity error of circular marks in camera calibration, a circle location method based on the invariance of collinear points and a pole–polar constraint is proposed in this paper. First, the centers of the ellipses are extracted, and the projection equation of the true concentric circle center is established by exploiting the cross-ratio invariance of collinear points. Then, since the infinite lines passing through the centers of the marks are parallel, the remaining center projection coordinates are obtained by solving a system of linear equations. This addresses the projection deviation caused by taking the center of the ellipse as the projection of the true circle center, and the resulting points are used as the true image points to achieve high-precision camera calibration. As demonstrated by simulations and practical experiments, the proposed method achieves better location and calibration performance by recovering the actual center projections of the circular marks, confirming its precision and robustness.
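The method rests on the cross-ratio invariance of collinear points under projection. A quick numerical check of that property (the helper names and the test homography are illustrative, not from the paper):

```python
# The cross ratio of four collinear points is preserved by any homography
# that keeps the points finite and in order, which is the invariant the
# circle-location method exploits.
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC)/(AD/BD) of four collinear 2D points."""
    def dist(p, q):
        return np.linalg.norm(p - q)
    return (dist(a, c) / dist(b, c)) / (dist(a, d) / dist(b, d))

def apply_h(H, p):
    """Apply a 3x3 homography to a 2D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```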
Camera calibration is critical in a computer vision measurement system, as it affects the accuracy of the whole system. Many camera calibration methods have been proposed, but they cannot balance precision and operational complexity at the same time. In this paper, a new calibration technique is proposed. First, the global calibration method is described in detail: the camera observes a checkerboard pattern shown at a few different orientations, the checkerboard corners are detected with the Harris algorithm, and the global calibration parameters are obtained by direct linear transformation followed by nonlinear optimization. A sub-regional method is then proposed: the corners are divided into two groups, middle corners and edge corners, which are used to calibrate the corresponding image areas, yielding two sets of calibration parameters. Finally, experimental images are used to test the proposed method. Experimental results demonstrate that the average projection error of the sub-regional method is at least 16% lower than that of the global calibration method. The proposed technique is simple and accurate, and is well suited to industrial computer vision measurement.
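The comparison above is stated in terms of average projection error. A minimal sketch of that metric for a pinhole model, which could be evaluated separately on the middle and edge corner groups (illustrative helper names, not the paper's code):

```python
# Project known 3D points through a pinhole model K[R|t] and report the mean
# Euclidean distance to the detected corner positions.
import numpy as np

def project(K, R, t, pts3d):
    """pts3d: (N, 3) world points -> (N, 2) pixel coordinates."""
    cam = pts3d @ R.T + t            # world -> camera frame
    uvw = cam @ K.T                  # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

def mean_reprojection_error(K, R, t, pts3d, detected):
    """Average distance between projected and detected points, in pixels."""
    err = np.linalg.norm(project(K, R, t, pts3d) - detected, axis=1)
    return float(err.mean())
```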
In this paper, we use 1D rotating objects to calibrate a camera. The calibration object has three collinear points. Unlike previous work, the object need not rotate around one of its endpoints; instead, it rotates around its middle point in a plane. In this case, two calibration constraints can be used to compute the intrinsic parameters of the camera. Moreover, when the 1D object moves randomly in a plane, the proposed technique remains valid for computing the intrinsic parameters. Experiments with simulated data as well as real images show that our technique is accurate and robust.
In this paper, we introduce a novel class of coplanar conics, pencils of which have double contact, to calibrate a camera and estimate pose. We first analyze the properties of con-axes and con-eccentricity ellipses, which constitute a natural extension of the concentric-circle pattern. We then present the general case in which two ellipses have two repeated complex intersection points. This degenerate configuration yields a one-parameter family of homographies mapping the planar pattern to its image. Although the complete homography cannot be computed, an indirect 3-degree or 5-degree polynomial constraint on the intrinsic parameters from one image can still be used for camera calibration and pose estimation under minimal conditions. Furthermore, this nonlinear problem can be treated as a polynomial optimization problem (POP), and a globally optimal solution can be obtained using SparsePOP (a sparse semidefinite programming relaxation of POPs). Finally, experiments with simulated data and real images verify the correctness and robustness of the proposed technique.
In visual measurement, high-precision camera calibration often employs circular targets. To address issues in mainstream methods, namely the eccentricity error from using the circle's center for calibration, overfitting or local minima from full-parameter optimization, and calibration errors due to neglecting the center of distortion, a stepwise camera calibration method incorporating compensation for the eccentricity error is proposed to enhance monocular calibration precision. First, a multi-image distortion correction method computes the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency over single-image correction methods. Next, the projection of the circle's center is compared with the center of the projected contour to iteratively correct the eccentricity error, leading to more precise and stable calibration. Finally, nonlinear optimization refines the calibration parameters to minimize the reprojection error and further boost precision. Together these steps form a stepwise calibration procedure with enhanced robustness. A module comparison experiment showed that both the eccentricity error compensation and the camera parameter optimization improve calibration precision, with the latter having the greater impact; their combined use improves precision and stability further. Simulations and experiments confirm that the proposed method achieves the high precision, stability, and robustness required for high-precision visual measurement.
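The eccentricity error discussed above can be reproduced numerically: for a tilted circle, the centre of the projected ellipse differs from the projection of the circle's centre. A sketch using a plain least-squares conic fit (all names, the tilt angle, and the camera values are illustrative):

```python
# Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to projected contour
# points and return its centre (where the conic gradient vanishes). Points
# are normalized before the fit for numerical stability.
import numpy as np

def project_plane_points(K, R, t, pts3d):
    """Pinhole projection of world points (N, 3) to pixels (N, 2)."""
    cam = pts3d @ R.T + t
    uvw = cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def conic_center(pts):
    """Centre of the least-squares conic through the given 2D points."""
    mu = pts.mean(axis=0)
    s = pts.std()
    q = (pts - mu) / s                       # normalize for conditioning
    x, y = q[:, 0], q[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e, _ = Vt[-1]
    ctr = np.linalg.solve(np.array([[2 * a, b], [b, 2 * c]]), -np.array([d, e]))
    return ctr * s + mu                      # undo the normalization
```

For a fronto-parallel circle the two centres coincide; once the plane tilts, the fitted-ellipse centre drifts away from the projected circle centre, which is exactly the bias the stepwise method compensates.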
The RGB-D camera is a new type of sensor that can simultaneously acquire depth and texture information of an unknown 3D scene, and such cameras have been widely applied in various fields. In practice, an RGB-D camera must be calibrated before it is used in these applications. To the best of our knowledge, no systematic summary of RGB-D camera calibration methods currently exists, so a systematic review is given here. First, the measurement mechanism and the underlying principles of RGB-D camera calibration methods are presented. Next, since some applications need to fuse depth and color information, calibration of the relative pose between the depth camera and the RGB camera is introduced in Section 2. The depth correction models within RGB-D cameras are then summarized and compared in Section 3. Third, considering that the field of view of an RGB-D camera is small and limits some applications, the calibration models of the relative poses among multiple RGB-D cameras are discussed in Section 4. Finally, directions and trends in RGB-D camera calibration are discussed and conclusions are drawn.
Instead of the traditional 3D physical model carrying many control points, a calibration plate with a printed chess grid, movable along its normal direction, is used to provide a large area of 3D control points with variable Z values. Experiments show that the presented approach is effective for reconstructing 3D color objects in a computer vision system.
A flexible camera calibration technique using the 2D DLT and bundle adjustment with planar scenes is proposed. The equation of the principal line in the image coordinate system, expressed with 2D-DLT parameters, is derived from the correspondence between the collinearity equations and the 2D DLT. A novel algorithm for obtaining the initial value of the principal point is put forward, and a proof of the critical motion sequences for calibration is given in detail. A practical algorithm for decomposing the exterior parameters from the initial values of the principal point, the focal length, and the 2D-DLT parameters is discussed, and a planar-scene camera calibration algorithm with bundle adjustment is presented. Very good results have been obtained with both computer simulations and real-data calibration. The calibration results are suitable for high-precision applications such as reverse engineering and industrial inspection.
This paper addresses the problem of calibrating a pinhole camera from images of a profile of revolution. The symmetry of images of profiles of revolution is extensively exploited, and a practical and accurate technique for camera calibration from profiles alone is developed. Traditional calibration techniques may involve imaging a precisely machined calibration pattern (such as a calibration grid), detecting edges to determine vanishing points that often lie far from the image center or do not physically exist, or computing the fundamental matrix and Kruppa equations, which can be numerically unstable. In contrast, the method presented here uses only profiles of revolution, which are commonly found in daily life (e.g., bowls and vases), making the process easier through the reduced cost and increased accessibility of the calibration objects. The paper first analyzes the relationship between the symmetry of a profile of revolution and the intrinsic parameters of the camera, and then shows how images of profiles of revolution provide enough information to determine the intrinsic parameters. A high-accuracy profile extraction algorithm is used throughout. Finally, results from real data are presented, demonstrating the efficiency and accuracy of the proposed methods.
The ability to model the imaging process is crucial to vision measurement. A non-parametric imaging model describes the imaging process as a pixel cluster in which each pixel is associated with a spatial ray originating from an object point. However, a non-parametric model requires either a sophisticated calculation process or high-cost devices to obtain its massive number of parameters, and these disadvantages limit the application of such camera models. We therefore propose a novel camera model calibration method based on a single-axis rotational target. The rotating target provides 3D control points without requiring detailed pose information of the target. A radial basis function (RBF) network is introduced to map 3D coordinates to 2D image coordinates. We then derive the optimization formulation of the imaging model parameters and compute the parameters from the given control points. The model is also extended to the stereo cameras widely used in vision measurement. Experiments evaluating the proposed calibration method show that it is more accurate and effective than traditional methods.
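The RBF mapping described above can be sketched with Gaussian basis functions centred on the 3D control points and linear weights fitted to the 2D image points. The kernel width and the small regularization term are illustrative choices, not the paper's network:

```python
# Gaussian RBF interpolation from 3D control points to 2D image coordinates:
# one basis function per control point, weights solved linearly.
import numpy as np

def rbf_design(pts, centers, sigma):
    """Gaussian RBF design matrix between two 3D point sets."""
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(centers, targets, sigma):
    """Weights mapping each 3D control point to its 2D image point."""
    Phi = rbf_design(centers, centers, sigma)
    # Tiny ridge term stabilizes the solve for near-singular kernels.
    return np.linalg.solve(Phi + 1e-10 * np.eye(len(centers)), targets)

def eval_rbf(centers, weights, sigma, pts):
    """Evaluate the fitted 3D -> 2D mapping at new 3D points."""
    return rbf_design(pts, centers, sigma) @ weights
```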
LiDAR and cameras are two of the most common sensors in robot perception, autonomous driving, augmented reality, and virtual reality, where they are widely used for tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of such perception tasks, and extrinsic calibration between cameras and LiDAR is a fundamental prerequisite for these multimodal systems. Although extensive studies on extrinsic calibration have been conducted and several methods facilitate sensor fusion, a comprehensive summary for researchers, and especially for non-expert users, is lacking. We therefore present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the source of calibration information, these methods are classified as target-based or targetless; within each type, methods are further classified according to the features or constraints used in the calibration process, and their detailed implementations and key characteristics are introduced. Calibration-accuracy evaluation methods are then presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
Many recent applications in computer graphics and human-computer interaction adopt both colour cameras and depth cameras as input devices, so an effective calibration of both types of hardware from their different colour and depth inputs is required. Our approach removes the numerical difficulties of the non-linear optimization used in previous methods, which explicitly resolve the camera intrinsics as well as the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize the optimization. The hybrid parameters describe the transformation from the depth parametric space (the depth camera image) to the colour parametric space (the colour camera image) by combining the intrinsic parameters of the depth camera with the rotation from the depth camera to the colour camera. Both the rotation and the intrinsic parameters can be explicitly recovered from the hybrid parameters with a standard QR factorisation. We test our algorithm on both synthesized data and real-world data, with ground-truth depth captured by a Microsoft Kinect. The experiments show that our approach provides calibration accuracy comparable to state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), owing to the use of hybrid parameters.
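The factorisation step mentioned above, splitting an upper-triangular intrinsic factor from a rotation, is in practice an RQ decomposition. A sketch built on numpy's QR (a generic linear-algebra routine, not the paper's implementation):

```python
# RQ decomposition of a 3x3 matrix M = K @ R with K upper triangular
# (intrinsic-like, positive diagonal) and R orthogonal, via QR on a
# row/column-reversed copy of M.
import numpy as np

def rq3(M):
    """Return (K, R) with M = K @ R, K upper triangular, diag(K) > 0."""
    P = np.fliplr(np.eye(3))            # reversal permutation, P @ P = I
    Q_, R_ = np.linalg.qr((P @ M).T)    # QR of the flipped, transposed matrix
    K = P @ R_.T @ P                    # lower triangular flipped -> upper
    R = P @ Q_.T                        # orthogonal factor
    D = np.diag(np.sign(np.diag(K)))    # force a positive diagonal on K
    return K @ D, D @ R                 # D @ D = I keeps the product M
```

Since an upper-triangular factor with positive diagonal is unique, the recovered pair matches the true intrinsics and rotation exactly on noise-free input.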
The basic idea of previous approaches to calibrating a camera system is to determine the camera parameters using a set of known 3D points as the calibration reference. In this paper, we present a camera calibration method in which the camera parameters are determined by a set of 3D lines. A set of constraints on the camera parameters is derived from perspective line mapping, and from these constraints the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Lin, Huang, and Faugeras for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and more accurately than points, using lines as the calibration reference eases the computation in image preprocessing and improves calibration accuracy. Experimental results on calibration and stereo reconstruction are reported.
The focal plane of the collimator used for the geometric calibration of an optical camera is a key element in the calibration process. The traditional collimator focal plane has only a single-aperture light lead-in, resulting in relatively unreliable calibration accuracy. Here we demonstrate a multi-aperture micro-electro-mechanical system (MEMS) light lead-in device located at the optical focal plane of the collimator used to calibrate geometric distortion in cameras. Without additional volume or power consumption, the random errors of the calibration system are decreased by the multi-image matrix. With this new construction and a method for implementing the system, reliable high-accuracy calibration of optical cameras is guaranteed.
Inadequate geometric accuracy of cameras is the main constraint on improving the precision of infrared horizon sensors with a large field of view (FOV). An enormous FOV with a blind area in the center greatly limits the accuracy and feasibility of traditional geometric calibration methods. A novel camera calibration method for infrared horizon sensors is presented and validated in this paper. Three infrared targets are used as control points, and the camera is mounted on a rotary table; as the table rotates, the control points become evenly distributed across the entire FOV. Compared with traditional methods that combine a collimator and a rotary table, which cannot effectively cover a large FOV and require demanding experimental equipment, this method is easier to implement at low cost. A corresponding three-step parameter estimation algorithm is proposed that avoids precisely measuring the positions of the camera and the control points. Experiments with 10 infrared horizon sensors verify the effectiveness of the calibration method. The results show that the proposed method is highly stable and that its calibration accuracy is at least 30% higher than those of existing methods.
This paper proposes a novel self-calibration method for a large-FoV (field-of-view) camera using real star images. First, based on the classic equisolid-angle projection model and a polynomial distortion model, the inclination of the optical axis with respect to the image plane is fully taken into account, and a rigorous imaging model with 8 unknown intrinsic parameters is built. Second, the basic calibration equation based on star vector observations is presented. Third, the partial derivatives of all 11 camera parameters needed to linearize the calibration equation are derived in detail, and an iterative least-squares solution is given. A simulation experiment shows that the new model outperforms the old one. Finally, three experiments were conducted at night in central China and 671 valid star images were collected. The results indicate that the new method attains a mean reprojection error of 0.251 pixels at a 120° FoV, improving the calibration accuracy by 38.6% compared with the old model (which does not consider the inclination of the optical axis). When the FoV drops below 20°, the mean reprojection error decreases to 0.15 pixels for both models. Since stars are used instead of manual control points, the new method achieves self-calibration, which can be significant for the long-duration navigation of vehicles in unfamiliar or extreme environments, such as Mars or the Earth's moon.
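The equisolid-angle projection model named above maps the angle theta between a star direction and the optical axis to the radial image distance r = 2 f sin(theta / 2). A minimal sketch of the ideal model only, with no distortion terms and no optical-axis inclination, i.e. precisely the simplifications the paper goes beyond (names and values are illustrative):

```python
# Ideal equisolid-angle (fisheye) projection of a unit direction vector in
# the camera frame (+z along the optical axis) to image coordinates.
import numpy as np

def equisolid_project(f, direction, cx=0.0, cy=0.0):
    """f: focal length in pixels; direction: unit 3-vector; returns (u, v)."""
    x, y, z = direction
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # angle off the optical axis
    r = 2.0 * f * np.sin(theta / 2.0)          # equisolid-angle radial law
    phi = np.arctan2(y, x)                     # azimuth in the image plane
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```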
Funding: supported by the Research on the Reform of Curriculum Assessment Methods for College Mathematics Platform Courses (No. 53111104016).
Funding: supported by the National High Technology Research and Development Program of China (863 Program) (No. 2011AA110301), the National Natural Science Foundation of China (No. 61079001), and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20111103110017).
Fund: Project 2005A030, supported by the Youth Science and Research Foundation of China University of Mining & Technology
Abstract: Camera calibration is a critical process in photogrammetry and a necessary step in acquiring 3D information from a 2D image. This paper proposes a flexible approach to CCD camera calibration using the 2D direct linear transformation (DLT) and bundle adjustment. The approach assumes that the camera's interior orientation elements are known, and derives a new closed-form solution in planar object space based on homogeneous coordinate representation and matrix factorization. The homogeneous coordinate representation offers a direct matrix correspondence between the 2D DLT parameters and the collinearity equations. The matrix factorization first recovers the elements of the rotation matrix and then solves for the camera position using the collinearity equations. High-precision calibration is achieved by bundle adjustment, initialized with the recovered camera orientation elements. The results show that the calibration precision of the principal point and focal length is about 0.2 and 0.3 pixels respectively, which meets the requirements of high-accuracy close-range photogrammetry.
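The closed-form step described here (recover the rotation first, then the camera position, from a planar homography when the interior orientation is known) can be sketched with the standard Zhang-style factorization. This is a generic illustration, not the paper's implementation; names are illustrative.

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t from a plane-to-image
    homography H, given known intrinsics K.
    H ~ K [r1 r2 t], so K^-1 H yields the first two rotation
    columns and the translation up to a common scale."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # scale fixed by unit rotation columns
    r1 = lam * A[:, 0]                    # (sign assumes the plane is in
    r2 = lam * A[:, 1]                    #  front of the camera)
    r3 = np.cross(r1, r2)
    t = lam * A[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Project R onto SO(3) via SVD to enforce orthonormality in the
    # presence of noise.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

In a full pipeline the result would seed the bundle adjustment the abstract describes.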
Abstract: A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To handle the large distortion of off-the-shelf cameras, a new distortion rectification technique based on line rectification is proposed. A full camera distortion model is introduced, and a linear algorithm is provided to obtain the solution. After distortion rectification, the intrinsic and extrinsic parameters are obtained from the relationship between the homography and the absolute conic. The technique requires neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that the method is effective and robust.
Fund: Supported by the Aerospace Science and Technology Joint Fund (6141B061505) and the National Natural Science Foundation of China (61473100)
Abstract: To address the eccentricity error of circular marks in camera calibration, a circle location method based on the invariance of collinear points and the pole–polar constraint is proposed in this paper. First, the centers of the imaged ellipses are extracted, and the projection equation of the true concentric circle center is established by exploiting the cross-ratio invariance of collinear points. Then, since the lines passing through the centers of the marks are parallel, the remaining center projection coordinates are obtained by solving a system of linear equations. This resolves the projection deviation caused by using the ellipse center as the projection of the true circle center, and the corrected points are used as the true image points to achieve high-precision camera calibration. As demonstrated by simulations and practical experiments, the proposed method better localizes the actual center projection of circular marks and yields better calibration performance, confirming its precision and robustness.
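The cross-ratio invariance of collinear points that this method exploits can be checked numerically. The sketch below is a generic 1D illustration (positions along a line under an arbitrary projective map), not the paper's construction:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross ratio (CA/CB)/(DA/DB) of four collinear points given as
    scalar positions along a line; invariant under projective maps."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(x, h):
    """Apply a 1D projective transform x -> (h00*x + h01)/(h10*x + h11)."""
    return (h[0, 0] * x + h[0, 1]) / (h[1, 0] * x + h[1, 1])

# Four collinear points and an arbitrary 1D homography.
pts = np.array([0.0, 1.0, 2.5, 4.0])
h = np.array([[2.0, 1.0], [0.3, 1.0]])
before = cross_ratio(*pts)
after = cross_ratio(*project(pts, h))
# before and after agree up to floating-point error, which is what
# lets the method transfer the true center position through projection.
```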
Fund: Supported by the Tianjin Research Program of Application Foundation and Advanced Technology (Nos. 14JCYBJC18600, 14JCZDJC39700) and the National Key Scientific Instrument and Equipment Development Project (No. 2013YQ17053903)
Abstract: Camera calibration is critical in computer vision measurement systems, as it affects the accuracy of the whole system. Many calibration methods have been proposed, but they cannot balance precision and operational complexity at the same time. In this paper, a new calibration technique is proposed. First, the global calibration method is described in detail: the camera observes a checkerboard pattern shown at a few different orientations, the checkerboard corners are extracted with the Harris detector, and the global calibration parameters are obtained by direct linear transformation followed by non-linear optimization. Then a sub-regional method is proposed: the corners are divided into two groups, middle corners and edge corners, which are used to calibrate the corresponding image areas, yielding two sets of calibration parameters. Finally, experimental images are used to test the proposed method. The results demonstrate that the average projection error of the sub-regional method is at least 16% lower than that of the global calibration method. The proposed technique is simple and accurate, and is suitable for industrial computer vision measurement.
Abstract: In this paper, we use 1D rotating objects to calibrate a camera. The calibration object has three collinear points. Unlike previous work, the object need not rotate around one of its endpoints; instead, it rotates around its middle point in a plane. In this case, two calibration constraints can be used to compute the intrinsic parameters of the camera. Moreover, the proposed technique remains valid when the 1D object moves randomly in a plane. Experiments with both simulated data and real images show that our technique is accurate and robust.
Fund: Supported by the National Basic Research Program (973) of China (No. 2011CB302203) and the National Natural Science Foundation of China (No. 60833009)
Abstract: In this paper, we introduce a novel class of coplanar conics, the pencil of which can be in double contact, for camera calibration and pose estimation. We first analyze the properties of con-axes and con-eccentricity ellipses, which form a natural extension of the concentric-circle pattern. We then present the general case in which two ellipses have two repeated complex intersection points. This degenerate configuration yields a one-parameter family of homographies mapping the planar pattern to its image. Although the complete homography cannot be computed, an indirect 3-degree or 5-degree polynomial constraint on the intrinsic parameters from a single image can still be used for camera calibration and pose estimation under minimal conditions. Furthermore, this nonlinear problem can be treated as a polynomial optimization problem (POP), and a globally optimal solution can be obtained with SparsePOP (a sparse semidefinite programming relaxation of POPs). Finally, experiments with simulated data and real images verify the correctness and robustness of the proposed technique.
Abstract: In visual measurement, high-precision camera calibration often employs circular targets. To address the issues of mainstream methods, such as the eccentricity error introduced by using the circle's center for calibration, overfitting or local minima in full-parameter optimization, and calibration errors from neglecting the center of distortion, a stepwise camera calibration method incorporating eccentricity-error compensation is proposed to enhance monocular calibration precision. First, a multi-image distortion correction method computes the common center of distortion and the distortion coefficients, improving precision, stability, and efficiency over single-image correction methods. Next, the projection of the circle's center is compared with the center of the projected contour to iteratively correct the eccentricity error, yielding more precise and stable calibration. Finally, nonlinear optimization refines the calibration parameters to minimize the reprojection error and further improve precision. Together these stages form a stepwise calibration pipeline with enhanced robustness. A module-comparison experiment showed that both the eccentricity-error compensation and the parameter optimization improve calibration precision, with the latter having the greater impact; combining the two improves precision and stability further. Simulations and experiments confirm that the proposed method achieves the high precision, stability, and robustness required for high-precision visual measurement.
Fund: Supported by the National Natural Science Foundation of China (41801379)
Abstract: The RGB-D camera is a new type of sensor that simultaneously captures depth and texture information of an unknown 3D scene, and it has been widely applied in various fields. In practice, such applications require the RGB-D camera to be calibrated first. To the best of our knowledge, no systematic summary of RGB-D camera calibration methods currently exists; this paper therefore provides a systematic review, organized as follows. First, the measurement mechanism and the principles underlying RGB-D camera calibration methods are presented. Because some applications need to fuse depth and color information, methods for calibrating the relative pose between the depth camera and the RGB camera are introduced in Section 2. The depth correction models used within RGB-D cameras are then summarized and compared in Section 3. Next, since the field of view of an RGB-D camera is small and limits some applications, calibration models for the relative poses among multiple RGB-D cameras are discussed in Section 4. Finally, directions and trends in RGB-D camera calibration are discussed.
Fund: Supported by the Natural Science Foundation of China (69775022) and the State High-Technology Development Program of China (863 306ZT04 06 3)
Abstract: Instead of the traditional 3D physical model carrying many control points, a calibration plate printed with a chess grid and movable along its normal direction is used to provide a large area of 3D control points with variable Z values. Experiments show that the presented approach is effective for reconstructing 3D color objects in a computer vision system.
Fund: Supported by the Research Foundation of Doctoral Position Speciality of Universities (20010486011)
Abstract: A flexible camera calibration technique using 2D-DLT and bundle adjustment with planar scenes is proposed. The equation of the principal line in the image coordinate system, expressed with the 2D-DLT parameters, is derived from the correspondence between the collinearity equations and the 2D-DLT. A novel algorithm for obtaining an initial value of the principal point is put forward, and a proof of the critical motion sequences for calibration is given in detail. A practical algorithm for decomposing the exterior parameters from the initial values of the principal point, the focal length, and the 2D-DLT parameters is discussed, and a planar-scene calibration algorithm with bundle adjustment is presented. Very good results have been obtained with both computer simulations and real-data calibration. The calibration results can be used in high-precision applications such as reverse engineering and industrial inspection.
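The 2D-DLT at the heart of this approach estimates a plane-to-image homography linearly from point correspondences. A minimal sketch of the standard DLT (generic, with illustrative names, not the paper's code):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (3x3, dst ~ H @ src in homogeneous coordinates)
    from n >= 4 point pairs via the 2D direct linear transformation:
    each pair contributes two rows of a homogeneous system A h = 0,
    solved as the smallest right singular vector of A."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]      # fix the arbitrary scale
```

In practice the points would be normalized (Hartley-style) before building the system; that step is omitted here for brevity.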
Abstract: This paper focuses on the problem of calibrating a pinhole camera from images of the profile of a surface of revolution. The symmetry of such profile images is extensively exploited, and a practical, accurate calibration technique using profiles alone is developed. Traditional calibration techniques may require imaging a precisely machined calibration pattern (such as a calibration grid), detecting edges to determine vanishing points that are often far from the image center or do not physically exist, or computing the fundamental matrix and the Kruppa equations, which can be numerically unstable. The method presented here instead uses only profiles of surfaces of revolution, which are commonly found in daily life (e.g., bowls and vases), making the process easier through the reduced cost and increased accessibility of the calibration objects. The paper first analyzes the relationship between the symmetry of the profile and the intrinsic parameters of the camera, and then shows how images of the profile provide enough information to determine the intrinsic parameters. A high-accuracy profile extraction algorithm is also employed. Finally, results on real data are presented, demonstrating the efficiency and accuracy of the proposed methods.
Fund: Supported by the Science and Technology on Electro-Optic Control Laboratory and the Fund of Aeronautical Science (No. 201951048001)
Abstract: The ability to model the imaging process is crucial to vision measurement. A non-parametric imaging model describes the imaging process as a pixel cluster in which each pixel is associated with a spatial ray originating from an object point. However, a non-parametric model requires a sophisticated calculation process or high-cost devices to obtain its massive number of parameters, which limits its application. We therefore propose a novel camera model calibration method based on a single-axis rotational target. The rotational vision target offers 3D control points without requiring detailed pose information for the target. A radial basis function (RBF) network is introduced to map 3D coordinates to 2D image coordinates; we then derive the optimization formulation of the imaging model parameters and compute them from the given control points. The model is also extended to the stereo cameras widely used in vision measurement. Experiments evaluating the proposed calibration method show that it is more accurate and effective than traditional methods.
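The RBF mapping from 3D control points to 2D image coordinates can be sketched with Gaussian kernels fitted by linear least squares. The paper does not specify its network details, so the kernel choice, `sigma`, and function names below are illustrative assumptions:

```python
import numpy as np

def fit_rbf(centers, values, sigma=1.0):
    """Fit Gaussian-RBF weights mapping 3D points to 2D image coords:
    f(p) = sum_i w_i * exp(-||p - c_i||^2 / (2 sigma^2)),
    with one weight row per center, solved by least squares."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))          # n x n kernel matrix
    W, *_ = np.linalg.lstsq(Phi, values, rcond=None)
    return W

def eval_rbf(W, centers, pts, sigma=1.0):
    """Evaluate the fitted RBF map at new 3D points."""
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ W
```

With one center per control point this interpolates the control points exactly; a real calibration would regularize and choose `sigma` by validation.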
Fund: Supported by the Beijing Natural Science Foundation (Grant No. L241012) and the National Natural Science Foundation of China (Grant No. 62572468)
Abstract: LiDAR and cameras are two of the most common sensors in robot perception, autonomous driving, augmented reality, and virtual reality, where they are widely used for tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of such perception tasks, and extrinsic calibration between camera and LiDAR is a fundamental prerequisite for multimodal systems. Although extensive studies on extrinsic calibration have been conducted and several calibration methods facilitate sensor fusion, a comprehensive summary for researchers, and especially for non-expert users, has been lacking. We therefore present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the source of calibration information, this study classifies methods as target-based or targetless; each type is further classified according to the features or constraints used in the calibration process, and their detailed implementations and key characteristics are introduced. Calibration-accuracy evaluation methods are then presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
Abstract: Many recent applications in computer graphics and human-computer interaction use both colour cameras and depth cameras as input devices, so an effective joint calibration of the two types of hardware is required. Our approach removes the numerical difficulty of previous methods, which rely on non-linear optimization to explicitly resolve the camera intrinsics and the transformation between the depth and colour cameras. A matrix of hybrid parameters is introduced to linearize the optimization. The hybrid parameters encode a transformation from the depth parametric space (depth camera image) to the colour parametric space (colour camera image) by combining the intrinsic parameters of the depth camera with the rotation from the depth camera to the colour camera. Both the rotation and the intrinsic parameters can be explicitly recovered from the hybrid parameters with the help of a standard QR factorisation. We test our algorithm on both synthesized data and real-world data whose ground-truth depth is captured by a Microsoft Kinect. The experiments show that our approach matches the calibration accuracy of state-of-the-art algorithms while taking much less computation time (1/50 of Herrera's method and 1/10 of Raposo's method), owing to the hybrid parameterization.
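The recovery step the abstract mentions, splitting a matrix that combines intrinsics and rotation into an upper-triangular factor and an orthonormal factor, is the classic RQ decomposition built from a QR factorization. A generic numpy sketch (not the paper's code; the 3x3 case is assumed):

```python
import numpy as np

def rq(M):
    """RQ decomposition M = K @ R with K upper-triangular (positive
    diagonal) and R orthonormal, built from numpy's QR factorization
    via the row/column reversal trick."""
    P = np.flipud(np.eye(3))              # reversal (exchange) matrix
    Q, U = np.linalg.qr((P @ M).T)        # QR of the reversed, transposed M
    K = P @ U.T @ P                       # upper-triangular factor
    R = P @ Q.T                           # orthonormal factor
    S = np.diag(np.sign(np.diag(K)))      # force a positive diagonal on K
    return K @ S, S @ R
```

When `M` is a product of a calibration-style matrix and a rotation, this recovers both factors uniquely, which is what makes the hybrid parameterization invertible.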
Abstract: Previous approaches calibrate a camera system by determining the camera parameters from a set of known 3D points used as the calibration reference. In this paper, we present a camera calibration method in which the camera parameters are determined from a set of 3D lines. A set of constraints on the camera parameters is derived from the perspective mapping of lines; from these constraints, the same perspective transformation matrix as in point-based calibration can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Lin, Huang and Faugeras for camera location determination, in which at least 8 line correspondences are required for linear computation of the camera location. Since line segments in an image can be located more easily and accurately than points, using lines as the calibration reference eases the computation in image preprocessing and improves calibration accuracy. Experimental results on calibration together with stereo reconstruction are reported.
Fund: This work is supported by the National Natural Science Foundation of China (Nos. 61505093, 61505190) and the National Key Research and Development Plan (2016YFC0103600)
Abstract: The focal plane of the collimator used for the geometric calibration of an optical camera is a key element of the calibration process. A traditional collimator focal plane has only a single-aperture light lead-in, which makes the calibration accuracy relatively unreliable. Here we demonstrate a multi-aperture micro-electro-mechanical system (MEMS) light lead-in device placed at the optical focal plane of the collimator to calibrate geometric distortion in cameras. Without additional volume or power consumption, the random errors of this calibration system are reduced by the multi-image matrix. With this new construction and a method for implementing the system, reliable high-accuracy calibration of optical cameras is guaranteed.
Abstract: Inadequate geometric accuracy of cameras is the main constraint on improving the precision of infrared horizon sensors with a large field of view (FOV). An enormous FOV with a blind area in the center greatly limits the accuracy and feasibility of traditional geometric calibration methods. A novel camera calibration method for infrared horizon sensors is presented and validated in this paper. Three infrared targets are used as control points, and the camera is mounted on a rotary table; as the table rotates, the control points become evenly distributed across the entire FOV. Compared with traditional methods that combine a collimator and a rotary table, which cannot effectively cover a large FOV and require demanding experimental equipment, this method is easier to implement at low cost. A corresponding three-step parameter estimation algorithm is proposed that avoids precisely measuring the positions of the camera and the control points. Experiments on 10 infrared horizon sensors verify the effectiveness of the calibration method. The results show that the proposed method is highly stable and that its calibration accuracy is at least 30% higher than those of existing methods.
Fund: Co-supported by the National Natural Science Foundation of China (Nos. 42074013 and 41704006)
Abstract: This paper proposes a novel self-calibration method for a large-FoV (field-of-view) camera using real star images. First, based on the classic equisolid-angle projection model and a polynomial distortion model, the inclination of the optical axis with respect to the image plane is thoroughly considered, and a rigorous imaging model with 8 unknown intrinsic parameters is built. Second, the basic calibration equation based on star-vector observations is presented. Third, the partial derivatives of all 11 camera parameters needed to linearize the calibration equation are derived in detail, and an iterative least-squares solution is given. A simulation experiment shows that the new model outperforms the old one. Finally, three experiments were conducted at night in central China, and 671 valid star images were collected. The results indicate that the new method achieves a mean reprojection error magnitude of 0.251 pixels at a 120° FoV, improving calibration accuracy by 38.6% compared with the old calibration model (which does not consider the inclination of the optical axis). When the FoV drops below 20°, the mean reprojection error decreases to 0.15 pixels for both the new and the old model. Since stars are used instead of manual control points, the new method realizes self-calibration, which may be significant for the long-duration navigation of vehicles in unfamiliar or extreme environments, such as those of Mars or Earth's moon.
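The equisolid-angle projection model on which this calibration is built maps the incidence angle theta to the radial image distance r = 2 f sin(theta / 2). A minimal sketch, with an optional odd-polynomial distortion term whose coefficients are purely illustrative (the paper's 8- and 11-parameter models are richer):

```python
import numpy as np

def equisolid_radius(theta, f, k=()):
    """Equisolid-angle fisheye model: ideal radius r = 2 f sin(theta/2),
    optionally perturbed by polynomial distortion terms
    k[0]*theta^3 + k[1]*theta^5 + ... (coefficients illustrative)."""
    r = 2.0 * f * np.sin(theta / 2.0)
    for i, ki in enumerate(k):
        r = r + ki * theta ** (2 * i + 3)
    return r

# A ray 60 degrees off-axis (a 120-degree full FoV) through an
# f = 10 mm fisheye lens: r = 2 * 10 * sin(30 deg) = 10 mm ideal.
theta = np.deg2rad(60.0)
r = equisolid_radius(theta, f=10.0)
```

This ideal radius is what the star-vector observations are compared against; the calibration then estimates f, the distortion coefficients, and the optical-axis inclination.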