In recent years, three-dimensional reconstruction technologies that employ multiple cameras have continued to evolve significantly, enabling remote collaboration among users in Extended Reality (XR) environments. In addition, methods for deploying multiple cameras for motion capture of users (e.g., performers) are widely used in computer graphics. As the need to minimize and optimize the number of cameras grows to reduce costs, various technologies and research approaches focused on Optimal Camera Placement (OCP) continue to be proposed. However, as most existing studies assume homogeneous camera setups, there is a growing demand for studies on heterogeneous camera setups. For instance, technical demands keep emerging in scenarios with minimal camera configurations, especially regarding cost factors, the physical placement of cameras given the spatial structure, and image capture strategies for heterogeneous cameras, such as high-resolution RGB cameras and depth cameras. In this study, we propose a pre-visualization and simulation method for the optimal placement of heterogeneous cameras in XR environments, accounting for both the specifications of the heterogeneous cameras (e.g., field of view) and the physical configuration (e.g., wall layout) of real-world spaces. The proposed method performs a visibility analysis that considers each camera's field-of-view volume, resolution, and unique characteristics, along with physical-space constraints. This analysis allows the method to recommend the optimal position and rotation of each camera, along with the minimum number of cameras required. In our experiments with heterogeneous camera combinations, the proposed method achieved 81.7%-82.7% coverage of the target visual information using only 2-3 cameras, whereas a single (homogeneous) camera type required 11 cameras for 81.6% coverage. Accordingly, we found that the proposed approach can reduce camera deployment resources.
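As a rough illustration of the visibility analysis described above, the following sketch (Python/numpy, hypothetical camera parameters; not the paper's actual algorithm) tests which sampled target points fall inside each camera's cone-shaped FOV and reports the joint coverage fraction:

```python
# Minimal sketch of an FOV-based visibility check, the kind of test underlying
# the coverage percentages above; all camera parameters are placeholders.
import numpy as np

def visible(points, cam_pos, cam_fwd, fov_deg, max_range):
    """Boolean mask: which target points lie inside a cone-shaped FOV."""
    v = points - cam_pos                      # vectors from camera to targets
    dist = np.linalg.norm(v, axis=1)
    cosang = (v @ cam_fwd) / np.maximum(dist, 1e-9)
    return (dist < max_range) & (cosang > np.cos(np.radians(fov_deg / 2)))

# Target points sampled over a capture volume; two heterogeneous cameras
# (e.g., a wide-FOV depth camera and a narrow-FOV RGB camera).
pts = np.random.default_rng(1).uniform(-2, 2, size=(1000, 3))
depth_cam = visible(pts, np.array([0., -3, 1]), np.array([0., 1, 0]), 85, 5.0)
rgb_cam   = visible(pts, np.array([3., 0, 1]), np.array([-1., 0, 0]), 60, 8.0)
print("coverage:", np.mean(depth_cam | rgb_cam))  # fraction seen by any camera
```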
Photomechanics is a crucial branch of solid mechanics. The localization of point targets is a fundamental problem in optical experimental mechanics, with extensive applications in various missions of unmanned aerial vehicles. Localizing moving targets is crucial for analyzing their motion characteristics and dynamic properties. Reconstructing the trajectories of points from asynchronous cameras is a significant challenge: it encompasses two coupled sub-problems, trajectory reconstruction and camera synchronization, and existing methods typically address only one of them. This paper proposes a 3D trajectory reconstruction method for point targets based on asynchronous cameras that solves both sub-problems simultaneously. Firstly, we extend the trajectory intersection method to asynchronous cameras to remove the limitation of traditional triangulation, which requires camera synchronization. Secondly, we develop models for camera temporal information and target motion based on the imaging mechanism and the target's dynamic characteristics; the parameters are optimized jointly to achieve trajectory reconstruction without accurate time parameters. Thirdly, we optimize the camera rotations alongside the camera time information and target motion parameters, using tighter and more continuous constraints on the moving points. The reconstruction accuracy is significantly improved, especially when the camera rotations are inaccurate. Finally, simulated and real-world experimental results demonstrate the feasibility and accuracy of the proposed method. The real-world results indicate that the proposed algorithm achieved a localization error of 112.95 m at an observation distance of 15-20 km.
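The joint estimation idea can be sketched as a least-squares problem over a motion model and per-camera time offsets fitted to bearing rays. The linear motion model, the synthetic setup, and all names below are illustrative assumptions, not the paper's implementation:

```python
# Sketch: jointly fit a target trajectory and an inter-camera time offset to
# bearing observations from two unsynchronized cameras (noise-free synthetic data).
import numpy as np
from scipy.optimize import least_squares

def traj(coeffs, t):
    a0, a1 = coeffs[:3], coeffs[3:]           # linear motion model p(t) = a0 + a1*t
    return a0 + a1 * t

true_c = np.array([0., 0., 100., 5., 2., -1.])
cams = []
for c, dt_true in [(np.zeros(3), 0.0), (np.array([50., 0., 0.]), 0.03)]:
    t_world = np.linspace(0., 2., 9)          # true (unknown) world times
    dirs = np.array([(traj(true_c, t) - c) / np.linalg.norm(traj(true_c, t) - c)
                     for t in t_world])       # ideal unit bearing rays
    cams.append((c, dirs, t_world - dt_true)) # cameras report offset local stamps

def residuals(p):
    dts, coeffs = (0.0, p[0]), p[1:]          # camera 0 defines the time origin
    res = []
    for dt, (c, dirs, s) in zip(dts, cams):
        for d, si in zip(dirs, s):
            v = traj(coeffs, si + dt) - c
            res.extend(v - (v @ d) * d)       # component perpendicular to the ray
    return np.asarray(res)

sol = least_squares(residuals, np.zeros(7))
print("recovered offset:", sol.x[0])          # ~0.03 on this synthetic setup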
The estimation of orientation parameters and the correction of lens distortion are crucial problems in Unmanned Aerial Vehicle (UAV) photogrammetry. In recent years, the use of UAVs for aerial photogrammetry has surged in popularity. Typically, UAVs are equipped with low-cost non-metric cameras and a Position and Orientation System (POS). Unfortunately, the Interior Orientation Parameters (IOPs) of non-metric cameras are not fixed, and lens distortions, whether large or small, affect the image coordinates accordingly. Additionally, Inertial Measurement Units (IMUs) often have observation errors. To address these challenges and improve parameter estimation for UAV Light Detection and Ranging (LiDAR) and photogrammetry, this paper analyzes the accuracy of POS observations obtained from Global Navigation Satellite System Real-Time Kinematic (GNSS-RTK) and IMU data. A method that incorporates additional known conditions for parameter estimation is proposed, together with a series of algorithms that simultaneously solve for IOPs, Exterior Orientation Parameters (EOPs), and camera lens distortion correction parameters. Extensive experiments demonstrate that the coordinates measured by GNSS-RTK can be used directly as linear EOPs; however, angular EOP measurements from IMUs exhibit relatively large errors compared with adjustment results and require correction during the adjustment process. The IOPs of non-metric cameras vary slightly between images but need to be treated as unknown parameters in high-precision applications. Furthermore, the Ebner systematic error model is found to be sensitive to the choice of the magnification parameter of the photographic baseline length; it should be set to at most one third of the photographic baseline to ensure stable solutions.
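For readers unfamiliar with non-metric lens distortion, a standard model of the kind such self-calibrating adjustments estimate is the Brown-Conrady model; the coefficient values below are placeholders, and the paper's exact parameterization may differ:

```python
# Brown-Conrady lens distortion applied to normalized image coordinates:
# radial terms (k1, k2) and tangential/decentering terms (p1, p2).
import numpy as np

def distort(xy, k1, k2, p1, p2):
    """Apply radial and tangential distortion to normalized coordinates."""
    x, y = xy[..., 0], xy[..., 1]
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.stack([xd, yd], axis=-1)

pt = np.array([[0.3, -0.2]])                  # normalized image coordinates
print(distort(pt, k1=-0.12, k2=0.03, p1=1e-4, p2=-2e-4))
```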
High-resolution remote sensing data has been applied in many fields around the world, such as national security, economic construction, and the daily life of the general public, creating a huge market. Commercial remote sensing cameras have been developed vigorously throughout the world over the last few decades, with resolutions now down to 0.31 m. In 2010, the Chinese government approved the implementation of the China High-resolution Earth Observation System (CHEOS) Major Special Project, giving priority to the development of high-resolution remote sensing satellites. More than half of CHEOS has been constructed to date, and 5 satellites operate in orbit. These cameras have different characteristics, and a number of innovative technologies have been adopted, leading to leaps in camera performance. The resulting products and production capability have raised the remote sensing technology to a level on a par with Europe and the US.
To transfer color data from a device-dependent (video camera) color space into a device-independent color space, a multilayer feedforward network with the error backpropagation (BP) learning rule was used as a nonlinear transformer realizing the mapping from the RGB color space to the CIELAB color space. Different network structures yielded a variety of mapping accuracies. BP neural networks can provide satisfactory mapping accuracy for color space transformation of video cameras.
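A minimal sketch of such an RGB-to-CIELAB mapping network, using sklearn's MLPRegressor as a stand-in for the paper's BP network, with random placeholder data in place of measured (RGB, LAB) calibration pairs:

```python
# Small feedforward network mapping camera RGB to device-independent LAB.
# Training pairs here are random placeholders; in practice they would come
# from colorimetric measurements of calibration patches.
import numpy as np
from sklearn.neural_network import MLPRegressor

rgb_train = np.random.default_rng(0).uniform(0, 1, size=(500, 3))  # camera RGB
lab_train = np.random.default_rng(1).uniform(0, 1, size=(500, 3))  # measured LAB (placeholder)

net = MLPRegressor(hidden_layer_sizes=(16, 16), activation="logistic",
                   solver="adam", max_iter=2000)
net.fit(rgb_train, lab_train)
lab_pred = net.predict(rgb_train[:5])         # device-independent estimates
```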
Because of its simple algorithm and hardware, optical flow-based motion estimation has become a hot research field, especially for GPS-denied environments. Optical flow can be used to obtain aircraft motion information, but the six-degree-of-freedom (6-DOF) motion still cannot be accurately estimated by existing methods. The purpose of this work is to provide a motion estimation method based on optical flow from forward- and down-looking cameras that does not rely on the assumption of level flight. First, the distribution and decoupling of the optical flow from the forward camera are utilized to obtain the attitude. Then, the resulting angular velocities are used to obtain the translational optical flow of the down camera, which eliminates the influence of rotational motion on velocity estimation. Besides, the translational motion estimation equation is simplified by establishing the relation between the depths of feature points and the aircraft altitude. Finally, simulation results show that the presented method is accurate and robust.
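The de-rotation step can be sketched with the standard motion-field equations for normalized image coordinates; the sign convention follows one common formulation and all numbers are illustrative assumptions:

```python
# Subtract rotation-induced flow (from the estimated angular velocity) so that
# only the translational component of the measured flow remains.
import numpy as np

def rotational_flow(x, y, wx, wy, wz):
    """Flow at normalized image point (x, y) due to pure rotation (wx, wy, wz)."""
    u = wx * x * y - wy * (1 + x**2) + wz * y
    v = wx * (1 + y**2) - wy * x * y - wz * x
    return np.array([u, v])

flow_measured = np.array([0.02, -0.01])       # measured flow at (x, y), placeholder
flow_trans = flow_measured - rotational_flow(0.1, -0.2, wx=0.01, wy=0.005, wz=0.0)
print(flow_trans)                             # translational component only
```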
A novel color compensation method for multi-view video coding (MVC) is proposed, which efficiently exploits the inter-view dependencies between views in the presence of color mismatch caused by the diversity of cameras. A color compensation model is developed in the RGB channels and then extended to the YCbCr channels for practical use. A modified inter-view reference picture is constructed based on the color compensation model, which is more similar to the coding picture than the original inter-view reference picture. Moreover, the color compensation factors can be derived in both the encoder and the decoder, so no additional data need to be transmitted to the decoder. The experimental results show that the proposed method improves the coding efficiency of MVC and maintains good subjective quality.
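One simple way to realize such a decoder-derivable compensation is a per-channel gain and offset computed from channel statistics available to both encoder and decoder; the sketch below illustrates the idea, not the paper's exact model:

```python
# Per-channel linear compensation: gain from the std-dev ratio, offset from
# the means, so both sides can re-derive the factors from the same data.
import numpy as np

def compensate(ref, target):
    """Map the reference view's channel statistics onto the coding view's."""
    out = np.empty_like(ref, dtype=float)
    for ch in range(ref.shape[-1]):
        a = target[..., ch].std() / max(ref[..., ch].std(), 1e-9)  # gain
        b = target[..., ch].mean() - a * ref[..., ch].mean()       # offset
        out[..., ch] = a * ref[..., ch] + b
    return out

rng = np.random.default_rng(0)
view_a = rng.uniform(0, 255, size=(64, 64, 3))     # inter-view reference picture
view_b = view_a * 0.9 + 12                         # same scene, color mismatch
print(np.abs(compensate(view_a, view_b) - view_b).max())  # ~0 after compensation
```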
This paper describes a multiple-camera method to reconstruct the 3D shape of a human foot. From a foot database, an initial 3D model of the foot, represented by a cloud of points, is built. The shape parameters, which can characterize more than 92% of a foot, are defined using the principal component analysis method. Then, using active shape models, the initial 3D model is adapted to the real foot captured in multiple images by applying constraints on edge-point distance and color variance. We focus here on the experimental part, where we demonstrate the efficiency of the proposed method on a plastic foot model as well as on real human feet with various shapes. We propose and compare different ways of texturing the foot, which is needed for reconstruction. Based on the results of experiments performed on the plastic foot model and on human feet, we propose two different ways to improve the accuracy of the final 3D shape. The first is densification of the cloud of points used to represent the initial model and the foot database. The second concerns the projected patterns used to texture the foot. We conclude by showing the results obtained for a human foot, with an average computed shape error of only 1.06 mm.
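A compact sketch of the PCA shape model described above, with random placeholder data standing in for the foot database:

```python
# PCA over a database of flattened foot point clouds: keep enough modes to
# explain ~92% of the variance, then synthesize a shape from mode coefficients.
import numpy as np

db = np.random.default_rng(0).normal(size=(40, 300))  # 40 feet x (100 points * xyz)
mean = db.mean(axis=0)
u, s, vt = np.linalg.svd(db - mean, full_matrices=False)
var = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var), 0.92)) + 1    # modes covering >=92% variance

b = np.zeros(k)                                       # shape parameters of a new foot
b[0] = 2.0 * s[0] / np.sqrt(len(db))                  # e.g., ~2 std devs along mode 0
shape = mean + vt[:k].T @ b                           # reconstructed foot (flattened)
```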
This paper presents a real-time, dynamic system that uses high-resolution gimbals and motorized lenses with position encoders on their zoom and focus elements to "recalibrate" the system as needed to track a target. Systems that initially calibrate a mapping between the pixels of a wide field-of-view (FOV) master camera and the pan-tilt (PT) settings of a steerable narrow-FOV slave camera assume that the target travels on a plane. As the target travels through the FOV of the master camera, the slave camera's PT settings are adjusted to keep the target centered within its FOV. In this paper, we describe a system we have developed that allows both cameras to move and extracts the 3D coordinates of the target. This is done with only a single initial calibration between pairs of cameras and high-resolution pan-tilt-zoom (PTZ) platforms. Using the information from the PT settings of the PTZ platform as well as precalibrated settings of a preset zoom lens, the 3D coordinates of the target are extracted and their accuracy is compared with that of a laser range finder and of a static-dynamic camera pair.
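At the core of such a system is triangulation from two pan-tilt rays. A minimal sketch with placeholder geometry, using the midpoint of the closest approach between the two rays:

```python
# Convert each PTZ unit's pan/tilt reading into a ray, then take the midpoint
# of the shortest segment between the two rays as the 3D target estimate.
import numpy as np

def ray_from_pt(pan, tilt):
    """Unit direction for pan (azimuth) and tilt (elevation) in radians."""
    return np.array([np.cos(tilt) * np.sin(pan),
                     np.sin(tilt),
                     np.cos(tilt) * np.cos(pan)])

def midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two skew rays."""
    n = np.cross(d1, d2)
    t1 = np.dot(np.cross(o2 - o1, d2), n) / np.dot(n, n)
    t2 = np.dot(np.cross(o2 - o1, d1), n) / np.dot(n, n)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

o1, o2 = np.zeros(3), np.array([2.0, 0.0, 0.0])   # camera centers, 2 m baseline
target = np.array([1.0, 0.5, 10.0])
d1 = (target - o1) / np.linalg.norm(target - o1)  # ideal rays toward the target
d2 = (target - o2) / np.linalg.norm(target - o2)
print(midpoint(o1, d1, o2, d2))                   # -> [1.0, 0.5, 10.0]
```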
This paper proposes a self-position estimation algorithm for multiple mobile robots, where each robot uses two omnidirectional cameras and an accelerometer. In recent years, the Great East Japan Earthquake and other large-scale disasters have occurred frequently in Japan, so the development of search robots that support rescue teams in relief activities at large-scale disasters is indispensable. This research has therefore developed a search robot group system with two or more mobile robots. Each search robot is equipped with two omnidirectional cameras and an accelerometer. To perform distance measurement using the two omnidirectional cameras, the parameters of each omnidirectional camera and the relative position and posture between the two cameras must be calibrated in advance. If there are few mobile robots, the calibration time of each omnidirectional camera does not pose a problem; however, if the calibration is performed separately when using many robots at a disaster site, it takes a huge amount of time. This paper therefore proposes an algorithm that simultaneously estimates a mobile robot's position and the relative position and posture between its two omnidirectional cameras. The proposed algorithm extends the Nonlinear Transformation (NLT) method. Simulation experiments were conducted to check the validity of the proposed algorithm. In these experiments, one mobile robot moves and observes the circumference of another mobile robot that has stopped at a certain place. We verified whether the mobile robot can estimate its position using the measurement values when the number of observations reaches 10 at observation intervals of n/18. The simulation results show the effectiveness of the algorithm.
The large-scale TV program, Gems of the Country, has had several airings on prime-time CCTV and has been warmly received each time, winning the unanimous praise of viewers. The program actively promotes Chinese national culture, boosting national morale and bringing the splendid culture of China to the world. The initiator, chief planner, and chief director of this program is Li Dongge, a graduate of the Xi'an University of
This paper presents a method of estimating the spatiotemporal distribution of pedestrians using surveillance cameras. We estimate the distribution in the Umeda underground mall without tracking technology, so pedestrians' privacy is protected. Lately, the spatiotemporal distribution of pedestrians has become increasingly important in urban planning, disaster prevention planning, marketing, and other fields. Although many researchers have tried to capture location information using various sensors, some problems remain, such as the cost of the sensors and the restriction that only people carrying a suitable device can be captured. Against this background, we develop an original labelling algorithm and estimate the spatiotemporal distribution of pedestrians, along with their passing times and directions, from sequential images of a surveillance camera.
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras using a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric and pinhole models is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not require fitting any partially visible conic, and the projected images of the 2D calibration pattern can easily cover the whole image, so our method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of the proposed calibration method.
AIM: To explore the feasibility of dual camera capsule (DCC) small-bowel (SB) imaging and to examine whether the two cameras complement each other to detect more SB lesions. METHODS: Forty-one eligible, consecutive patients underwent DCC SB imaging. Two experienced investigators examined the videos and compared the total number of detected lesions to the number detected by each camera separately. Examination tolerability was assessed using a questionnaire. RESULTS: One patient was excluded. The DCC cameras detected 68 positive findings (POS) in 20 (50%) cases. Fifty of these were detected by the "yellow" camera, 48 by the "green" camera, and 28 by both cameras; 44% (n=22) of the "yellow" camera's POS were not detected by the "green" camera, and 42% (n=20) of the "green" camera's POS were not detected by the "yellow" camera. In two cases, only one camera detected significant findings. All participants had 216 findings of unknown significance (FUS). The "yellow" camera, the "green" camera, and both cameras detected 171, 161, and 116 FUS, respectively; 32% (n=55) of the "yellow" camera's FUS were not detected by the "green" camera, and 28% (n=45) of the "green" camera's FUS were not detected by the "yellow" camera. There were no complications related to the examination, and 97.6% of the patients would repeat the examination if necessary. CONCLUSION: DCC SB examination is feasible and well tolerated. The two cameras complement each other to detect more SB lesions.
The geometric accuracy of topographic mapping with high-resolution remote sensing images is inevitably affected by orbiter attitude jitter. Therefore, it is necessary to conduct preliminary research on the stereo mapping camera carried on a lunar orbiter before launch. In this work, an imaging simulation method considering attitude jitter is presented. The impact of jitter on terrain undulation is analyzed by simulating jitter in each of the three attitude angles. The proposed simulation method is based on the rigorous sensor model, using a lunar digital elevation model (DEM) and orthoimage as reference data. The orbit and attitude of the lunar stereo mapping camera are simulated while considering attitude jitter, and two-dimensional simulated stereo images are generated according to the position and attitude of the orbiter in a given orbit. Experimental analyses were conducted on the DEM generated from the simulated stereo images. The simulation results demonstrate that the proposed method ensures imaging efficiency without losing topographic mapping accuracy. The effect of attitude jitter on the stereo mapping accuracy of the simulated images was analyzed through a DEM comparison.
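A common way to inject attitude jitter into such a simulation is a small sinusoidal perturbation added to each attitude angle over line-acquisition time; the amplitudes and frequencies below are illustrative, not the paper's settings:

```python
# Sinusoidal attitude jitter added to nominal roll/pitch/yaw, evaluated at the
# per-line acquisition times of a pushbroom-style stereo mapping camera.
import numpy as np

def jitter(t, amp_deg, freq_hz, phase=0.0):
    """Sinusoidal attitude perturbation (radians) at acquisition times t (s)."""
    return np.radians(amp_deg) * np.sin(2 * np.pi * freq_hz * t + phase)

t = np.linspace(0.0, 10.0, 1001)              # per-line acquisition times
roll  = 0.0 + jitter(t, amp_deg=1e-3, freq_hz=0.5)
pitch = 0.0 + jitter(t, amp_deg=1e-3, freq_hz=0.7, phase=1.0)
yaw   = 0.0 + jitter(t, amp_deg=1e-3, freq_hz=0.3, phase=2.0)
```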
Accurate vehicle localization is a key technology for autonomous driving tasks in indoor parking lots, such as automated valet parking. Additionally, infrastructure-based cooperative driving systems have become a means of realizing intelligent driving. In this paper, we propose a novel and practical vehicle localization system using infrastructure-based RGB-D cameras for indoor parking lots. In the proposed system, we design a simple and efficient depth-data preprocessing method to reduce the computational burden resulting from the large amount of data. Meanwhile, hardware synchronization of all cameras in the sensor network is not implemented, since it is extremely cumbersome and would significantly reduce the scalability of our system in mass deployments. Hence, to address the data distortion that accompanies vehicle motion, we propose a vehicle localization method that performs template point cloud registration on distributed depth data. Finally, a complete hardware system was built to verify the feasibility of our solution in a real-world environment. Experiments in an indoor parking lot demonstrated the effectiveness and accuracy of the proposed vehicle localization system, with a maximum root mean squared error of 5 cm at 15 Hz compared with the ground truth.
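Template point cloud registration of the kind described can be sketched as a small ICP loop, shown here in 2D (ground plane) with nearest neighbours from scipy and a Kabsch/SVD pose update; the vehicle template and scan are synthetic placeholders:

```python
# Minimal 2D ICP: nearest-neighbour matching plus an SVD rigid-pose update.
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(template, scan, iters=20):
    """Estimate R, t aligning template to scan (both N x 2)."""
    moved = template.copy()
    R_tot, t_tot = np.eye(2), np.zeros(2)
    tree = cKDTree(scan)
    for _ in range(iters):
        dst = scan[tree.query(moved)[1]]              # nearest-neighbour matches
        sc, dc = moved.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((moved - sc).T @ (dst - dc))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:                     # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = dc - dR @ sc
        moved = moved @ dR.T + dt
        R_tot, t_tot = dR @ R_tot, dR @ t_tot + dt    # compose incremental pose
    return R_tot, t_tot

# Synthetic vehicle footprint and a rotated/translated depth scan of it.
rng = np.random.default_rng(0)
template = rng.uniform(0, 1, size=(200, 2)) * [4.5, 1.8]   # ~car-sized region
ang = np.radians(10)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
scan = template @ R_true.T + [1.0, 0.5]
R, t = icp_2d(template, scan)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)    # expect ~10 deg, ~[1.0, 0.5]
```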
Three-dimensional (3D) modeling is an important topic in computer graphics and computer vision. In recent years, the introduction of consumer-grade depth cameras has resulted in profound advances in 3D modeling. Starting with the basic data structure, this survey reviews the latest developments of 3D modeling based on depth cameras, including research on camera tracking, 3D object and scene reconstruction, and high-quality texture reconstruction. We also discuss future work and possible solutions for 3D modeling based on depth cameras.
It is well known that the accuracy of camera calibration is constrained by the size of the reference plate, and it is difficult to fabricate large reference plates with high precision. Therefore, it is non-trivial to calibrate a camera with a large field of view (FOV). In this paper, a method is proposed to construct a virtual large reference plate with high precision. Firstly, a high-precision datum plane is constructed with a laser interferometer and a one-dimensional air guideway, and the reference plate is then positioned at different locations and orientations in the FOV of the camera. The feature points of the reference plate are projected onto the datum plane to obtain a virtual large high-precision reference plate. The camera is moved to several positions to obtain different virtual reference plates, and the camera is calibrated with these virtual reference plates. The experimental results show that the mean re-projection error of a camera calibrated with the proposed method is 0.062 pixels. The length of a scale bar with a standard length of 959.778 mm was measured with a vision system composed of two calibrated cameras, and the length measurement error was 0.389 mm.
Due to the electronic rolling shutter, high-speed Complementary Metal-Oxide Semiconductor (CMOS) aerial cameras are generally subject to geometric distortions, which cannot be perfectly corrected by conventional vision-based algorithms. In this paper we propose a novel approach to the problem of rolling shutter distortion in aerial imaging. A mathematical model is established by the coordinate transformation method; it can directly calculate the pixel distortion when an aerial camera is imaging at arbitrary attitude angles. All pixel distortions then form a distortion map over the whole CMOS array, and the map is exploited in the image rectification process incorporating reverse projection. The error analysis indicates that, within the margin of measurement error, the final calculation error of our model is less than 1/2 pixel. The experimental results show that our approach yields good rectification performance on a series of images with different distortions. We demonstrate that our method outperforms other vision-based algorithms in terms of computational complexity, which makes it more suitable for aerial real-time imaging.
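The row-by-row exposure geometry behind such a distortion map can be sketched as follows; the per-row readout time and image-plane velocity are placeholders:

```python
# Rolling-shutter distortion map: each row is exposed t_row later than the
# previous one, so a constant image-plane velocity shifts row i by v * i * t_row.
import numpy as np

rows, cols = 1024, 1280
t_row = 15e-6                                 # per-row readout time (s), placeholder
vx, vy = 800.0, -120.0                        # image-plane velocity (px/s), placeholder

i = np.arange(rows)[:, None] * np.ones((1, cols))   # row index of every pixel
dx = vx * i * t_row                           # horizontal shift accumulated per row
dy = vy * i * t_row                           # vertical shift accumulated per row
# Rectification would resample pixel (x, y) from (x - dx, y - dy).
print(dx[-1, 0], dy[-1, 0])                   # distortion at the last row (~12.3, ~-1.8 px)
```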
γ-rays are widely and abundantly present in strong nuclear radiation environments. When they act on the camera equipment used to obtain environmental visual information for nuclear robots, radiation effects occur that degrade the performance of the camera system, reduce imaging quality, and can even cause catastrophic consequences. Color reducibility is an important index for evaluating the imaging quality of a color camera, but its degradation mechanism in a nuclear radiation environment is still unclear. In this paper, γ-ray irradiation experiments on CMOS cameras were carried out to analyze how the cameras' color reducibility degrades with cumulative irradiation and to reveal the degradation mechanism of the color information of a CMOS camera under γ-ray irradiation. The results show that the spectral response of the CMOS image sensor (CIS) and the spectral transmittance of the lens after irradiation affect the values of a* and b* in the LAB color model, while the full well capacity (FWC) of the CIS and the transmittance of the lens affect the value of L*. These changes increase the color difference and reduce brightness, and the combined effect of color difference and brightness degradation reduces the color reducibility of CMOS cameras. Therefore, the degradation of the color information of a CMOS camera after γ-ray irradiation mainly comes from changes in the FWC and spectral response of the CIS and in the spectral transmittance of the lens.
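The color-difference side of such an analysis is commonly quantified with a CIELAB Delta E; below is a minimal sketch with placeholder measurements (the paper's exact metric may differ):

```python
# CIE76 Delta E between pre- and post-irradiation L*a*b* measurements of a
# color patch; a larger Delta E indicates worse color reducibility.
import numpy as np

def delta_e(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2))

before = (62.0, 18.5, -7.2)                   # patch measurement (placeholder)
after  = (55.3, 16.1, -3.9)                   # same patch after irradiation (placeholder)
print(delta_e(before, after))
```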