To enhance the image motion compensation accuracy of off-axis three-mirror anastigmatic (TMA) three-line array aerospace mapping cameras, a new method of image motion velocity field modeling is proposed in this paper. First, based on the imaging principle of mapping cameras, an analytical expression for the image motion velocity of off-axis TMA three-line array aerospace mapping cameras is derived from the established coordinate systems and the attitude dynamics principle. Then, a case study of a three-line array mapping camera is presented, in which the focal plane image motion velocity fields of the forward-view, nadir-view, and backward-view cameras are simulated, and optimization schemes for image motion velocity matching and drift angle matching are formulated according to the simulation results. Finally, the method is verified with a dynamic imaging experimental system. The results indicate that when image motion compensation for the nadir-view camera is conducted using the proposed image motion velocity field model, the line pairs of target images at the Nyquist frequency are clear and distinguishable. Under the constraint that the modulation transfer function (MTF) decreases by no more than 5%, when the horizontal frequencies of the forward-view and backward-view cameras are adjusted uniformly according to the proposed image motion velocity matching scheme, the time delay integration (TDI) stages reach at most 6. When the TDI stages exceed 6, the three groups of cameras undergo horizontal frequency adjustment independently. However, when the proposed drift angle matching scheme is adopted for uniform drift angle adjustment, the number of TDI stages will not exceed 81.
The experimental results demonstrate the validity and accuracy of the proposed image motion velocity field model and matching optimization scheme, providing a reliable basis for on-orbit image motion compensation of aerospace mapping cameras.
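As a loose illustration of the quantities the matching schemes adjust (not the paper's derivation; the function name, units, and flat-focal-plane assumption are ours), the drift angle and the TDI line (horizontal) frequency can be obtained from the two components of an image motion velocity field sample:

```python
import math

def drift_angle_and_line_rate(vx, vy, pixel_size):
    """Given focal-plane image motion velocity components (m/s) along the
    TDI scan direction (vx) and across it (vy), return the drift angle (rad)
    by which the image must be rotated so the cross-scan motion vanishes,
    and the line frequency (Hz) that synchronizes the TDI charge transfer
    with the along-scan image motion."""
    drift_angle = math.atan2(vy, vx)      # angle between velocity and scan axis
    speed = math.hypot(vx, vy)            # magnitude of the image motion velocity
    line_frequency = speed / pixel_size   # rows per second needed for TDI sync
    return drift_angle, line_frequency
```

With, say, a 7 um pixel and a purely along-scan velocity of 0.07 m/s, the required line rate is 10 kHz and the drift angle is zero.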
In recent years, three-dimensional reconstruction technologies that employ multiple cameras have continued to evolve significantly, enabling remote collaboration among users in extended reality (XR) environments. In addition, methods for deploying multiple cameras for motion capture of users (e.g., performers) are widely used in computer graphics. As the need to minimize and optimize the number of cameras grows in order to reduce costs, various technologies and research approaches focused on Optimal Camera Placement (OCP) continue to be proposed. However, as most existing studies assume homogeneous camera setups, there is a growing demand for studies on heterogeneous camera setups. For instance, technical demands keep emerging in scenarios with minimal camera configurations, especially regarding cost factors, the physical placement of cameras given the spatial structure, and image capture strategies for heterogeneous cameras, such as high-resolution RGB cameras and depth cameras. In this study, we propose a pre-visualization and simulation method for the optimal placement of heterogeneous cameras in XR environments, accounting for both the specifications of heterogeneous cameras (e.g., field of view) and the physical configuration (e.g., wall layout) of real-world spaces. The proposed method performs a visibility analysis by considering each camera's field-of-view volume, resolution, and unique characteristics, along with physical-space constraints. This approach enables the optimal position and rotation of each camera to be recommended, along with the minimum number of cameras required. In experiments conducted with heterogeneous camera combinations, the proposed method achieved 81.7% to 82.7% coverage of the target visual information using only 2 to 3 cameras. In contrast, a single-type (homogeneous) camera setup required 11 cameras to reach 81.6% coverage. Accordingly, we found that camera deployment resources can be reduced with the proposed approach.
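The visibility analysis and camera-count minimization described above can be loosely sketched as a greedy coverage search over candidate camera poses. This is an illustrative simplification (idealized cone-shaped field of view, no wall occlusion, hypothetical function names), not the authors' algorithm:

```python
import math

def visible(cam_pos, cam_dir, fov_deg, max_range, point):
    """Point-visibility test for an idealized camera: the point must lie
    inside the view cone and within range (wall occlusion is ignored)."""
    dx = [p - c for p, c in zip(point, cam_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0 or dist > max_range:
        return False
    norm = math.sqrt(sum(v * v for v in cam_dir))
    cosang = sum(d * v for d, v in zip(dx, cam_dir)) / (dist * norm)
    return cosang >= math.cos(math.radians(fov_deg / 2))

def greedy_placement(candidates, targets, budget):
    """Pick up to `budget` candidate cameras (pos, dir, fov_deg, range
    tuples) that greedily maximize the number of covered target points."""
    covered, chosen = set(), []
    for _ in range(budget):
        best, gain = None, 0
        for cam in candidates:
            g = sum(1 for i, t in enumerate(targets)
                    if i not in covered and visible(*cam, t))
            if g > gain:
                best, gain = cam, g
        if best is None:
            break                      # no camera adds coverage
        chosen.append(best)
        covered |= {i for i, t in enumerate(targets) if visible(*best, t)}
    return chosen, len(covered) / len(targets)
```

Heterogeneous setups are modeled simply by giving each candidate its own field of view and range; greedy selection is a common baseline because coverage is a submodular objective.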
To address the challenges of high-precision optical surface defect detection, we propose a novel design for a wide-field, broadband light field camera. The proposed system achieves a 50° field of view and operates at both visible and near-infrared wavelengths. Using the principles of light field imaging, the design enables 3D reconstruction of optical surfaces, and thus vertical surface height measurements with enhanced accuracy. Using Zemax-based simulations, we evaluate the system's modulation transfer function, its optical aberrations, and its tolerance to shape variations through Zernike coefficient adjustments. The results demonstrate that the camera achieves the required spatial resolution while maintaining high imaging quality, offering a promising solution for advanced optical surface defect inspection.
Vascular abnormalities are closely associated with the pathogenesis and progression of numerous diseases, such as thrombosis, tumors, and diabetes. Blood flow velocity serves as a critical biomarker for evaluating perfusion status. Quantitative detection of full-field blood flow variations in lesion areas holds significant scientific and clinical value for pathological studies, diagnosis, and intraoperative monitoring of related diseases. While laser speckle contrast imaging (LSCI) enables full-field blood flow visualization, its reliance on frame-based sensors necessitates handling massive data volumes, leading to inherent trade-offs among spatiotemporal resolution, real-time performance, and quantitative capability. Leveraging the asynchronous dynamic sensing, high temporal sampling rate, and low data redundancy of event cameras, this study proposes a quantitative blood flow imaging method termed laser speckle event imaging (LSEI). Experiments using off-the-shelf event cameras demonstrate that LSEI achieves real-time blood flow imaging with minimal computational overhead compared with frame-based LSCI. Furthermore, we investigate the relationship between event data streams and flow velocity through spatial-temporal autocorrelation analysis, enabling quantitative measurements without compromising temporal or spatial resolution. In in vivo imaging experiments of mouse ear blood flow, LSEI exhibits superior imaging detail and real-time performance over conventional methods. The proposed approach holds promise as an efficient tool for diagnosis, therapeutic evaluation, and research on vascular-related diseases.
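The link between event streams and flow velocity rests on temporal autocorrelation: faster flow decorrelates the speckle pattern sooner, so a per-pixel activity series loses self-similarity at shorter lags. A toy sketch of that idea (pure Python, hypothetical names; the paper's actual estimator is not reproduced here):

```python
def autocorr(series, lag):
    """Normalized temporal autocorrelation of a per-pixel activity
    (e.g., event count) series at a given lag."""
    n = len(series) - lag
    mean = sum(series) / len(series)
    num = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den if den else 0.0

def decorrelation_lag(series, threshold=0.5):
    """First lag at which the autocorrelation falls below `threshold`;
    its inverse can serve as a relative flow-speed index, since faster
    flow decorrelates the speckle in fewer time steps."""
    for lag in range(1, len(series)):
        if autocorr(series, lag) < threshold:
            return lag
    return len(series)
```

A rapidly alternating series decorrelates at lag 1, while a slowly varying one stays correlated over several lags, mirroring the fast-flow versus slow-flow distinction.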
LiDAR and cameras are two of the most common sensors in robot perception, autonomous driving, augmented reality, and virtual reality, where they are widely used for tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of these perception tasks. Extrinsic calibration between cameras and LiDAR is a fundamental prerequisite for multimodal systems. Recently, extensive studies have been conducted on the calibration of extrinsic parameters. Although several calibration methods facilitate sensor fusion, a comprehensive summary for researchers and, especially, non-expert users is lacking. Thus, we present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the calibration information sources, this study classifies these methods as target-based or targetless. For each type, we further classify methods according to the features or constraints used in the calibration process, and introduce their detailed implementations and key characteristics. Thereafter, calibration-accuracy evaluation methods are presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
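The extrinsic parameters being calibrated are the rotation R and translation t that map LiDAR points into the camera frame; once known, LiDAR points can be projected onto the image for fusion. A minimal pinhole-projection sketch (illustrative function name, no lens distortion model):

```python
def project_lidar_point(point_lidar, R, t, fx, fy, cx, cy):
    """Apply the extrinsic transform (R, t) to move a LiDAR point into the
    camera frame, then project it with a pinhole intrinsic model.
    Returns pixel (u, v), or None if the point lies behind the camera."""
    # Camera-frame coordinates: p_cam = R @ p_lidar + t
    x, y, z = (sum(R[i][j] * point_lidar[j] for j in range(3)) + t[i]
               for i in range(3))
    if z <= 0:
        return None                       # behind the image plane
    return fx * x / z + cx, fy * y / z + cy
```

Target-based methods estimate (R, t) by aligning detected target features in both modalities; targetless methods exploit natural edges or motion, but both ultimately feed a projection like this one.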
[Objective] This study aimed to understand the genetic dynamics of three-line hybrid rice and to explore the respective effects of sterile lines and restoring lines on the grain characters of hybrid rice. [Method] Four three-line sterile lines and 27 restoring lines (cultivars) commonly cultivated in the Central China region were used as experimental materials in a 4×27 NCII cross design, and the grain characters of three-line hybrid rice were analyzed at the genetic and correlation levels. [Result] Four characters (grain length, grain width, 1 000-grain weight, and length-width ratio) were dominated by additive gene effects; these four characters were influenced by both the male and female parents, but the effect of the male parent was relatively larger. Grain length, grain width, 1 000-grain weight, and length-width ratio all had high broad-sense heritabilities (99.65%, 98.31%, 95.27%, and 98.81%, respectively). Correlation analysis showed that grain length was positively correlated with 1 000-grain weight and length-width ratio at an extremely significant level; 1 000-grain weight was positively correlated with grain length and length-width ratio at an extremely significant level and insignificantly correlated with grain width; grain width was negatively correlated with grain length and length-width ratio at an extremely significant level. Path analysis showed that the direct path coefficients of grain length, grain width, and 1 000-grain weight to length-width ratio were 0.624 6, -0.555 9, and -0.015 8, respectively. [Conclusion] This study systematically analyzed the effects of sterile lines and restoring lines on the grain characters of hybrid rice, providing a theoretical basis for breeding high-quality, high-yield hybrid rice.
This paper describes the whole process of three-line hybrid pepper seed production in detail, including requirements for the seed production base, parent cultivation, and field management, and specifies the key operational techniques in seed production, such as parental impurity removal to preserve purity, pollen collection, pollination, and seed collecting essentials. This specification offers guidance for the production of hybrid pepper seed and for ensuring its purity.
Following an NCII design, the developmental genetic behavior of tiller number (TN) in three-line indica hybrid rice was studied using additive-dominance developmental genetic models and the corresponding statistical methods. The results showed that dominance effects were predominant for TN. The expression of the additive effects was affected by genotype-by-environment interaction, but the expression of the dominance effects was not. Heterosis was strongest in the middle developmental periods of TN. Additive and dominance effects were selectively expressed throughout the entire tillering developmental stage. Analysis of the genetic correlation between TN at different stages and productive panicles indicated that a close correlation appeared earlier in populations with higher heterosis than in those with less heterosis. Utilization of heterosis at the middle tillering stage might enhance the final biomass but reduce the percentage of productive panicles.
This paper presents a novel vision-based localization algorithm from three-line structures (TLS). Two types of TLS are investigated: 1) three parallel lines (Structure I); 2) two parallel lines and one orthogonal line (Structure II). From a single image of either structure, the camera pose can be uniquely computed for vision localization. The contributions of this paper are as follows: 1) both TLS structures can be used as simple and practical landmarks, which are widely available in daily life; 2) the proposed algorithm complements existing localization methods, which usually use complex landmarks, especially under partial blockage conditions; 3) compared with the general Perspective-3-Lines (P3L) problem, the camera pose can be uniquely computed from either structure. The proposed algorithm has been tested with both simulated and real image data. For a typical simulated indoor condition (75 cm landmark, less than 7.0 m landmark-to-camera distance, and 0.5-pixel image noise), the mean localization errors from Structure I and Structure II are less than 3.0 cm, and the standard deviations are less than 3.0 cm and 1.5 cm, respectively. The algorithm is further validated with two real-image experiments. Within a 7.5 m × 7.5 m indoor scene, the overall relative localization errors from Structure I and Structure II are less than 2.2% and 2.3%, respectively, at about 6.0 m distance. The results demonstrate that the algorithm works well for practical vision localization.
Due to the limitations of the spatial bandwidth product and data transmission bandwidth, the field of view, resolution, and imaging speed constrain each other in an optical imaging system. Here, a fast-zoom, high-resolution sparse compound-eye camera (CEC) based on dual-end collaborative optimization is proposed, which provides a cost-effective way to break through the trade-off among field of view, resolution, and imaging speed. At the optical end, a sparse CEC based on liquid lenses is designed, which realizes large-field-of-view imaging in real time and fast zooming within 5 ms. At the computational end, a disturbed degradation model driven super-resolution network (DDMDSR-Net) is proposed to handle the complex image degradation of actual imaging situations, achieving high-robustness, high-fidelity resolution enhancement. Based on the proposed dual-end collaborative optimization framework, the angular resolution of the CEC is enhanced from 71.6″ to 26.0″, providing a route to high-resolution imaging for array cameras without high optical hardware complexity or large data transmission bandwidth. Experiments verify the advantages of the CEC based on dual-end collaborative optimization in high-fidelity reconstruction of real scene images, kilometer-level long-distance detection, and dynamic imaging and precise recognition of targets of interest.
Observatories typically deploy all-sky cameras to monitor cloud cover and weather conditions. However, many of these cameras lack scientific-grade sensors, resulting in limited photometric precision, which makes calculating the sky-area visibility distribution via extinction measurement challenging. To address this issue, we propose the Photometry-Free Sky Area Visibility Estimation (PFSAVE) method. This method uses the standard magnitude of the faintest star observed within a given sky area to estimate visibility. By employing a per-transformation refitting optimization strategy, we achieve a high-precision coordinate transformation model with an accuracy of 0.42 pixels. HEALPix segmentation is also introduced to achieve high spatial resolution. A comprehensive analysis based on real all-sky images demonstrates that our method exhibits higher accuracy than the extinction-based method. Our method supports both manual and robotic dynamic scheduling, especially under partially cloudy conditions.
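The core PFSAVE idea, using the faintest detected star per sky area as a visibility proxy, can be sketched as follows (hypothetical data layout; the method's coordinate transformation and HEALPix indexing are omitted):

```python
def sky_area_limiting_magnitudes(detections, n_areas):
    """For each sky area, return the limiting (faintest detected) standard
    magnitude as a visibility proxy: a larger magnitude means fainter stars
    remain visible, i.e., better transparency in that area. `detections`
    is a list of (area_index, magnitude) pairs from matched catalog stars;
    areas with no detections (e.g., fully clouded) yield None."""
    limits = [None] * n_areas
    for area, mag in detections:
        # Fainter stars have numerically larger magnitudes.
        if limits[area] is None or mag > limits[area]:
            limits[area] = mag
    return limits
```

A scheduler can then prefer areas whose limiting magnitude exceeds a target's brightness, which is what makes the method useful under partially cloudy skies.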
This paper presents a high-speed and robust dual-band infrared thermal camera based on an ARM CPU. The system consists of a low-resolution long-wavelength infrared detector, a digital temperature and humidity sensor, and a CMOS sensor. Exploiting the significant contrast between face and background in thermal infrared images, this paper explores a suitable accuracy-latency tradeoff for thermal face detection and proposes a tiny, lightweight detector named YOLO-Fastest-IR. Four YOLO-Fastest-IR models (IR0 to IR3) of different scales are designed based on YOLO-Fastest. To train and evaluate these lightweight models, a multi-user low-resolution thermal face database (RGBT-MLTF) was collected, and the four networks were trained on it. Experiments demonstrate that the lightweight convolutional neural network performs well in thermal infrared face detection tasks. The proposed algorithm outperforms existing face detection methods in both positioning accuracy and speed, making it more suitable for deployment on mobile platforms or embedded devices. After the region of interest (ROI) is obtained in the infrared (IR) image, the RGB camera is guided by the thermal infrared face detection results to achieve fine positioning of the RGB face. Experimental results show that YOLO-Fastest-IR achieves a frame rate of 92.9 FPS on a Raspberry Pi 4B and successfully detects 97.4% of faces in the RGBT-MLTF test set. Ultimately, an infrared temperature measurement system with low cost, strong robustness, and high real-time performance was integrated, achieving a temperature measurement accuracy of 0.3 °C.
Blackmagic Camera for iOS version 3.2 can stream directly from an iPhone to YouTube, Vimeo, and Twitch without additional hardware or third-party software: simply choose the platform and enter the stream key to broadcast at professional quality. This update also adds support for SRT streaming, which can send high-quality, low-latency video to a Blackmagic Streaming Decoder or other SRT-capable broadcast production systems, making it well suited for remote production, news gathering, and live event coverage. In addition, external storage now provides clearer notifications: if a drive is disconnected while recording, on standby, or uploading footage, a warning appears immediately.
Camera technology advancement and deployment continue to play a vital role in modern life and production, amongst other capabilities generating relevant visual information. The challenge of optimizing the use and deployment of camera networks in various applications (e.g., surveillance, traffic monitoring, and public safety, to name but a few) has attracted considerable attention from both academia and industry. Camera planning is the first step in addressing this challenge. The surveillance objectives and scenes of a camera network dictate the modelling and optimization algorithms for camera planning. However, existing reviews have primarily focused on models or optimization algorithms, with insufficient attention given to surveillance scenes. This review aims to bridge this gap by 1) classifying surveillance scenes into urban environments and rural outdoor environments and comparing their surveillance requirements and challenges; 2) summarizing the details of camera coverage optimization in the relevant literature from the perspective of deployment scenes; and 3) proposing a new surveillance scene, Solar Insecticidal Lamps with the Internet of Things, as a case study to analyze the surveillance requirements and challenges of agricultural outdoor environments. Finally, we state the technical outlook on the physical safety of outdoor electronic devices in agricultural settings and provide insights to draw more attention and effort to this area.
This paper designs and implements a single-camera 360° panoramic imaging system based on motor-driven fisheye rotation. The system uses a stepper motor for precise angular control, enabling the camera to rotate around its optical center to capture multi-view images, thereby avoiding the parallax and geometric mismatch problems inherent in traditional multi-camera configurations. To address the strong distortion characteristics of fisheye images, an equidistant projection model is adopted for distortion correction. On this basis, a brightness normalization method combining global linear brightness correction and local illumination compensation is proposed to enhance stitching consistency. By establishing a geometric model constrained by the camera rotation and integrating cylindrical projection with cosine-weighted blending, the system achieves high-precision panoramic stitching and seamless visual transitions.
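Under the equidistant model the image radius grows linearly with the incidence angle, r = f·θ, so correction amounts to recovering θ from the fisheye radius and re-projecting onto a rectilinear plane where r′ = f·tan θ. A per-pixel sketch under that assumption (illustrative function name; a real pipeline would resample the whole image with interpolation):

```python
import math

def equidistant_to_perspective(u, v, cx, cy, f):
    """Map a fisheye pixel (u, v) to its perspective (rectilinear) location,
    assuming the equidistant projection r = f * theta with principal point
    (cx, cy) and focal length f in pixels."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return u, v                    # principal point maps to itself
    theta = r / f                      # equidistant: radius is linear in angle
    if theta >= math.pi / 2:
        return None                    # beyond the perspective-projection limit
    scale = math.tan(theta) * f / r    # r' / r
    return cx + dx * scale, cy + dy * scale
```

Pixels near the edge are pushed outward (tan θ > θ), which is exactly the barrel distortion the correction removes.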
Photomechanics is a crucial branch of solid mechanics. The localization of point targets constitutes a fundamental problem in optical experimental mechanics, with extensive applications in various missions of unmanned aerial vehicles. Localizing moving targets is crucial for analyzing their motion characteristics and dynamic properties. Reconstructing the trajectories of points from asynchronous cameras is a significant challenge. It encompasses two coupled sub-problems: trajectory reconstruction and camera synchronization. Existing methods typically address only one of these sub-problems. This paper proposes a 3D trajectory reconstruction method for point targets based on asynchronous cameras that solves both sub-problems simultaneously. First, we extend the trajectory intersection method to asynchronous cameras to overcome the limitation of traditional triangulation, which requires camera synchronization. Second, we develop models for camera temporal information and target motion, based on imaging mechanisms and target dynamics characteristics. The parameters are optimized simultaneously to achieve trajectory reconstruction without accurate time parameters. Third, we optimize the camera rotations alongside the camera time information and target motion parameters, using tighter and more continuous constraints on the moving points. The reconstruction accuracy is significantly improved, especially when the camera rotations are inaccurate. Finally, simulated and real-world experimental results demonstrate the feasibility and accuracy of the proposed method. The real-world results indicate that the proposed algorithm achieves a localization error of 112.95 m at an observation distance of 15-20 km.
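Synchronous triangulation, which the paper generalizes, reduces to finding the point closest to two viewing rays; in the asynchronous setting the per-camera time parameters would additionally be optimized. A midpoint-triangulation sketch of the synchronous building block (illustrative, not the proposed method):

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint triangulation: the 3D point minimizing the summed squared
    distance to two rays o + s*d (camera center o, viewing direction d).
    Returns None for parallel rays, which have no unique closest point."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = [a - b for a, b in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        return None
    # Closed-form ray parameters of the two mutually closest points.
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [o + s * v for o, v in zip(o1, d1)]
    p2 = [o + t * v for o, v in zip(o2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

With asynchronous cameras the two rays do not observe the target at the same instant, which is why the paper jointly fits a motion model and time offsets instead of intersecting rays directly.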
In various imaging applications such as autonomous vehicles and drones, autofocus lenses are indispensable for capturing clear images. However, conventional camera calibration methods typically rely either on processing multiple images at a fixed focal length or on detecting multi-plane markers in a single image and then applying multi-image calibration models. This paper proposes a flexible and accurate calibration approach that extracts subpixel saddle points from a single image containing three non-coplanar calibration boards. To compute accurate homography matrices for the three boards, outliers are removed by eliminating chessboard points that deviate from the grid lines fitted to their row and column positions. Initial estimates of the intrinsic parameters and the poses of the three planar chessboards are obtained from the three homography matrices in combination with Zhang's calibration method. During parameter refinement, a multi-objective optimization function is constructed, incorporating three error terms: (1) the reprojection error of the inlier grid points; (2) a mechanism-driven error derived from the relationship between homography matrices and camera parameters; and (3) a cross-planar linearity constraint error, which preserves the pre-imaging collinearity of any five points across different planes after projection. For weight selection in the optimization process, confidence intervals of the detected grid points are analyzed by horizontally rotating the reprojection lines to reduce the bias introduced by line slope. The optimal weights are determined by minimizing the number of points whose confidence intervals do not intersect the reprojected lines. When multiple candidates yield similar reprojection performance, the parameter set with the smallest reprojection error is selected as the final result. This method efficiently estimates both intrinsic and extrinsic camera parameters. Simulations and real-world experiments validate the high precision and effectiveness of the proposed approach. Our technique is straightforward, practical, and holds significant theoretical and practical value for rapid and reliable camera calibration.
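The first error term, the reprojection error of inlier grid points under a planar homography, can be sketched as follows (hypothetical function name; the paper's full multi-objective function also includes the mechanism-driven and cross-planar linearity terms, which are not shown):

```python
import math

def reprojection_error(H, world_pts, image_pts):
    """Mean reprojection error of a planar homography H (3x3 nested list):
    each planar board point (X, Y) is mapped through H in homogeneous
    coordinates and compared with its detected image location (u, v)."""
    total = 0.0
    for (X, Y), (u, v) in zip(world_pts, image_pts):
        x = H[0][0] * X + H[0][1] * Y + H[0][2]
        y = H[1][0] * X + H[1][1] * Y + H[1][2]
        w = H[2][0] * X + H[2][1] * Y + H[2][2]
        total += math.hypot(x / w - u, y / w - v)   # Euclidean pixel error
    return total / len(world_pts)
```

During refinement this quantity (one per board) is weighted against the other two error terms; the weight search described above then picks the combination whose reprojected lines stay inside the grid points' confidence intervals.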
Funding: Sponsored by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B) and the Jilin Province Science and Technology Development Plan (Grant No. 20130522107JH).
Funding: Supported by the 2024 Research Fund of the University of Ulsan.
Funding: Supported by the Jilin Science and Technology Development Plan (20240101029JJ) for the study of synchronized high-speed detection of surface shape and defects in the grinding stage of complex surfaces (KLMSZZ202305); the National Major Research Instrument Development Project (62127901) for the high-precision, wide-dynamic, large-aperture optical inspection system for fine astronomical observation; the National Key R&D Program (2022YFB3403405) for ultra-smooth manufacturing technology of large-diameter complex curved surfaces; and the International Cooperation Project (2025010157) for research on the key technology of rapid synchronous detection of surface shape and subsurface defects in the grinding stage of large-diameter complex surfaces. The Key Laboratory of Optical System Advanced Manufacturing Technology, Chinese Academy of Sciences (2022KLOMT02-04) also supported this study.
Funding: supported by the National Natural Science Foundation of China (Grant No. 12572210), the Scientific Instrument Developing Project of Shenzhen University (Grant Nos. 2023YQ011, 2024YQ001), and the Shenzhen Science and Technology Innovation Commission Project, Stable Support (General Project) (Grant No. 20231120175055001).
Abstract: Vascular abnormalities are closely associated with the pathogenesis and progression of numerous diseases, such as thrombosis, tumors, and diabetes. Blood flow velocity serves as a critical biomarker for evaluating perfusion status. Quantitative detection of full-field blood flow variations in lesion areas holds significant scientific and clinical value for pathological studies, diagnosis, and intraoperative monitoring of related diseases. While laser speckle contrast imaging (LSCI) enables full-field blood flow visualization, its reliance on frame-based sensors necessitates handling massive data volumes, leading to inherent trade-offs among spatiotemporal resolution, real-time performance, and quantitative capability. Leveraging the asynchronous dynamic sensing, high temporal sampling rate, and low data redundancy of event cameras, this study proposes a quantitative blood flow imaging method termed laser speckle event imaging (LSEI). Experiments using off-the-shelf event cameras demonstrate that LSEI achieves real-time blood flow imaging with minimal computational overhead compared to frame-based LSCI. Furthermore, we investigate the relationship between event data streams and flow velocity through spatial-temporal autocorrelation analysis, enabling quantitative measurements without compromising temporal or spatial resolution. In in vivo imaging experiments of mouse ear blood flow, LSEI exhibits superior imaging detail and real-time performance over conventional methods. The proposed approach holds promise as an efficient tool for diagnosis, therapeutic evaluation, and research on vascular-related diseases.
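For context, the frame-based LSCI baseline that LSEI is compared against computes a local speckle contrast K = sigma/mu over a sliding window; faster flow blurs the speckle and lowers K. The sketch below shows only that conventional baseline, not the event-based method of the paper.

```python
import numpy as np

def speckle_contrast_map(frame, win=5):
    """Frame-based LSCI: local speckle contrast K = sigma/mu over a sliding
    window. Flow speed is often summarized as 1/K^2; this is the classic
    frame-based computation, not the event-camera LSEI method."""
    frame = frame.astype(float)
    pad = win // 2
    h, w = frame.shape
    K = np.zeros_like(frame)
    for y in range(pad, h - pad):
        for x in range(pad, w - pad):
            patch = frame[y - pad:y + pad + 1, x - pad:x + pad + 1]
            mu, sigma = patch.mean(), patch.std()
            K[y, x] = sigma / mu if mu > 0 else 0.0
    return K
```

The per-pixel window loop makes the frame-based data volume and compute cost apparent, which is exactly the overhead the event-based approach is designed to avoid.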
Funding: supported by the Beijing Natural Science Foundation (Grant No. L241012) and the National Natural Science Foundation of China (Grant No. 62572468).
Abstract: LiDAR and cameras are two of the most common sensors used in robot perception, autonomous driving, augmented reality, and virtual reality, where they are widely used to perform tasks such as odometry estimation and 3D reconstruction. Fusing the information from these two sensors can significantly increase the robustness and accuracy of these perception tasks. Extrinsic calibration between cameras and LiDAR is a fundamental prerequisite for multimodal systems. Recently, extensive studies have been conducted on the calibration of extrinsic parameters. Although several calibration methods facilitate sensor fusion, a comprehensive summary for researchers and, especially, non-expert users is lacking. Thus, we present an overview of extrinsic calibration and discuss diverse calibration methods from the perspective of calibration system design. Based on the calibration information sources, this study classifies these methods as target-based or targetless. For each type of calibration method, further classification is performed according to the types of features or constraints used in the calibration process, and their detailed implementations and key characteristics are introduced. Thereafter, calibration-accuracy evaluation methods are presented. Finally, we comprehensively compare the advantages and disadvantages of each calibration method and suggest directions for practical applications and future research.
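The quantity being calibrated here is the rigid transform (R, t) that maps LiDAR points into the camera frame; a common sanity check for any calibration result is to project the points into the image and inspect the overlay. A minimal sketch of that projection (hypothetical function name, pinhole model without lens distortion):

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project LiDAR points into the camera image using extrinsics (R, t)
    and intrinsics K: p_cam = R p + t, then pixel = K p_cam / depth.
    Points behind the camera are flagged so they can be excluded."""
    p_cam = (R @ points_lidar.T).T + t      # (N, 3) in the camera frame
    in_front = p_cam[:, 2] > 0              # keep only points with z > 0
    uv = (K @ p_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective divide by depth
    return uv, in_front
```

Visually checking that projected LiDAR edges align with image edges is one of the informal accuracy checks that the formal evaluation methods surveyed above make rigorous.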
Abstract: [Objective] This study aimed to understand the genetic dynamics of three-line hybrid rice and to explore the respective effects of the sterile line and the restoring line on grain characters of hybrid rice. [Method] Four three-line sterile lines and 27 restoring lines (cultivars) commonly cultivated in the Central China region were used as experimental materials in a 4 × 27 NCII cross design, and the grain characters of three-line hybrid rice were analyzed at the genetic and correlation levels. [Result] Four characters, grain length, grain width, 1000-grain weight, and length-width ratio, were dominated by additive gene effects; these four characters were influenced by both the male and female parents, but the effect of the male parent was relatively larger. Grain length, grain width, 1000-grain weight, and length-width ratio all had high broad-sense heritabilities (99.65%, 98.31%, 95.27%, and 98.81%, respectively). Correlation analysis showed that grain length was positively correlated with 1000-grain weight and length-width ratio at an extremely significant level; 1000-grain weight was positively correlated with grain length and length-width ratio at an extremely significant level and insignificantly correlated with grain width; grain width was negatively correlated with grain length and length-width ratio at an extremely significant level. Path analysis showed that the direct path coefficients of grain length, grain width, and 1000-grain weight to length-width ratio were 0.6246, -0.5559, and -0.0158, respectively. [Conclusion] This study systematically analyzed the effects of the sterile line and the restoring line on grain characters of hybrid rice, providing a theoretical basis for breeding high-quality, high-yield hybrid rice.
Funding: Supported by the Planning Subject of the Twelfth Five-Year Plan in National Science and Technology for Rural Development in China (2011BAD35B07); the Job Subsidies for Experts in Staple Vegetable Breeding of the Vegetable Industry of Hunan Province; the Twelfth Five-Year Plan of the National Science and Technology Support Plan (2012BAD02B02); and the Special Fund for Agro-Scientific Research in the Public Interest (201303028).
Abstract: This paper describes the whole process of three-line hybrid pepper seed production in detail, including requirements for the seed production base, parent cultivation, and field management, and specifies the key operation techniques in seed production, such as parental impurity removal to preserve purity, pollen collection, pollination, and seed collecting. This specification provides guidance for the production of hybrid pepper seed and for ensuring its purity.
Abstract: Following an NCI design, the developmental genetic behavior of tiller number (TN) in three-line indica hybrid rice was studied using additive-dominance developmental genetic models and the corresponding statistical methods. The results showed that dominance effects were predominant for TN. The expression of the additive effects was affected by genotype-environment interaction, but the expression of the dominance effects was not. Heterosis was strongest in the middle developmental periods of TN. Additive and dominance effects were selectively expressed throughout the entire tillering developmental stage. Analysis of the genetic correlation between TN at different stages and the number of productive panicles indicated that a close correlation appeared earlier in populations with higher heterosis than in those with less heterosis. Utilization of heterosis at the middle tillering stage might enhance the final biomass but reduce the percentage of productive panicles.
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 51208168) and a Research Grant from the Department of Education of Liaoning Province (Grant No. L2010060).
Abstract: This paper presents a novel vision-based localization algorithm using three-line structures (TLS). Two types of TLS are investigated: 1) three parallel lines (Structure I); 2) two parallel lines and one orthogonal line (Structure II). From a single image of either structure, the camera pose can be uniquely computed for vision localization. The contributions of this paper are as follows: 1) both TLS structures can be used as simple and practical landmarks, which are widely available in daily life; 2) the proposed algorithm complements existing localization methods, which usually use complex landmarks, especially under partial blockage conditions; 3) compared with the general Perspective-3-Lines (P3L) problem, the camera pose can be uniquely computed from either structure. The proposed algorithm has been tested with both simulated and real image data. For a typical simulated indoor condition (75 cm landmark, less than 7.0 m landmark-to-camera distance, and 0.5-pixel image noise), the mean localization errors from Structure I and Structure II are less than 3.0 cm, and the standard deviations are less than 3.0 cm and 1.5 cm, respectively. The algorithm is further validated with two real-image experiments. Within a 7.5 m × 7.5 m indoor environment, the overall relative localization errors from Structure I and Structure II are less than 2.2% and 2.3%, respectively, at about 6.0 m distance. The results demonstrate that the algorithm works well for practical vision localization.
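A standard first step when working with parallel-line landmarks like Structure I/II is to intersect the imaged lines at their vanishing point v, since the 3D direction of the parallel lines is then d ∝ K⁻¹v. The sketch below shows only that generic step (hypothetical function names), not the paper's full unique-pose solver.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersect two image lines in homogeneous coordinates. Each line is
    given by two pixel points; lines and intersections are computed with
    cross products. For a pair of imaged parallel lines the result is their
    vanishing point, whose back-projection K^-1 v gives the 3D direction."""
    def hline(p, q):  # homogeneous line through two pixels
        return np.cross([*p, 1.0], [*q, 1.0])
    v = np.cross(hline(*line_a), hline(*line_b))
    return v[:2] / v[2]  # de-homogenize to pixel coordinates
```

From this direction, plus the known landmark dimensions (e.g., the 75 cm spacing used in the simulations), the remaining pose unknowns can be constrained, which is where the structure-specific derivation of the paper takes over.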
Funding: financial support from the National Natural Science Foundation of China (Grant Nos. U23A20368 and 62175006) and the Academic Excellence Foundation of BUAA for PhD Students.
Abstract: Due to the limitations of the spatial bandwidth product and data transmission bandwidth, the field of view, resolution, and imaging speed constrain one another in an optical imaging system. Here, a fast-zoom, high-resolution sparse compound-eye camera (CEC) based on dual-end collaborative optimization is proposed, which provides a cost-effective way to break through the trade-off among field of view, resolution, and imaging speed. At the optical end, a sparse CEC based on liquid lenses is designed, which realizes large-field-of-view imaging in real time and fast zooming within 5 ms. At the computational end, a disturbed-degradation-model-driven super-resolution network (DDMDSR-Net) is proposed to deal with the complex image degradation encountered in practical imaging, achieving high-robustness, high-fidelity resolution enhancement. Based on the proposed dual-end collaborative optimization framework, the angular resolution of the CEC is enhanced from 71.6" to 26.0", providing a way to realize high-resolution imaging for array cameras without high optical hardware complexity or data transmission bandwidth. Experiments verify the advantages of the CEC in high-fidelity reconstruction of real scene images, kilometer-level long-distance detection, and dynamic imaging and precise recognition of targets of interest.
Funding: supported by the Natural Science Foundation of Jilin Province (20210101468JC), the Chinese Academy of Sciences and Local Government Cooperation Project (2023SYHZ0027, 23SH04), and the National Natural Science Foundation of China (12273063 and 12203078).
Abstract: Observatories typically deploy all-sky cameras to monitor cloud cover and weather conditions. However, many of these cameras lack scientific-grade sensors, resulting in limited photometric precision, which makes calculating the sky-area visibility distribution via extinction measurement challenging. To address this issue, we propose the Photometry-Free Sky Area Visibility Estimation (PFSAVE) method. This method uses the standard magnitude of the faintest star observed within a given sky area to estimate visibility. By employing a per-transformation refitting optimization strategy, we achieve a high-precision coordinate transformation model with an accuracy of 0.42 pixels. HEALPix segmentation is also introduced to achieve high spatial resolution. Comprehensive analysis based on real all-sky images demonstrates that our method exhibits higher accuracy than the extinction-based method. Our method supports both manual and robotic dynamic scheduling, especially under partially cloudy conditions.
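The core idea, estimating visibility per sky tile from the faintest star detected there rather than from photometric extinction, can be sketched in a few lines. This is a loose illustration with hypothetical names and a simplistic linear score, not the PFSAVE scoring actually used.

```python
def sky_visibility(detections, clear_sky_limit=6.0):
    """Photometry-free visibility sketch: for each sky tile, the standard
    magnitude of the faintest detected star proxies the limiting magnitude;
    visibility is scored against the camera's clear-sky limit (an assumed
    linear score here). Tiles with no detections count as fully obscured."""
    visibility = {}
    for tile, mags in detections.items():
        if not mags:
            visibility[tile] = 0.0          # nothing visible: cloud-covered
        else:
            # larger magnitude = fainter star = deeper, clearer sky
            visibility[tile] = min(max(mags) / clear_sky_limit, 1.0)
    return visibility
```

The tiles in the real method come from HEALPix segmentation, which keeps them equal-area across the sky, and the per-tile star positions rely on the 0.42-pixel coordinate transformation model mentioned above.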
Funding: Supported by the Fundamental Research Funds for the Central Universities (2024300443) and the Natural Science Foundation of Jiangsu Province (BK20241224).
Abstract: This paper presents a high-speed, robust dual-band infrared thermal camera based on an ARM CPU. The system consists of a low-resolution long-wavelength infrared detector, a digital temperature and humidity sensor, and a CMOS sensor. In view of the significant contrast between face and background in thermal infrared images, this paper explores a suitable accuracy-latency trade-off for thermal face detection and proposes a tiny, lightweight detector named YOLO-Fastest-IR. Four YOLO-Fastest-IR models (IR0 to IR3) with different scales are designed based on YOLO-Fastest. To train and evaluate these lightweight models, a multi-user low-resolution thermal face database (RGBT-MLTF) was collected, and the four networks were trained on it. Experiments demonstrate that the lightweight convolutional neural network performs well in thermal infrared face detection tasks. The proposed algorithm outperforms existing face detection methods in both positioning accuracy and speed, making it more suitable for deployment on mobile platforms or embedded devices. After obtaining the region of interest (ROI) in the infrared (IR) image, the RGB camera is guided by the thermal infrared face detection results to achieve fine positioning of the RGB face. Experimental results show that YOLO-Fastest-IR achieves a frame rate of 92.9 FPS on a Raspberry Pi 4B and successfully detects 97.4% of the faces in the RGBT-MLTF test set. Ultimately, an infrared temperature measurement system with low cost, strong robustness, and high real-time performance was integrated, achieving a temperature measurement accuracy of 0.3 °C.
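The IR-to-RGB guidance step, taking a face box from the low-resolution thermal frame and handing it to the higher-resolution RGB sensor for fine positioning, reduces to a coordinate mapping between the two sensors. The sketch below assumes roughly aligned sensors with a known pixel offset; a deployed system would more likely use a calibrated homography, and the function name is hypothetical.

```python
def map_ir_roi_to_rgb(ir_box, ir_size, rgb_size, offset=(0, 0)):
    """Scale a face box (x, y, w, h) detected in the low-resolution IR frame
    into RGB pixel coordinates so the RGB camera can refine the face
    position. Assumes aligned sensors plus a fixed pixel offset."""
    sx = rgb_size[0] / ir_size[0]   # horizontal resolution ratio
    sy = rgb_size[1] / ir_size[1]   # vertical resolution ratio
    x, y, w, h = ir_box
    return (int(x * sx + offset[0]), int(y * sy + offset[1]),
            int(w * sx), int(h * sy))
```

Running the detector only on the tiny IR frame and searching the RGB frame only inside the mapped ROI is what keeps the pipeline fast enough for embedded targets like the Raspberry Pi 4B cited above.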
Abstract: Blackmagic Camera for iOS version 3.2 can stream directly from an iPhone to YouTube, Vimeo, and Twitch without additional hardware or third-party software; simply select the platform and enter the stream key to broadcast at professional quality. This update also adds support for SRT streaming, so high-quality, low-latency video can be sent to a Blackmagic Streaming Decoder or other SRT-capable broadcast production systems, making it well suited to remote production, news gathering, and live event coverage. In addition, clearer notifications now appear when using external storage: if a drive is disconnected while recording, on standby, or uploading footage, a warning is shown immediately.
Funding: supported by the National Natural Science Foundation of China (62072248).
Abstract: Camera technology advancement and deployment continue to play a vital role in modern life and production, among other capabilities generating relevant visual information. The challenge of optimizing the use and deployment of camera networks in various applications (e.g., surveillance, traffic monitoring, and public safety, to name but a few) has attracted considerable attention from both academia and industry. Camera planning is the first step in addressing this challenge. The surveillance objectives and scenes of a camera network dictate the modelling and optimization algorithms for camera planning. However, existing reviews have primarily focused on models or optimization algorithms, with insufficient attention given to surveillance scenes. This review aims to bridge this gap by 1) classifying surveillance scenes into urban environments and rural outdoor environments and comparing their surveillance requirements and challenges; 2) summarizing the details of camera coverage optimization in the relevant literature from the perspective of deployment scenes; and 3) proposing a new surveillance scene, Solar Insecticidal Lamps with the Internet of Things, as a case study to analyze the surveillance requirements and challenges of agricultural outdoor environments. Finally, we present a technical outlook on the physical safety of outdoor electronic devices in agricultural settings and provide insights to draw more attention and effort to this area.
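At the heart of the coverage-optimization models surveyed here is a primitive predicate: is a given target point covered by a given camera? On a 2D site plan this is usually a range test plus an angular test against the camera's heading and field of view. A minimal sketch of that predicate (hypothetical names, sector model without occlusion):

```python
import math

def covers(camera, point, fov_deg, max_range):
    """2D sector coverage test used in camera-planning models: a point is
    covered when it is within the camera's range and its bearing lies within
    half the field of view of the camera's heading. camera = (x, y, heading
    in degrees); occlusion by obstacles is ignored in this sketch."""
    cx, cy, heading = camera
    dx, dy = point[0] - cx, point[1] - cy
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - heading + 180) % 360 - 180   # wrap to [-180, 180)
    return abs(diff) <= fov_deg / 2
```

Scene-specific models then layer occlusion, terrain, and resolution requirements on top of this predicate, which is where urban and rural outdoor deployments diverge.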
Funding: Graduate Innovation Ability Training Program of the Hebei Provincial Department of Education, 2025 (Project No. CXZZSS2025095).
Abstract: This paper designs and implements a single-camera 360° panoramic imaging system based on motor-driven fisheye rotation. The system uses a stepper motor for precise angular control, enabling the camera to rotate around its optical center to capture multi-view images, thereby avoiding the parallax and geometric mismatch problems inherent in traditional multi-camera configurations. To address the strong distortion characteristic of fisheye images, an equidistant projection model is adopted for distortion correction. On this basis, a brightness normalization method combining global linear brightness correction and local illumination compensation is proposed to enhance stitching consistency. By establishing a geometric model constrained by camera rotation and integrating cylindrical projection with cosine-weighted blending, the system achieves high-precision panoramic stitching and seamless visual transitions.
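The equidistant projection model mentioned above maps a ray at incidence angle theta to a fisheye radius r = f·theta, whereas a rectilinear view needs r' = f·tan(theta); undistortion is the per-pixel remapping between the two. A per-pixel sketch (hypothetical function name, single focal length, distortion-free equidistant model):

```python
import math

def equidistant_to_perspective(u, v, cx, cy, f):
    """Undistort one fisheye pixel under the equidistant model r = f*theta:
    recover the incidence angle theta from the fisheye radius, then
    re-project it with the perspective model r' = f*tan(theta)."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0:
        return float(u), float(v)          # principal point is unchanged
    theta = r / f                          # equidistant: radius linear in angle
    scale = (f * math.tan(theta)) / r      # radial stretch factor
    return cx + dx * scale, cy + dy * scale
```

In practice the inverse map is tabulated once into a lookup grid and applied per frame, after which the corrected views are warped onto the cylinder for stitching.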
Funding: supported by the Hunan Provincial Natural Science Foundation for Excellent Young Scholars (Grant No. 2023JJ20045) and the National Natural Science Foundation of China (Grant No. 12372189).
Abstract: Photomechanics is a crucial branch of solid mechanics. The localization of point targets constitutes a fundamental problem in optical experimental mechanics, with extensive applications in various missions of unmanned aerial vehicles. Localizing moving targets is crucial for analyzing their motion characteristics and dynamic properties. Reconstructing the trajectories of points from asynchronous cameras is a significant challenge. It encompasses two coupled sub-problems: trajectory reconstruction and camera synchronization. Present methods typically address only one of these sub-problems. This paper proposes a 3D trajectory reconstruction method for point targets based on asynchronous cameras that solves both sub-problems simultaneously. First, we extend the trajectory intersection method to asynchronous cameras to remove the limitation of traditional triangulation, which requires camera synchronization. Second, we develop models for camera temporal information and target motion based on imaging mechanisms and target dynamics; the parameters are optimized simultaneously to achieve trajectory reconstruction without accurate time parameters. Third, we optimize the camera rotations alongside the camera time information and target motion parameters, using tighter and more continuous constraints on moving points. The reconstruction accuracy is significantly improved, especially when the camera rotations are inaccurate. Finally, simulated and real-world experimental results demonstrate the feasibility and accuracy of the proposed method. The real-world results indicate that the proposed algorithm achieves a localization error of 112.95 m at an observation distance of 15-20 km.
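The "traditional triangulation" whose synchronization requirement the paper removes is the classic two-view DLT: each camera's observation of the same instant contributes two linear constraints on the homogeneous 3D point, solved by SVD. The sketch below shows only that synchronous baseline, not the asynchronous trajectory-intersection extension.

```python
import numpy as np

def triangulate_dlt(P1, P2, uv1, uv2):
    """Two-view triangulation by DLT. For projection matrix P and pixel
    (u, v), the constraint u*(P[2]·X) = P[0]·X (and likewise for v) gives
    two linear equations per view; the homogeneous point X is the null
    vector of the stacked 4x4 system. Requires the two observations to be
    simultaneous, which is exactly the assumption asynchronous cameras break."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]        # de-homogenize
```

The asynchronous setting replaces the single point X with a parameterized trajectory X(t) and adds per-camera time offsets as unknowns, so rays from different instants can still be intersected.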
Funding: supported by the Research on the Reform of Curriculum Assessment Methods for College Mathematics Platform Courses (No. 53111104016).
Abstract: In various imaging applications such as autonomous vehicles and drones, autofocus lenses are indispensable for capturing clear images. However, conventional camera calibration methods typically rely either on processing multiple images at a fixed focal length or on detecting multi-plane markers in a single image and then applying multi-image calibration models. This paper proposes a flexible and accurate calibration approach that extracts subpixel saddle points from a single image containing three non-coplanar calibration boards. To compute accurate homography matrices for the three boards, outliers are removed by eliminating chessboard points that deviate from the fitted grid lines according to their row and column positions. Initial estimates of the intrinsic parameters and the poses of the three planar chessboards are obtained using the three homography matrices in combination with Zhang's calibration method. During parameter refinement, a multi-objective optimization function is constructed, incorporating three error terms: (1) the reprojection error of the inlier grid points; (2) a mechanism-driven error derived from the relationship between homography matrices and camera parameters; and (3) a cross-planar linearity constraint error, which preserves the pre-imaging collinearity of any five points across different planes after projection. For weight selection in the optimization process, confidence intervals of the detected grid points are analyzed by horizontally rotating the reprojection lines to reduce the bias introduced by line slope. The optimal weights are determined by minimizing the number of points whose confidence intervals do not intersect the reprojected lines. When multiple candidates yield similar reprojection performance, the parameter set with the smallest reprojection error is selected as the final result. This method efficiently estimates both intrinsic and extrinsic camera parameters. Simulations and real-world experiments validate the high precision and effectiveness of the proposed approach. Our technique is straightforward, practical, and holds significant theoretical and practical value for rapid and reliable camera calibration.
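The per-board homographies that seed Zhang's method in this pipeline are conventionally estimated by the Direct Linear Transform: each point correspondence contributes two rows of a linear system whose SVD null vector is the flattened 3x3 homography. A minimal sketch of that standard step (without the outlier rejection or normalization a robust implementation would add):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 point pairs)
    by the DLT: each pair (x, y) -> (u, v) yields two rows of A from the
    constraint dst x H src = 0; H is the SVD null vector of A, scaled so
    H[2, 2] = 1. Standard starting point for Zhang-style calibration."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With one such homography per calibration board, Zhang's constraints on the image of the absolute conic yield the intrinsic matrix, which the multi-objective refinement described above then polishes.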