Funding: Graduate Innovation Ability Training Program of the Hebei Provincial Department of Education, 2025 (Project No.: CXZZSS2025095).
Abstract: This paper designs and implements a single-camera 360° panoramic imaging system based on motor-driven fisheye rotation. The system utilizes a stepper motor for precise angular control, enabling the camera to rotate around its optical center to capture multi-view images, thereby avoiding the parallax and geometric mismatch problems inherent in traditional multi-camera configurations. To address the strong distortion characteristic of fisheye images, an equidistant projection model is adopted for distortion correction. On this basis, a brightness normalization method combining global linear brightness correction and local illumination compensation is proposed to enhance stitching consistency. By establishing a geometric model constrained by the camera rotation and integrating cylindrical projection with cosine-weighted blending, the system achieves high-precision panoramic stitching and seamless visual transitions.
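The equidistant fisheye model used above maps an incidence angle θ (from the optical axis) to image radius r = f·θ. As a minimal sketch of the correction idea — not the authors' implementation, with focal lengths and the remap convention assumed for illustration — a rectification step can precompute where each output pixel samples the fisheye image:

```python
import numpy as np

def equidistant_to_perspective(f_fish, f_persp, w, h):
    """Build a remap grid from a w x h perspective view back into an
    equidistant-projection fisheye image (r = f_fish * theta)."""
    # Output pixel coordinates relative to the image center.
    u, v = np.meshgrid(np.arange(w) - (w - 1) / 2,
                       np.arange(h) - (h - 1) / 2)
    # Incidence angle of the perspective ray through each pixel.
    rho = np.sqrt(u**2 + v**2)
    theta = np.arctan2(rho, f_persp)
    # Equidistant model: fisheye radius grows linearly with the angle.
    r = f_fish * theta
    scale = np.divide(r, rho, out=np.zeros_like(rho), where=rho > 0)
    # Offsets from the fisheye principal point; add it before sampling
    # (e.g. with cv2.remap) to undistort.
    return u * scale, v * scale
```

The same grid can be reused for every frame since the rotation geometry is fixed, so only the final sampling runs per image.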
Abstract: [Objective/Significance] Maize is a major food crop, and the ear, as a key phenotypic trait, reflects plant growth status and potential yield through its shape, size, and color. Traditional in-field maize ear inspection relies on manual labor, which is inefficient and labor-intensive. With the spread of high-density planting, the maize canopy has become increasingly dense; manually entering the field to measure ears is not only difficult but also prone to causing mechanical damage to plants, further limiting data accuracy and representativeness. Efficient automated detection technology is therefore urgently needed. [Methods] To achieve efficient and accurate detection of maize ears in complex field environments, this paper proposes CornYOLO, a model based on an improved YOLO11n (You Only Look Once 11). Images were collected with a panoramic camera mounted on an unmanned ground vehicle, a novel acquisition scheme, and a high-quality field dataset was constructed. On this basis, three core model improvements are proposed: 1) a Cross Stage Partial Network with Dynamic Pointwise Spatial Attention (C2PDA) to improve robustness in recognizing occluded targets; 2) a Feature Refinement Module (FRM) to strengthen multi-scale target detection; and 3) a Unified Intersection over Union (UIoU) loss function to refine bounding-box regression accuracy. The work provides an end-to-end solution, from data acquisition to intelligent recognition, for high-throughput acquisition of crop phenotypes in the field. [Results and Discussion] CornYOLO showed excellent detection performance in complex field environments, reaching an mAP@50 of 89.3% on the validation set and improving the F1 score by 2.5 percentage points over YOLO11n. Compared with the other baseline models, its mAP@50 improved markedly, by up to 12.6 percentage points. Ablation experiments showed that C2PDA, FRM, and UIoU all contributed positively to the performance gain, with C2PDA being the most critical. [Conclusions] The CornYOLO model can efficiently and accurately identify maize ears in the field, providing reliable technical support for maize breeding phenotype analysis and yield prediction, and advancing the intelligent extraction of maize ear information.
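UIoU's exact formulation is given in the cited work; purely as a hedged illustration of the quantity such bounding-box regression losses build on, the plain IoU term and its loss form can be computed as:

```python
def box_iou(a, b):
    """IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def iou_loss(pred, target):
    """1 - IoU: zero for a perfect box, approaching 1 as overlap vanishes."""
    return 1.0 - box_iou(pred, target)
```

Variants such as UIoU add further penalty terms (e.g. on box center and scale) so that gradients remain informative even for poorly overlapping predictions.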
Abstract: Drowning is a leading cause of accidental death, and limited surveillance coverage and poor real-time performance are major factors behind low rescue rates. Intelligent water-rescue systems are one of the main means of detecting and preventing drowning incidents in time; this paper builds a complete experimental platform for such a system based on a panoramic camera. First, the YOLOv5 algorithm is used to recognize targets on the water, achieving a mean average precision (mAP) of 82.43%. Second, a sliding-window software ring-scan technique is proposed to address YOLOv5's poor recognition performance on high-resolution panoramic images; the AP for key targets, namely rescued persons, the Dolphin 1 (海豚1号) rescue robot, and drowning persons, reaches 75%, 64%, and 65%, respectively. Finally, a mechanism linking the panorama to a PTZ dome camera is designed to localize targets. Field tests show that, compared with a dome camera alone, the proposed scheme shortens detection time to 1-3 s, and compared with a fixed bullet camera it effectively expands the monitored area, ensuring rescue efficiency.
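The paper's sliding-window ring scan is not specified in detail here; a hypothetical sketch of the windowing idea — crops of detector-friendly width sliding around the panorama, with wrap-around at the 360° seam so no target is cut at the image border — might look like:

```python
import numpy as np

def ring_scan_windows(pano_w, win_w, stride):
    """Yield (start, end) column ranges of sliding windows over a
    360-degree panorama; ranges may run past pano_w and are taken
    modulo the width when cropping."""
    for s in range(0, pano_w, stride):
        yield s, s + win_w

def crop_wrapped(pano, start, end):
    """Crop columns [start, end) with horizontal wrap-around, so a
    window straddling the seam is stitched from both image edges."""
    w = pano.shape[1]
    cols = np.arange(start, end) % w
    return pano[:, cols]
```

Each crop would be passed to the detector at its native training resolution, and any box coordinates mapped back by adding `start` modulo the panorama width.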
Funding: Supported in part by the China National Key Research and Development Program under Grant 2022YFB3903804.
Abstract: Panoramic camera-based Visual-Inertial Odometry (VIO) systems play a crucial role in robotic navigation, autonomous driving, and virtual reality applications, owing to their large field of view and enhanced localization capabilities. However, the nonlinear distortions caused by the lack of geometric consistency in projection models for panoramic images pose significant challenges to feature extraction and tracking algorithms. In this paper, we present Geotri-VIO, a novel VIO system that addresses these challenges using a multi-prism projection model. By constructing the multi-prism projection planes such that each face is tangent to the inherent projection sphere of the panoramic camera, the proposed model ensures strict geometric consistency in each projection plane while maintaining global geometric consistency, which is supported by mathematical proof. Additionally, we evaluate the impact of increasing the number of projection planes and demonstrate that triangular prism projection outperforms other multi-prism projection models. To validate its effectiveness, Geotri-VIO is tested on public datasets. Experimental results show that the triangular prism projection significantly improves the tracking accuracy of both point and line features, thereby enhancing the overall localization performance of the VIO system.
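The tangent-face construction can be sketched geometrically. The following is a hypothetical illustration under assumed conventions (a vertical prism around a unit projection sphere), not the Geotri-VIO code: each face is the plane n·x = 1 for an outward unit normal n, which touches the unit sphere at n, and a bearing ray is projected onto the face it points at most directly:

```python
import numpy as np

def face_normals(n_faces=3):
    """Outward unit normals of a vertical n-sided prism whose faces
    are tangent to the unit projection sphere (n=3: triangular prism)."""
    ang = 2 * np.pi * np.arange(n_faces) / n_faces
    return np.stack([np.cos(ang), np.sin(ang), np.zeros(n_faces)], axis=1)

def project_to_prism(ray, normals):
    """Map a bearing ray to its point on the nearest tangent face:
    pick the face whose normal best aligns with the ray, then intersect
    the ray with that face's plane n.x = 1. Rays with no horizontal
    component toward any face (e.g. straight up) are not handled here."""
    ray = np.asarray(ray, dtype=float)
    dots = normals @ ray
    i = int(np.argmax(dots))
    return ray / dots[i], i
```

Because each plane touches the sphere rather than cutting it, straight lines in the scene stay straight within a face, which is the per-face geometric consistency the abstract refers to.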