Ecological monitoring vehicles are equipped with a range of sensors and monitoring devices designed to gather data on ecological and environmental factors. These vehicles are crucial in various fields, including environmental science research, ecological and environmental monitoring projects, disaster response, and emergency management. A key method employed in these vehicles for achieving high-precision positioning is LiDAR (light detection and ranging)-Visual Simultaneous Localization and Mapping (SLAM). However, maintaining high-precision localization in complex scenarios, such as degraded environments or when dynamic objects are present, remains a significant challenge. To address this issue, we integrate both semantic and texture information from LiDAR and cameras to enhance the robustness and efficiency of data registration. Specifically, semantic information simplifies the modeling of scene elements, reducing the reliance on dense point clouds, which can be less efficient. Meanwhile, visual texture information complements LiDAR-Visual localization by providing additional contextual details. By incorporating semantic and texture details from paired images and point clouds, we significantly improve the quality of data association, thereby increasing the success rate of localization. This approach not only enhances the operational capabilities of ecological monitoring vehicles in complex environments but also contributes to improving the overall efficiency and effectiveness of ecological monitoring and environmental protection efforts.
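The semantic data association described above can be illustrated with a minimal sketch: LiDAR points are projected through a pinhole camera model into a per-pixel semantic mask, so each point inherits the class label under its projected pixel. This is not the paper's implementation; the intrinsic matrix `K`, the toy mask, and the class IDs are all assumptions for illustration.

```python
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) for a 640x480 camera.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def label_points(points_cam, semantic_mask):
    """Project 3D points (camera frame, N x 3) into the image and return
    the semantic class under each projected pixel (-1 if out of view)."""
    labels = np.full(len(points_cam), -1, dtype=int)
    h, w = semantic_mask.shape
    for i, p in enumerate(points_cam):
        if p[2] <= 0:                      # point behind the camera
            continue
        uvw = K @ p                        # homogeneous pixel coordinates
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= u < w and 0 <= v < h:
            labels[i] = semantic_mask[v, u]
    return labels

# Toy semantic mask: left half is class 0, right half is class 1.
mask = np.zeros((480, 640), dtype=int)
mask[:, 320:] = 1

pts = np.array([[-1.0, 0.0, 5.0],    # projects left of centre  -> class 0
                [ 1.0, 0.0, 5.0],    # projects right of centre -> class 1
                [ 0.0, 0.0, -2.0]])  # behind the camera        -> -1
print(label_points(pts, mask))       # -> [ 0  1 -1]
```

In a real pipeline the points would first be transformed from the LiDAR frame into the camera frame with the extrinsic calibration; that step is omitted here by assuming the points are already expressed in camera coordinates.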
To improve the road-environment perception of autonomous vehicles driving and operating across multiple lanes, a lane-level LiDAR-visual fusion method for autonomous driving, LLV-SLAM (lane-level LiDAR-visual fusion SLAM), is proposed, and a simultaneous localization and mapping (SLAM) algorithm suited to LiDAR-visual fusion is constructed. First, histogram equalization is introduced on top of visual feature-point extraction, feature-point depth is obtained from the LiDAR, and visual feature tracking is used to improve the robustness of the SLAM system. Second, visual keyframe information is used to correct motion distortion in the LiDAR point cloud, and LeGO-LOAM (lightweight and ground-optimized lidar odometry and mapping) is fused into visual ORB-SLAM2 (oriented FAST and rotated BRIEF SLAM2) to strengthen loop-closure detection and correction and reduce accumulated system error. Finally, the pose obtained from the visual images is transformed into the LiDAR frame and used as the initial pose for the LiDAR odometry, assisting LiDAR SLAM in three-dimensional scene reconstruction. Experimental results show that, compared with traditional SLAM methods, the fused LLV-SLAM method reduces average localization latency by 41.61%; average localization errors in the x, y, and z directions by 34.63%, 38.16%, and 24.09%, respectively; and average rotation errors in roll, pitch, and yaw by 40.8%, 37.52%, and 39.5%, respectively. LLV-SLAM effectively suppresses the scale drift of LeGO-LOAM, significantly improves real-time performance and robustness, and meets the perception needs of autonomous vehicles in multi-lane road environments.
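The histogram-equalization step that precedes feature extraction can be sketched in a few lines. This is an illustrative global equalization for 8-bit grayscale images, not the paper's code; in the described pipeline the equalized image would then feed ORB feature extraction, with feature depth taken from the LiDAR scan.

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities via the normalized cumulative histogram so the
    output levels spread across the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]              # CDF at the darkest present level
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]                        # apply the lookup table per pixel

# Low-contrast test image: intensities squeezed into [100, 120].
img = np.random.default_rng(0).integers(100, 121, (64, 64), dtype=np.uint8)
out = equalize_hist(img)
print(out.min(), out.max())                # -> 0 255 (contrast stretched)
```

Equalization helps because low-contrast frames (shadows, tunnels, glare) yield few stable feature points; stretching the intensity distribution before extraction recovers texture that the tracker can lock onto.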
Funding: supported by the project "GEF9874: Strengthening Coordinated Approaches to Reduce Invasive Alien Species (IAS) Threats to Globally Significant Agrobiodiversity and Agroecosystems in China" and by the Excellent Talent Training Funding Project in Dongcheng District, Beijing (project number 2024-dchrcpyzz-9).