Journal Articles
464 articles found
1. Sensor planning method for visual tracking in 3D camera networks (Cited by 1)
Authors: Anlong Ming, Xin Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2014, No. 6, pp. 1107-1116.
Most sensors or cameras discussed in the sensor network community are usually 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous. Meanwhile, observed objects of camera networks are usually simplified as 2D points in previous literature. However, in actual application scenes, not only are cameras heterogeneous, with different heights and action radii, but the observed objects also have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency enhancement of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The problem of sensor planning consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank the visibility, which ensures that the object of interest is visible in a camera's field of view; (iii) how to reconfigure the 3D viewing orientations of the cameras. This paper studies the geometric properties of 3D heterogeneous camera networks and presents an evaluation formulation to rank the visibility of observed objects. Then a sensor planning method is proposed to improve the efficiency of visual tracking. Finally, numerical results show that the proposed method improves the tracking performance of the system compared with conventional strategies.
Keywords: camera model; sensor planning; camera network; visual tracking
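As a toy illustration of the visibility-ranking idea in this entry (not the paper's actual formulation; the camera positions, action radii, and the scoring rule are all assumed), one can score each camera by how centrally an object falls within its ground-plane coverage circle:

```python
import math

def rank_visibility(cameras, obj_xy):
    """Score each camera's view of a ground-plane object: 1.0 when the
    object sits at the centre of the camera's circular coverage area,
    0.0 at the rim or outside it (a toy stand-in for the paper's
    visibility-ranking formulation)."""
    scores = []
    for (cx, cy), radius in cameras:
        d = math.hypot(obj_xy[0] - cx, obj_xy[1] - cy)
        scores.append(max(0.0, 1.0 - d / radius))
    return scores

# Two heterogeneous cameras with different action radii (assumed values):
cameras = [((0.0, 0.0), 10.0), ((20.0, 0.0), 5.0)]
print(rank_visibility(cameras, (4.0, 3.0)))  # [0.5, 0.0]
```

A planner could then reassign viewing orientations toward objects whose best score falls below a threshold.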
2. Color compensation for multi-view video coding based on diversity of cameras (Cited by 1)
Authors: Jun-yan HUO, Yi-lin CHANG, Hai-tao YANG, Shuai WAN. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2008, No. 12, pp. 1631-1637.
A novel color compensation method for multi-view video coding (MVC) is proposed, which efficiently exploits the inter-view dependencies between views in the presence of color mismatch caused by the diversity of cameras. A color compensation model is developed in the RGB channels and then extended to the YCbCr channels for practical use. A modified inter-view reference picture is constructed based on the color compensation model, which is more similar to the coding picture than the original inter-view reference picture. Moreover, the color compensation factors can be derived at both the encoder and the decoder, so no additional data need to be transmitted to the decoder. The experimental results show that the proposed method improves the coding efficiency of MVC and maintains good subjective quality.
Keywords: multi-view video coding (MVC); H.264/AVC; color compensation; diversity of cameras
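The compensation idea in this entry can be sketched as a per-channel gain-offset fit. This is a hedged stand-in for the paper's model (the linear form and the least-squares fit are assumptions), but it shows why no side data needs transmitting: both encoder and decoder can derive the factors from already-decoded reference pixels:

```python
def fit_gain_offset(src, dst):
    """Least-squares fit of dst ~ a*src + b for one color channel.
    Both ends of the codec can run this on reconstructed reference
    pixels, so the factors (a, b) never need to be signalled."""
    n = len(src)
    mean_s = sum(src) / n
    mean_d = sum(dst) / n
    cov = sum((s - mean_s) * (d - mean_d) for s, d in zip(src, dst)) / n
    var = sum((s - mean_s) ** 2 for s in src) / n
    a = cov / var if var else 1.0
    b = mean_d - a * mean_s
    return a, b

def compensate(src, a, b):
    """Build the modified inter-view reference by applying the model."""
    return [a * s + b for s in src]

# Neighbouring view is 10% brighter with a +5 offset (synthetic data):
ref = [10.0, 20.0, 30.0, 40.0]
cur = [16.0, 27.0, 38.0, 49.0]  # 1.1 * ref + 5
a, b = fit_gain_offset(ref, cur)
```

The compensated reference `compensate(ref, a, b)` then matches the coding picture far more closely than `ref` itself.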
3. Stereoscopic Camera-Sensor Model for the Development of Highly Automated Driving Functions within a Virtual Test Environment
Authors: René Degen, Martin de Fries, Alexander Nüßgen, Marcus Irmer, Mats Leijon, Margot Ruschitzka. Journal of Transportation Technologies, 2023, No. 1, pp. 87-114.
The need for efficient and reproducible development processes for sensor and perception systems is growing with their increased use in modern vehicles. Such processes can be achieved by using virtual test environments and virtual sensor models. In this context, the present paper documents the development of a sensor model for depth estimation of virtual three-dimensional scenarios. For this purpose, the geometric and algorithmic principles of stereoscopic camera systems are recreated in virtual form. The model is implemented as a subroutine in the Epic Games Unreal Engine, one of the most common game engines. Its architecture consists of several independent procedures that enable local depth estimation as well as reconstruction of a whole three-dimensional scene. A separate programme for calibrating the model is also presented. Beyond the basic principles, the architecture, and the implementation, this work documents the evaluation of the model created. It is shown that the model meets specifically defined requirements for real-time capability and evaluation accuracy, making it suitable for the virtual testing of common algorithms and highly automated driving functions.
Keywords: sensor model; virtual test environment; stereoscopic camera; Unreal Engine; OpenCV; ADAS/AD
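A stereoscopic sensor model of this kind ultimately rests on the classical disparity-to-depth relation Z = f·B/d. A minimal sketch (the focal length, baseline, and disparity values below are illustrative, not from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classical pinhole stereo relation: Z = f * B / d.
    focal_px: focal length in pixels, baseline_m: camera separation in
    metres, disparity_px: horizontal pixel shift between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# 800 px focal length, 0.2 m baseline, 16 px disparity: 800*0.2/16 = 10 m
z = depth_from_disparity(800.0, 0.2, 16.0)
```

Running this per matched pixel pair yields the dense depth map such a virtual sensor produces.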
4. A lightweight camera-sensor-based panoptic segmentation network
Authors: 郭蕊, 周高利, 洪祥龙. 《传感器与微系统》 (PKU Core), 2026, No. 1, pp. 117-120.
Mainstream high-performance panoptic segmentation methods are computationally expensive and struggle to meet real-time requirements. This paper proposes a novel single-stage lightweight panoptic segmentation network based on camera sensors. The network uses a shared feature-extraction backbone with three dedicated branches handling object detection, semantic segmentation, and instance-level attention-mask generation, respectively. The newly designed panoptic segmentation head generates instance-aware soft attention masks through spatial embedding learning, naturally fusing the semantic masks of object classes with instance soft masks at the pixel level and producing panoptic segmentation results directly, without additional post-processing. Experiments on the COCO and Cityscapes datasets show that the method reaches state-of-the-art levels in both accuracy and speed; in particular, it achieves a panoptic quality score of 59.7 on 1024×2048 Cityscapes images while sustaining real-time inference above 10 fps.
Keywords: panoptic segmentation; camera sensor; single-stage; lightweight
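The pixel-level fusion of semantic masks with instance soft masks described above can be illustrated as follows (a simplified sketch; the threshold rule and the data layout are assumptions, not the network's actual head):

```python
def fuse_panoptic(semantic, instance_scores, thresh=0.5):
    """Toy pixel-level fusion in the spirit of the abstract: each pixel
    takes the highest-scoring instance soft mask if its score passes the
    threshold, otherwise it keeps the semantic ('stuff') label.
    semantic: HxW class ids; instance_scores: list of (inst_id, HxW scores)."""
    h, w = len(semantic), len(semantic[0])
    out = [row[:] for row in semantic]
    for y in range(h):
        for x in range(w):
            best_id, best_score = None, thresh
            for inst_id, scores in instance_scores:
                if scores[y][x] > best_score:
                    best_id, best_score = inst_id, scores[y][x]
            if best_id is not None:
                out[y][x] = best_id
    return out

semantic = [[0, 0], [0, 0]]             # 'stuff' background everywhere
inst = [(7, [[0.9, 0.2], [0.1, 0.6]])]  # one instance soft mask
print(fuse_panoptic(semantic, inst))    # [[7, 0], [0, 7]]
```

Because the winner is decided per pixel directly from the soft scores, no separate merging post-process is needed, which is the property the abstract emphasizes.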
5. A color spike camera simulation based on the Spike Camera
Authors: 马雪山, 刘斌, 杨静, 田娟, 马雷. 《现代计算机》, 2022, No. 12, pp. 17-23.
Inspired by biological vision mechanisms, a new type of retina-fovea-like visual sensor, the Spike Camera, has been developed. Compared with conventional frame-based cameras and other event cameras, the Spike Camera is better suited to vision tasks in ultra-high-speed scenes, but the limited availability of the physical device has constrained the development and application of this class of sensor. This paper implements a simulation and modeling method based on the Spike Camera that converts consecutive video frames into spike data and then reconstructs image frames from the spikes. Using the simulator, ways of encoding color information as spikes on a Spike Camera sensor are explored. Experimental results show that, compared with other color encodings, single-channel RGB color spike encoding greatly compresses the spike data and reduces transmission bandwidth with very little loss of image information. The results provide a useful reference for the design of color spike cameras.
Keywords: spike camera; color spikes; sensor simulation
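The frame-to-spike conversion at the core of such a simulator can be sketched as a per-pixel integrate-and-fire accumulator (a common spike-camera abstraction; the threshold and subtract-on-fire reset used here are assumptions, not the paper's exact encoder):

```python
def integrate_and_fire(intensities, threshold=255):
    """Toy spike-camera pixel: accumulate per-frame intensity and emit a
    spike (1) whenever the accumulator crosses the threshold, subtracting
    the threshold on each firing so residual charge carries over."""
    acc, spikes = 0, []
    for v in intensities:
        acc += v
        if acc >= threshold:
            spikes.append(1)
            acc -= threshold
        else:
            spikes.append(0)
    return spikes

# A bright pixel spikes often; a dim one rarely:
bright = integrate_and_fire([200] * 5)  # [0, 1, 1, 1, 0]
dim = integrate_and_fire([60] * 5)      # [0, 0, 0, 0, 1]
```

Reconstruction then estimates intensity from the inter-spike intervals; the denser the spike train, the brighter the recovered pixel.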
6. Design of an intelligent monitoring system for safe belt-conveyor operation in a coal washing plant
Authors: 张强. 《煤》, 2026, No. 1, pp. 100-104.
The core areas of the belt-conveyor system in a coal washing plant are insufficiently monitored: the existing analog surveillance cameras suffer from low image resolution and high, inconvenient maintenance costs. This paper proposes installing high-performance mine-grade optical-fiber cameras at the head and tail of the belt to improve the accuracy and efficiency of on-site monitoring, and integrating temperature and vibration sensors on the main drive drum and reducer to monitor temperature fluctuations and vibration in real time during operation, thereby building an online belt-condition monitoring system that combines data acquisition, analysis, and early warning. The system not only displays the running state of the belt in real time, but also applies intelligent algorithms to analyze the collected data in depth, automatically identifying potential safety hazards or performance-degradation trends, enabling early fault warning and rapid response, and providing solid technical support for the stable and efficient operation of the coal washing plant.
Keywords: surveillance camera; image resolution; optical-fiber camera; vibration sensor; data acquisition; operating condition
7. Research on a binocular CMOS APS imaging system based on the Camera Link interface (Cited by 1)
Authors: 张明宇, 刘金国, 李余, 宋丹, 孔德柱. 《光电子技术》 (CAS), 2008, No. 4, pp. 270-273.
To meet technical requirements such as a large field of view, simultaneous use of multiple image sensors, and response consistency across a multi-sensor opto-electronic imaging system, a wide-field imaging system was designed. A first-in-first-out (FIFO) buffer is applied in the data transmission path to guarantee synchronized, real-time output of multiple image data streams. Building on low-voltage differential signaling (LVDS) together with the Camera Link interface protocol, a low-cost, high-speed, stable, and simple binocular CMOS active pixel sensor (CMOS APS) imaging system was designed, and its basic composition and working principle are discussed in depth. A field-programmable gate array implements the top-down module design; parts of the system timing were simulated and then debugged on the board. The results show that the design meets the system's wide-field requirements and lays a solid foundation for subsequent image acquisition work.
Keywords: Camera Link interface; binocular; CMOS APS image sensor; field-programmable gate array
8. Research on a New-style Visual Sensor for Measurement
Authors: YU Rong, TAN Yuegang (School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan 430070, China). 《武汉理工大学学报》 (CAS, CSCD, PKU Core), 2006, No. S3, pp. 1003-1006.
First, the constitution of a traditional visual sensor is presented. The linear camera model is introduced, and the transform matrix between the image coordinate system and the world coordinate system is established. The basic principle of camera calibration is expounded based on the linear camera model. On the basis of a detailed analysis of the camera model, a new-style visual sensor for measurement is advanced. It realizes real-time control of the camera-lens zoom by a step motor according to the size of the objects. Moreover, re-calibration can be avoided and the transform matrix can be acquired by calculation, which greatly simplifies the camera calibration process and saves time. Clearer images are gained, so the precision of the measurement system can be greatly improved. The basic structure of the visual sensor's zoom is introduced, including the constitution and movement rules of the fixed front part, zoom part, compensatory part, and fixed rear part, together with the method of realizing zoom control by step motor. Finally, the constitution of the new-style visual sensor is introduced, covering both hardware and software. The hardware system is composed of a manual zoom lens, CCD camera, image card, gearing, step motor, step motor driver, and computer. The software is described in terms of its component modules and the workflow of the measurement system, in the form of a structured block diagram.
Keywords: camera calibration; zoom; visual sensor
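The linear camera model referred to above maps a world point to pixel coordinates via x ~ K[R|t]X. A minimal sketch (the intrinsic values and the test point are assumed for illustration):

```python
def project(K, R, t, Xw):
    """Linear (pinhole) camera model: world point -> pixel coordinates
    via the transform matrix x ~ K [R | t] X, followed by perspective
    division. K: 3x3 intrinsics, R: 3x3 rotation, t: length-3 translation."""
    Xc = [sum(R[i][j] * Xw[j] for j in range(3)) + t[i] for i in range(3)]
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return x[0] / x[2], x[1] / x[2]

K = [[800, 0, 320], [0, 800, 240], [0, 0, 1]]  # f = 800 px, centre (320, 240)
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # identity pose for simplicity
t = [0, 0, 0]
u, v = project(K, R, t, [0.5, -0.25, 4.0])
# u = 800*0.5/4 + 320 = 420,  v = 800*(-0.25)/4 + 240 = 190
```

Calibration estimates K, R, and t from known correspondences; the paper's zoom trick amounts to updating the focal entries of K by calculation instead of re-calibrating.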
9. Modeling and Rectification of Rolling Shutter Effect in CMOS Aerial Cameras
Authors: Lei Wan, Ye Zhang, Ping Jia, Jiajia Xu. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2017, No. 4, pp. 71-77.
Due to the electronic rolling shutter, high-speed Complementary Metal-Oxide Semiconductor (CMOS) aerial cameras are generally subject to geometric distortions, which cannot be perfectly corrected by conventional vision-based algorithms. In this paper we propose a novel approach to address the problem of rolling shutter distortion in aerial imaging. A mathematical model is established by the coordinate transformation method. It can directly calculate the pixel distortion when an aerial camera is imaging at arbitrary attitude angles. All pixel distortions then form a distortion map over the whole CMOS array, and the map is exploited in the image rectification process incorporating reverse projection. The error analysis indicates that, within the margin of measuring errors, the final calculation error of our model is less than 1/2 pixel. The experimental results show that our approach yields good rectification performance on a series of images with different distortions. We demonstrate that our method outperforms other vision-based algorithms in terms of computational complexity, which makes it more suitable for aerial real-time imaging.
Keywords: aerial camera; CMOS sensor; rolling shutter effect; coordinate transformation; image rectification
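The row-wise exposure delay behind rolling-shutter distortion can be illustrated with a toy model (the constant-velocity assumption and the numbers are illustrative only; the paper's full coordinate-transformation model handles arbitrary attitude angles):

```python
def rolling_shutter_offset(row, row_readout_s, vx_px_per_s):
    """Toy rolling-shutter model: row r is exposed r * row_readout_s later
    than row 0, so a scene moving at vx pixels/s appears shifted by
    vx * r * row_readout_s pixels in that row (the classic skew)."""
    return vx_px_per_s * row * row_readout_s

def rectify_column(x_measured, row, row_readout_s, vx_px_per_s):
    """Undo the per-row shift to recover the global-shutter position."""
    return x_measured - rolling_shutter_offset(row, row_readout_s, vx_px_per_s)

# 20 us per row, scene sweeping at 5000 px/s: row 480 is shifted ~48 px
shift = rolling_shutter_offset(480, 20e-6, 5000.0)
```

Evaluating the offset for every pixel yields exactly the kind of distortion map the abstract describes, which rectification then inverts.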
10. Energy Efficient Content Based Image Retrieval in Sensor Networks
Authors: Qurban A. Memon, Hend Alqamzi. International Journal of Communications, Network and System Sciences, 2012, No. 7, pp. 405-415.
The presence of increased memory and computational power in imaging sensor networks attracts researchers to exploit image processing algorithms on distributed memory and computational power. In this paper, a typical perimeter is investigated, with a number of sensors placed to form an image sensor network for the purpose of content-based distributed image search. An image search algorithm enables distributed content-based image search within each sensor node. An energy model is presented to calculate energy efficiency for various cases of image search and transmission. The simulations are carried out for both continuous monitoring and event-driven activity on the perimeter, with image processing distributed across the sensor nodes, and the results show that energy saving is significant if search algorithms are embedded in image sensor nodes and image processing is distributed across sensor nodes. The tradeoff between sensor lifetime, distributed image search, and network deployment cost is also investigated.
Keywords: image sensor networks; image identification in sensor networks; camera sensor networks; distributed image search
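The search-locally-versus-transmit-everything trade-off that such an energy model captures can be sketched as follows (all per-bit energy constants and the match ratio are assumed for illustration, not taken from the paper):

```python
def transmit_energy(bits, e_bit_tx):
    """Radio energy to ship raw image data: bits * per-bit transmit cost."""
    return bits * e_bit_tx

def local_search_energy(bits, e_bit_cpu, match_ratio, e_bit_tx):
    """Energy when the node searches locally and transmits only the
    matching fraction of images (toy version of the trade-off)."""
    return bits * e_bit_cpu + match_ratio * bits * e_bit_tx

# 1 Mbit image, processing 100x cheaper per bit than radio, 5% of
# images match the query:
raw = transmit_energy(1e6, 1e-6)                    # ~1.0 J
smart = local_search_energy(1e6, 1e-8, 0.05, 1e-6)  # ~0.06 J
```

Because radio transmission typically dominates computation per bit, filtering in-node wins whenever the match ratio is small, which is the effect the simulations quantify.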
11. Trifocal tensor based side information generation for multi-view distributed video coding
Authors: Lin Xin, Liu Haitao, Wei Jianming. High Technology Letters (EI, CAS), 2010, No. 3, pp. 268-273.
Distributed video coding (DVC) is a new video coding approach based on the Wyner-Ziv theorem. This uplink-friendly DVC, which offers low-complexity, low-power, and low-cost video encoding, has attracted increasing research interest. In this paper, a new method based on multiple-view geometry is presented for spatial side information generation in an uncalibrated video sensor network. The trifocal tensor encapsulates all the geometric relations among three views that are independent of scene structure; it can be computed from image correspondences alone, without requiring knowledge of the motion or calibration. Simulation results show that trifocal-tensor-based spatial side information improves the rate-distortion performance over motion-compensation-based interpolation side information by a maximum gap of around 2 dB. Fusion then merges the temporal and spatial side information to improve the quality of the final side information, gaining about a further 0.4 dB in rate-distortion performance.
Keywords: multi-view distributed video coding (DVC); camera sensor networks; trifocal tensor; side information
12. Co-axial depth sensor with an extended depth range for AR/VR applications (Cited by 1)
Authors: Mohan XU, Hong HUA. Virtual Reality & Intelligent Hardware, 2020, No. 1, pp. 1-11.
Background: Depth sensors are an essential element in virtual and augmented reality devices for digitalizing the user's environment in real time. The current popular technologies include stereo, structured light, and Time-of-Flight (ToF). The stereo and structured-light methods require a baseline separation between multiple sensors for depth sensing, and both suffer from a limited measurement range. ToF depth sensors have the largest depth range but the lowest depth-map resolution. To overcome these problems, we propose a co-axial depth-map sensor which is potentially more compact and cost-effective than conventional structured-light depth cameras. Meanwhile, it can extend the depth range while maintaining a high depth-map resolution. It also provides a high-resolution 2D image along with the 3D depth map. Methods: The depth sensor is constructed with a projection path and an imaging path, combined by a beamsplitter for a co-axial design. In the projection path, a cylindrical lens adds extra optical power in one direction, creating an astigmatic pattern. For depth measurement, the astigmatic pattern is projected onto the test scene, and the depth information is calculated from the contrast change of the reflected pattern image in two orthogonal directions. To extend the depth measurement range, an electronically focus-tunable lens at the system stop is tuned to implement an extended depth range without compromising depth resolution. Results: In the depth measurement simulation, we project a resolution target onto a white screen moving along the optical axis and tune the focus-tunable-lens power for three depth measurement sub-ranges: near, middle, and far. In each sub-range, as the test screen moves away from the depth sensor, the horizontal contrast keeps increasing while the vertical contrast keeps decreasing in the reflected image. Therefore, the depth information can be obtained by computing the contrast ratio between features in orthogonal directions. Conclusions: The proposed depth-map sensor implements depth measurement over an extended depth range with a co-axial design.
Keywords: depth map sensor; 3D camera; controlled aberration
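The contrast-ratio depth cue described in the Results can be illustrated with Michelson contrast computed on two orthogonal intensity profiles (a toy sketch; mapping the ratio to metric depth would require the per-sub-range calibration the paper describes):

```python
def michelson_contrast(values):
    """Michelson contrast of an intensity profile: (max - min) / (max + min)."""
    hi, lo = max(values), min(values)
    return (hi - lo) / (hi + lo) if hi + lo else 0.0

def contrast_ratio_cue(horizontal_profile, vertical_profile):
    """The astigmatic projection defocuses the two orientations
    differently, so the H/V contrast ratio varies monotonically with
    depth and can index a calibrated depth lookup (toy illustration)."""
    return michelson_contrast(horizontal_profile) / michelson_contrast(vertical_profile)

# Sharp horizontal features, blurred vertical features (synthetic samples):
cue = contrast_ratio_cue([10, 90, 10, 90], [30, 70, 30, 70])
```

Here the horizontal contrast is 0.8 and the vertical 0.4, giving a ratio of 2.0; as the surface moves, this ratio shifts and indexes the depth lookup.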
13. Automatic Miscalibration Detection and Correction of LiDAR and Camera Using Motion Cues
Authors: Pai Peng, Dawei Pi, Guodong Yin, Yan Wang, Liwei Xu, Jiwei Feng. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 318-329.
This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration of LiDAR and camera for an autonomous vehicle after sensor drift. First, a monitoring algorithm that can continuously detect miscalibration in each frame is designed, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3-D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is compared extensively with two representative approaches in online experiments with varying levels of random drift; the method is then extended to an offline calibration experiment and demonstrated by comparison with two existing benchmark methods.
Keywords: autonomous vehicle; LiDAR and camera; miscalibration detection and correction; sensor drift
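The motion-cue monitoring idea can be reduced to a toy check: two rigidly mounted sensors must observe the same rotation magnitude each frame, so a persistent mismatch signals drift (a simplified sketch; the tolerance value and the scalar angle comparison are assumptions, not the paper's algorithm):

```python
import math

def rotation_angle(R):
    """Angle of a 3x3 rotation matrix from its trace:
    theta = arccos((trace(R) - 1) / 2)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    return math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0)))

def miscalibrated(R_cam, R_lidar, tol_rad=0.01):
    """Flag drift when the per-frame rotation magnitudes observed by the
    two rigidly mounted sensors disagree beyond a tolerance."""
    return abs(rotation_angle(R_cam) - rotation_angle(R_lidar)) > tol_rad

def rot_z(a):
    """Rotation by angle a about the z-axis (helper for the example)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

ok = miscalibrated(rot_z(0.10), rot_z(0.10))   # False: angles agree
bad = miscalibrated(rot_z(0.10), rot_z(0.15))  # True: 0.05 rad mismatch
```

A production monitor would compare full rotation axes over a window of frames, but the trigger logic is the same.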
14. In-Motes EYE: A Real Time Application for Automobiles in Wireless Sensor Networks
Authors: Dimitrios Georgoulas, Keith Blow. Wireless Sensor Network, 2011, No. 5, pp. 158-166.
Wireless sensor networks have been identified as one of the key technologies for the 21st century. To overcome their limitations, such as fault tolerance and conservation of energy, we propose a middleware solution, In-Motes. In-Motes is a fault-tolerant platform for deploying and monitoring applications in real time; it offers the end user a number of possibilities while leaving them free to experiment with various parameters, so that deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through the In-Motes EYE application, an agent-based real-time In-Motes application developed for sensing acceleration variations in an environment, aiming to test its merits under real-time conditions. The application was tested in a prototype road-like area for a period of four months.
Keywords: wireless sensor networks; middleware; mobile agents; In-Motes; acceleration measurements; traffic cameras
15. Evaluation of Skills in Swing Technique in Classical Japanese Swordsmanship in Iaido Using Sensors
Authors: Satoru Okamoto, Satomi Adachi. Journal of Mechanics Engineering and Automation, 2016, No. 4, pp. 190-196.
In this study, we analyzed the swing motions of a more experienced practitioner and a new practitioner of iaido using a tri-axial acceleration sensor and a gyro sensor. Iaido is a modern Japanese martial art/sport. The acceleration and gyro sensor measurements enabled detailed motion information at the installation points to be displayed in a short time, making it possible to easily extract the objective problems. Although the acceleration and angular-velocity measurements could not confirm the detailed motion of the entire body as obtained in 2D motion analysis with a high-speed camera, it was confirmed that the acceleration and gyro sensor is an evaluation means that can be installed easily and can provide exercise information in a short time as an objective index.
Keywords: skill science; classical Japanese swordsmanship in iaido; acceleration sensor; high-speed video camera; gyro sensor
16. Towards Autonomous Vehicles with Advanced Sensor Solutions
Authors: Matti Kutila, Pasi Pyykonen, Aarno Lybeck, Pirita Niemi, Erik Nordin. World Journal of Engineering and Technology, 2015, No. 3, pp. 6-17.
Professional truck drivers are an essential part of transportation in keeping the global economy alive and commercial products moving. In order to increase productivity and improve safety, an increasing amount of automation is implemented in modern trucks. The transition to automated heavy goods vehicles is intended to make trucks accident-free and, at the same time, more comfortable to drive. This motivates the automotive industry to bring more embedded ICT into their vehicles in the future. The road towards autonomous vehicles requires robust environmental perception and driver monitoring technologies, which is the main motivation behind the DESERVE project: a study of sensor technology trials that minimize blind spots around the truck while keeping the driver's vigilance at a sufficiently high level. The outcomes are two innovative truck demonstrations: one R&D study for bringing equipment to production in the future, and one implementation in a driver training vehicle. The experiments include driver monitoring technology that works at a 60% - 80% accuracy level and environment perception (stereo and thermal cameras) whose performance rates are 70% - 100%. The results are not yet sufficient for autonomous vehicles, but they are a step forward, since they hold up even when moved from the lab to real automotive implementations.
Keywords: autonomous driving; camera; driver monitoring; environment perception; automated vehicle; sensor; laser scanner; truck; radar; data fusion
17. 3D reconstruction with multi-view camera sensors and multimodal feature fusion (Cited by 1)
Authors: 尉艳丽, 张素智. 《传感器与微系统》 (PKU Core), 2025, No. 11, pp. 101-105.
Current 3D reconstruction models capture detail insufficiently and adapt poorly to multi-view changes. This paper proposes a 3D reconstruction model that fuses multi-view camera-sensor data with multimodal features, using multimodal feature fusion and a global-local attention mechanism to effectively improve reconstruction quality in complex scenes. In addition, multimodal features such as color, depth, and semantics are extracted from the multi-view 2D images, and the importance of each feature is adjusted dynamically to capture key regions accurately. Experimental results show that the model outperforms mainstream models for 3D reconstruction on the Local Light Field Fusion (LLFF) and Technical University of Denmark (DTU) datasets, reaching peak signal-to-noise ratios (PSNR) of 20.01, 23.56, and 24.58 with 3-view, 6-view, and 9-view inputs, respectively. The model remains robust under complex scenes and multi-view changes, validating its effectiveness and reliability.
Keywords: multi-view camera sensor; multimodal feature fusion; 3D reconstruction
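The PSNR figures quoted above are computed with the standard formula 10·log10(peak²/MSE); a minimal sketch (the pixel values below are synthetic):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak * peak / mse)

# All pixels off by 16 grey levels -> MSE 256 -> about 24.05 dB
value = psnr([100, 150, 200, 250], [116, 166, 216, 234])
```

Each increase of roughly 3 dB halves the MSE, which is why the 3-view to 9-view gain from 20.01 to 24.58 dB represents a substantial quality improvement.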
18. A survey of extrinsic calibration methods for LiDAR and cameras
Authors: 黄跃成, 曹成. 《辽宁师专学报(自然科学版)》, 2025, No. 3, pp. 26-34.
Extrinsic calibration between LiDAR and camera is a key step in multimodal perception fusion, directly affecting the accuracy and stability of autonomous driving, robotic, and similar systems in environment perception, 3D reconstruction, and object localization. This survey systematically reviews and analyzes existing LiDAR-camera extrinsic calibration methods, classifying them into traditional geometric calibration methods, target-free methods based on feature matching, and deep learning methods. By comparing the technical principles, experimental procedures, accuracy, and relative strengths of each class in different application scenarios, it highlights the challenges current methods face in terms of automation, environment dependence, and robustness. It also looks ahead to future directions for joint calibration, including self-supervised learning, online calibration in dynamic scenes, and joint multi-sensor optimization, providing a theoretical reference for future research.
Keywords: LiDAR; camera calibration; extrinsic parameter estimation; geometric methods; deep learning; multi-sensor fusion
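Most methods in such a survey ultimately score a candidate extrinsic calibration by projecting LiDAR points into the image and measuring how far they land from the corresponding image features; a minimal sketch (the identity extrinsics and the intrinsic values are assumed for illustration):

```python
def project_lidar_point(p_lidar, R, t, fx, fy, cx, cy):
    """Apply the extrinsics (R, t) to move a LiDAR point into the camera
    frame, then the pinhole intrinsics to get pixel coordinates."""
    pc = [sum(R[i][j] * p_lidar[j] for j in range(3)) + t[i] for i in range(3)]
    return fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy

def reprojection_error(uv_proj, uv_obs):
    """Euclidean pixel distance used to score a candidate calibration."""
    return ((uv_proj[0] - uv_obs[0]) ** 2 + (uv_proj[1] - uv_obs[1]) ** 2) ** 0.5

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # assumed identity extrinsic rotation
uv = project_lidar_point([1.0, 0.5, 5.0], I3, [0, 0, 0], 700, 700, 640, 360)
err = reprojection_error(uv, (780.0, 430.0))  # 0 px: calibration fits
```

Geometric methods minimize this error over known targets, target-free methods over matched natural features, and learned methods regress the correction directly, but the underlying residual is the same.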
19. Research on an object detection system for unmanned vehicles based on multi-sensor fusion (Cited by 1)
Authors: 陈晓锋, 李郁峰, 王传松, 郭荣, 樊宏丽, 朱堉伦. 《激光杂志》 (PKU Core), 2025, No. 5, pp. 94-100.
A single sensor is heavily affected by environmental factors, which easily causes missed and false detections, and the differing data formats of different sensors make fusion complex. To address this, a decision-level fusion method based on LiDAR and camera is proposed. The LiDAR and camera are first aligned in time and space; the PointPillars algorithm and the YOLOv5 algorithm are then applied, with transfer training, to the preprocessed point-cloud and image data, respectively, to obtain detection boxes; finally, the detections are fused using intersection-over-union matching, D-S evidence theory, and weighted box fusion. Real-vehicle tests show that, in decision-level LiDAR-camera fusion scenarios, the proposed method effectively combines the strengths of both sensors, achieves a more comprehensive perception of the environment, improves object detection accuracy, and reduces the probability of missed and false detections.
Keywords: multi-sensor fusion; LiDAR; monocular camera; D-S evidence theory; weighted box fusion
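The intersection-over-union matching stage of such decision-level fusion can be sketched as a greedy association between projected LiDAR boxes and camera boxes (a simplified stand-in; the D-S evidence and weighted-box steps that follow in the paper are omitted here, and the threshold is assumed):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(lidar_boxes, camera_boxes, thresh=0.5):
    """Greedy decision-level association: pair each projected LiDAR box
    with the best unused camera box whose IoU clears the threshold."""
    pairs, used = [], set()
    for i, lb in enumerate(lidar_boxes):
        best_j, best_iou = None, thresh
        for j, cb in enumerate(camera_boxes):
            if j not in used and iou(lb, cb) >= best_iou:
                best_j, best_iou = j, iou(lb, cb)
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs

pairs = match_detections([(0, 0, 10, 10)], [(1, 1, 11, 11), (50, 50, 60, 60)])
# IoU of the overlapping pair is 81/119 ~ 0.68 -> one fused detection
```

Matched pairs then carry two confidence sources, which is what D-S evidence theory combines before the weighted box fusion.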
20. Two-stage 3D object detection with LiDAR and camera sensor fusion (Cited by 1)
Authors: 洪宝惜, 范智淳, 柳萍萍. 《传感器与微系统》 (PKU Core), 2025, No. 11, pp. 110-114.
To meet the high-accuracy demands that autonomous driving and robotics place on perception systems, this paper proposes FusionDetect, a two-stage 3D object detection method that fuses LiDAR and camera sensors, addressing the difficulty existing LiDAR detectors have in recognizing distant objects due to point-cloud sparsity. The method fuses the geometric information of the LiDAR with the texture information of the camera, uses region-of-interest (RoI) pooling to process the multimodal data uniformly, and designs intra-modal self-attention and cross-attention mechanisms for feature enhancement and information fusion. Experiments on the KITTI and Waymo benchmarks show that FusionDetect substantially improves the performance of mainstream detectors, surpassing the baseline model on the Waymo dataset by 6.45% in mean average precision (mAP) and significantly outperforming existing two-stage methods. The results confirm the key role of multimodal sensor fusion in improving 3D object detection accuracy.
Keywords: 3D object detection; sensor fusion; LiDAR; camera; multimodal data; attention mechanism