Journal Articles
469 articles found
1. Sensor planning method for visual tracking in 3D camera networks (cited: 1)
Authors: Anlong Ming, Xin Chen. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2014, Issue 6, pp. 1107-1116 (10 pages).
Most sensors or cameras discussed in the sensor network community are usually 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous. Meanwhile, the observed objects of camera networks are usually simplified as 2D points in previous literature. In actual application scenes, however, not only are cameras heterogeneous, with different heights and action radii, but the observed objects also have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency enhancement of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The problem of sensor planning consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank the visibility, which ensures that the object of interest is visible in a camera's field of view; (iii) how to reconfigure the 3D viewing orientations of the cameras. This paper studies the geometric properties of 3D heterogeneous camera networks and addresses an evaluation formulation to rank the visibility of observed objects. Then a sensor planning method is proposed to improve the efficiency of visual tracking. Finally, the numerical results show that the proposed method improves the tracking performance of the system compared to conventional strategies.
Keywords: camera model, sensor planning, camera network, visual tracking
2. Color compensation for multi-view video coding based on diversity of cameras (cited: 1)
Authors: Jun-yan HUO, Yi-lin CHANG, Hai-tao YANG, Shuai WAN. Journal of Zhejiang University-Science A (Applied Physics & Engineering) (SCIE, EI, CAS, CSCD), 2008, Issue 12, pp. 1631-1637 (7 pages).
A novel color compensation method for multi-view video coding (MVC) is proposed, which efficiently exploits the inter-view dependencies between views in the presence of color mismatch caused by the diversity of cameras. A color compensation model is developed in RGB channels and then extended to YCbCr channels for practical use. A modified inter-view reference picture is constructed based on the color compensation model, which is more similar to the coding picture than the original inter-view reference picture. Moreover, the color compensation factors can be derived in both encoder and decoder, so no additional data need to be transmitted to the decoder. The experimental results show that the proposed method improves the coding efficiency of MVC and maintains good subjective quality.
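A common way to realize this kind of model, sketched here without claiming to reproduce the paper's exact RGB/YCbCr formulation, is a per-channel linear gain/offset fitted by least squares between the reference view and the coding view:

```python
import numpy as np

def fit_gain_offset(ref, target):
    """Fit target ~ a * ref + b for one color channel by least squares."""
    a, b = np.polyfit(ref.ravel(), target.ravel(), 1)
    return a, b

def compensate(ref, a, b):
    """Apply the linear color compensation to a reference picture."""
    return np.clip(a * ref + b, 0, 255)

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, size=(64, 64))
target = 0.9 * ref + 12.0          # simulated inter-camera color mismatch
a, b = fit_gain_offset(ref, target)
corrected = compensate(ref, a, b)  # now matches the coding view closely
print(round(a, 3), round(b, 3))    # recovers roughly 0.9 and 12
```

Because both encoder and decoder can fit such factors from already-decoded reference pictures, no side data needs to be transmitted, matching the property the abstract describes.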
Keywords: multi-view video coding (MVC), H.264/AVC, color compensation, diversity of cameras
3. Stereoscopic Camera-Sensor Model for the Development of Highly Automated Driving Functions within a Virtual Test Environment
Authors: René Degen, Martin de Fries, Alexander Nüßgen, Marcus Irmer, Mats Leijon, Margot Ruschitzka. Journal of Transportation Technologies, 2023, Issue 1, pp. 87-114 (28 pages).
The need for efficient and reproducible development processes for sensor and perception systems is growing with their increased use in modern vehicles. Such processes can be achieved by using virtual test environments and virtual sensor models. In this context, the present paper documents the development of a sensor model for depth estimation of virtual three-dimensional scenarios. For this purpose, the geometric and algorithmic principles of stereoscopic camera systems are recreated in virtual form. The model is implemented as a subroutine in the Epic Games Unreal Engine, one of the most common game engines. Its architecture consists of several independent procedures that enable local depth estimation as well as reconstruction of a whole three-dimensional scene. In addition, a separate programme for calibrating the model is presented. Beyond the basic principles, the architecture, and the implementation, this work also documents the evaluation of the model. It is shown that the model meets specifically defined requirements for real-time capability and evaluation accuracy. Thus, it is suitable for the virtual testing of common algorithms and highly automated driving functions.
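The core geometric principle behind such a stereoscopic model is triangulation from disparity: for a rectified image pair with focal length f (in pixels) and baseline B, the depth of a matched feature with disparity d is Z = f·B/d. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a rectified stereo match: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# f = 800 px, baseline 0.12 m, feature shifted 16 px between the two views:
z = depth_from_disparity(16.0, 800.0, 0.12)
print(z)  # 6.0 metres
```

The inverse relationship also shows why stereo range is limited, as the abstract notes for real systems: beyond a certain depth the disparity shrinks below one pixel and can no longer be resolved.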
Keywords: sensor model, virtual test environment, stereoscopic camera, Unreal Engine, OpenCV, ADAS/AD
4. Multi-Sensor Fusion Mapping for Complex Protected-Agriculture Environments
Authors: ZHANG Sanqiang, QIAN Gang, GUO Qize, LIU Wei, WU Jie, ZHOU Hongyu, HU Xinyu. 《农机化研究》 (Journal of Agricultural Mechanization Research) (Peking University Core), 2026, Issue 6, pp. 179-187 (9 pages).
To address the problems that current 2D LiDAR SLAM systems are ill-suited to mapping complex protected-agriculture environments and that 3D LiDAR is costly, a mapping method fusing a 2D LiDAR, an RGB-D camera, and wheel odometry is proposed on an Ackermann agricultural robot platform. A multi-sensor fusion mapping model combining the 2D LiDAR, RGB-D camera, and wheel odometry is constructed, and the visual-LiDAR-odometry fused SLAM mapping process is studied and analyzed. The proposed method was validated by experiments in a simulated complex protected-agriculture environment. The results show that the environment map built by this method is a fused 2D-plane/3D-space map with a maximum error of 2.2%, versus a maximum of 2.9% for 2D-LiDAR-only mapping and 4.4% for purely visual RGB-D mapping, so the fused map is more accurate than either sensor alone. In the fused map, the maximum errors of obstacle length, width, and height are 16.3%, 20.9%, and 12.1%, respectively, and the maximum error of the distance from an obstacle's centroid to the mapping start point is 4.5%, all within reasonable bounds. The method meets the mapping requirements for autonomous navigation in complex protected-agriculture environments, effectively mitigates the limitations of 2D LiDAR mapping on agricultural robots in such environments, avoids the high cost of 3D LiDAR that hinders the adoption of agricultural robots, and provides a theoretical basis and data support for research on agricultural robot mapping and navigation.
Keywords: protected agriculture, multi-sensor fusion, SLAM, 2D LiDAR, RGB-D depth camera, wheel odometry
5. Robot Localization Fusing Laser-Sensor Environment Perception and Vision
Authors: FAN Xinming, ZHU Jinxin, SHAO Jun. 《激光杂志》 (Laser Journal) (Peking University Core), 2026, Issue 2, pp. 216-221 (6 pages).
To enable a robot to perceive all kinds of complex environments fully and to localize itself stably and precisely while keeping its moving pose stable, a robot localization method fusing laser-sensor environment perception and vision is proposed. The method uses a binocular camera and a laser sensor for spatial calibration of the robot and perception of the moving environment, respectively, unifies the two results into the same coordinate system, and obtains the robot's global moving pose. Based on this pose, an encoder and Kalman filtering are used to fully fuse the environmental detail information perceived by the robot, thereby accurately determining its position in space. Experiments show that the method can accurately adjust joint angles, with translation errors all below 0.14% and rotation-angle errors all below 0.05°, and can precisely fuse the robot's laser imaging information with its visual information to accurately localize the robot's current position.
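The Kalman-style fusion step can be sketched as a variance-weighted combination of two independent position estimates, e.g. one from the laser and one from vision. The symbols and numbers below are illustrative, not the paper's implementation:

```python
def fuse(x1, var1, x2, var2):
    """Kalman-style fusion of two independent scalar estimates.

    The gain k weights the second measurement by relative confidence;
    the fused variance is always smaller than either input variance.
    """
    k = var1 / (var1 + var2)
    x = x1 + k * (x2 - x1)
    var = (1 - k) * var1
    return x, var

# Laser estimates 2.00 m (variance 0.04), vision 2.10 m (variance 0.01):
x, var = fuse(2.00, 0.04, 2.10, 0.01)
print(round(x, 3), round(var, 4))  # 2.08 0.008
```

The fused estimate lands closer to the lower-variance (vision) measurement, and the fused variance (0.008) is below both inputs, which is the reason fusing laser and visual information improves localization stability.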
Keywords: laser sensor, binocular camera, parameter matrix, encoder, Kalman filtering, visual fusion
6. A Lightweight Panoptic Segmentation Network Based on Camera Sensors
Authors: GUO Rui, ZHOU Gaoli, HONG Xianglong. 《传感器与微系统》 (Transducer and Microsystem Technologies) (Peking University Core), 2026, Issue 1, pp. 117-120 (4 pages).
Mainstream high-performance panoptic segmentation methods are computationally expensive and struggle to meet real-time application requirements. To address this, an innovative single-stage lightweight panoptic segmentation network based on camera sensors is proposed. The network uses a shared feature-extraction backbone equipped with three dedicated branches that handle object detection, semantic segmentation, and instance-level attention-mask generation. The newly designed panoptic segmentation head generates instance-aware soft attention masks via spatial embedding learning, achieving a natural pixel-level fusion of class-level semantic masks and instance soft masks, and outputs panoptic segmentation results directly without extra post-processing. Experiments on the COCO and Cityscapes datasets show that the method reaches state-of-the-art levels in both accuracy and speed; in particular, it achieves a panoptic quality score of 59.7 on 1024×2048 Cityscapes images while sustaining a real-time inference speed above 10 fps.
Keywords: panoptic segmentation, camera sensor, single-stage, lightweight
7. High-Accuracy LiDAR-Infrared-Camera Extrinsic Calibration from Weak Edge Features
Authors: WANG Yan, ZUO Yong, TANG Yi, HUANG Chaowei, LU Yue, HONG Xiaobin, WU Jian. 《红外与激光工程》 (Infrared and Laser Engineering) (Peking University Core), 2026, Issue 1, pp. 155-164 (10 pages).
LiDAR-infrared-camera extrinsic calibration is a key step in fusing multi-source sensor information. To address the problems that traditional methods demand specific calibration boards and manual intervention, and that infrared images have low resolution and blurred edges, this paper proposes a high-accuracy LiDAR-infrared-camera extrinsic calibration method based on weak edge features. First, a cross-modal adaptive corner-detection framework is designed that unifies infrared-image and point-cloud feature extraction into a multi-level iterative optimization of coarse localization, local enhancement, and adaptive refinement, effectively resolving mis-detections caused by inconsistent feature distributions across modalities and by weak edge characteristics. Experiments show that the framework achieves feature-point detection repeatability of 83% on infrared images and 89% on 3D point clouds. Second, by combining EPnP modeling with Ceres nonlinear optimization, the method achieves fully automatic, high-accuracy extrinsic estimation without a calibration board, with a mean reprojection error of 1.74 pixels, 54.45% lower than the calibration-board method and 19.44% lower than a method incorporating the SAM foundation model. Finally, multi-scenario experiments verify that the method maintains stable performance under different illumination and ranging conditions, providing reliable support for all-day LiDAR-infrared multi-source fusion perception.
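The reprojection-error metric quoted above can be sketched generically: project each LiDAR 3D point through the estimated extrinsics and camera intrinsics, then average the pixel distance to its matched image feature. This is a standard formulation, not the paper's EPnP/Ceres pipeline, and all matrix values are illustrative:

```python
import numpy as np

def mean_reprojection_error(points_3d, points_2d, R, t, K):
    """Mean pixel distance between projected 3D points and their 2D matches."""
    cam = (R @ points_3d.T).T + t          # into the camera frame
    proj = (K @ cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]        # perspective divide
    return float(np.mean(np.linalg.norm(uv - points_2d, axis=1)))

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
pts3d = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0]])
pts2d = np.array([[320.0, 240.0], [445.0, 240.0]])  # exact projections
print(mean_reprojection_error(pts3d, pts2d, R, t, K))  # 0.0 for a perfect fit
```

A nonlinear solver such as Ceres minimizes exactly this quantity over R and t; the paper's reported 1.74-pixel figure is the residual of that minimization over real correspondences.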
Keywords: extrinsic calibration, infrared camera, laser point cloud, multi-sensor fusion
8. A Color Spike-Camera Simulation Based on the Spike Camera
Authors: MA Xueshan, LIU Bin, YANG Jing, TIAN Juan, MA Lei. 《现代计算机》 (Modern Computer), 2022, Issue 12, pp. 17-23 (7 pages).
Inspired by biological vision mechanisms, a new fovea-like visual sensor, the Spike Camera, has been developed. Compared with conventional frame-based cameras and other event cameras, the Spike Camera is better suited to vision tasks in ultra-high-speed scenes. At present the physical device is not widely available, which constrains the development and application of this class of visual sensor. In this paper, we implement a simulation modeling method based on the Spike Camera that converts continuous video frames into spike data and then reconstructs image frames from the spike data. Using the simulator, we explore ways of spike-encoding color information on a Spike Camera sensor. Experimental results show that, compared with other color-encoding schemes, single-channel RGB color spike encoding greatly compresses the spike data and reduces transmission bandwidth while losing very little image information. The results provide an important reference for designing color spike cameras.
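Spike cameras are commonly modeled with an integrate-and-fire scheme: each pixel accumulates incoming intensity and emits a spike whenever a threshold is crossed, so brighter pixels spike more often. A minimal sketch of this frame-to-spike conversion; the threshold and intensity values are illustrative, not the paper's simulator:

```python
def encode_spikes(intensities, threshold=255):
    """Integrate-and-fire: accumulate per-frame intensity, spike on overflow."""
    acc, spikes = 0.0, []
    for v in intensities:
        acc += v
        if acc >= threshold:
            spikes.append(1)
            acc -= threshold       # keep the residual charge
        else:
            spikes.append(0)
    return spikes

# A bright pixel (200 per frame) spikes far more often than a dim one (40):
print(encode_spikes([200] * 5))  # [0, 1, 1, 1, 0]
print(encode_spikes([40] * 5))   # [0, 0, 0, 0, 0]
```

Reconstruction inverts this: the intensity of a pixel is estimated from its spike rate (or inter-spike interval), which is why longer spike streams reconstruct smoother images.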
Keywords: spike camera, color spikes, sensor simulation
9. Design of an Intelligent Monitoring System for Safe Belt-Conveyor Operation in a Coal Preparation Plant
Authors: ZHANG Qiang. 《煤》 (Coal), 2026, Issue 1, pp. 100-104 (5 pages).
To remedy the insufficient monitoring of the core areas of a coal preparation plant's belt-conveyor system, where the existing analog surveillance cameras suffer from low image resolution and high, inconvenient maintenance costs, this paper proposes installing high-performance mine-grade fiber-optic cameras at the head and tail of the belt to improve the accuracy and efficiency of on-site monitoring, and integrating temperature and vibration sensors on the belt's main drive drum and reducer to monitor temperature fluctuations and vibration in real time during operation, thereby building an online belt-condition monitoring system that integrates data acquisition, analysis, and early warning. The system not only displays the belt's operating status in real time but also performs deep analysis of the collected data with intelligent algorithms, automatically identifying potential safety hazards and performance-degradation trends, enabling early fault warning and rapid response, and providing solid technical support for the stable and efficient operation of the coal preparation plant.
Keywords: surveillance camera, image resolution, fiber-optic camera, vibration sensor, data acquisition, operating status
10. Research on a Binocular CMOS APS Imaging System Based on the Camera Link Interface (cited: 1)
Authors: ZHANG Mingyu, LIU Jinguo, LI Yu, SONG Dan, KONG Dezhu. 《光电子技术》 (Optoelectronic Technology) (CAS), 2008, Issue 4, pp. 270-273 (4 pages).
To meet technical requirements such as a wide field of view, simultaneous use of multiple image sensors, and response consistency across a multi-sensor optoelectronic imaging system, a wide-field imaging system was designed. A first-in-first-out (FIFO) buffer is applied in the data transmission path to guarantee synchronous, real-time output of the system's multiple image streams. Building on low-voltage differential signaling (LVDS) and adopting the Camera Link interface protocol, a low-cost, high-speed, stable, and simple binocular complementary metal-oxide-semiconductor active pixel sensor (CMOS APS) imaging system is designed, and its basic composition and working principles are discussed in depth. A field-programmable gate array is used to complete a top-down modular design; parts of the system's operating timing were simulated and then debugged on the board. The results show that the design meets the system's wide-field requirement and lays a solid foundation for subsequent image acquisition work.
Keywords: Camera Link interface, binocular, CMOS APS image sensor, field-programmable gate array
11. Modeling and Rectification of Rolling Shutter Effect in CMOS Aerial Cameras
Authors: Lei Wan, Ye Zhang, Ping Jia, Jiajia Xu. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2017, Issue 4, pp. 71-77 (7 pages).
Due to the electronic rolling shutter, high-speed complementary metal-oxide-semiconductor (CMOS) aerial cameras are generally subject to geometric distortions, which cannot be perfectly corrected by conventional vision-based algorithms. In this paper we propose a novel approach to address the problem of rolling shutter distortion in aerial imaging. A mathematical model is established by the coordinate transformation method; it can directly calculate the pixel distortion when an aerial camera images at arbitrary gesture angles. All pixel distortions then form a distortion map over the whole CMOS array, and the map is exploited in an image rectification process incorporating reverse projection. The error analysis indicates that, within the margin of measuring errors, the final calculation error of our model is less than 1/2 pixel. The experimental results show that our approach yields good rectification performance on a series of images with different distortions. We demonstrate that our method outperforms other vision-based algorithms in terms of computational complexity, which makes it more suitable for aerial real-time imaging.
Keywords: aerial camera, CMOS sensor, rolling shutter effect, coordinate transformation, image rectification
12. Energy Efficient Content Based Image Retrieval in Sensor Networks
Authors: Qurban A. Memon, Hend Alqamzi. International Journal of Communications, Network and System Sciences, 2012, Issue 7, pp. 405-415 (11 pages).
The presence of increased memory and computational power in imaging sensor networks attracts researchers to exploit image processing algorithms on that distributed memory and computational power. In this paper, a typical perimeter is investigated, with a number of sensors placed to form an image sensor network for the purpose of content-based distributed image search. An image search algorithm enables content-based image search within each sensor node. An energy model is presented to calculate energy efficiency for various cases of image search and transmission. The simulations consider either continuous monitoring or event-driven activity on the perimeter, with distributed image processing on the sensor nodes. The results show that energy saving is significant when search algorithms are embedded in image sensor nodes and image processing is distributed across them. The tradeoff between sensor lifetime, distributed image search, and network deployment cost is also investigated.
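The intuition behind such an energy model can be sketched by comparing two strategies: transmitting every captured image to a sink versus searching locally on the node and transmitting only matches. The per-bit radio energy and per-image search cost below are illustrative assumptions, not figures from the paper:

```python
def transmit_energy_j(bits, e_bit_j=50e-9):
    """Radio energy for sending `bits`, at an assumed 50 nJ/bit."""
    return bits * e_bit_j

def local_search_energy_j(n_images, image_bits, match_rate, e_search_j):
    """Local search: per-image CPU cost plus radio for the matches only."""
    radio = transmit_energy_j(n_images * match_rate * image_bits)
    return n_images * e_search_j + radio

image_bits = 8 * 100_000                       # a 100 kB image
send_all = transmit_energy_j(1000 * image_bits)
local = local_search_energy_j(1000, image_bits,
                              match_rate=0.05,  # 5% of images are relevant
                              e_search_j=5e-3)  # assumed 5 mJ per local search
print(send_all, local)  # 40.0 J vs 7.0 J in favor of in-node search
```

The break-even point depends on the match rate and the CPU/radio energy ratio, which is exactly the tradeoff the paper's energy model quantifies.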
Keywords: image sensor networks, image identification in sensor networks, camera sensor networks, distributed image search
13. Research on a New-style Visual Sensor for Measurement
Authors: YU Rong, TAN Yuegang. 《武汉理工大学学报》 (Journal of Wuhan University of Technology) (CAS, CSCD, Peking University Core), 2006, Issue S3, pp. 1003-1006 (4 pages).
First, the constitution of a traditional visual sensor is presented. The linear camera model is introduced, and the transform matrix between the image coordinate system and the world coordinate system is established; the basic principle of camera calibration is expounded based on the linear camera model. On the basis of a detailed analysis of the camera model, a new-style visual sensor for measurement is proposed. It can control the zoom of the camera lens in real time via a step motor according to the size of the object. Moreover, re-calibration can be avoided, since the transform matrix can be obtained by calculation, which greatly simplifies the camera calibration process and saves time. Clearer images are obtained, so the precision of the measurement system can be greatly improved. The basic structure of the visual sensor's zoom is introduced, including the constitution and movement rules of the fixed front group, zoom group, compensating group, and fixed rear group, along with the method of realizing step-motor-controlled zoom. Finally, the constitution of the new-style visual sensor is introduced, covering hardware and software. The hardware system comprises a manual zoom lens, CCD camera, image acquisition card, gearing, step motor, step motor driver, and computer. The software is described in terms of its modules and the workflow of the measurement system, presented as a structured block diagram.
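The linear (pinhole) camera model mentioned above maps a world point to pixel coordinates through the transform s·[u, v, 1]ᵀ = K·[R | t]·[X, Y, Z, 1]ᵀ. A minimal sketch, with illustrative matrix values:

```python
import numpy as np

def project(point_w, K, R, t):
    """Linear camera model: world point -> pixel via s*[u,v,1]^T = K[R|t]X."""
    p = K @ (R @ point_w + t)
    return p[:2] / p[2]            # divide out the projective scale s

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])
u, v = project(np.array([0.4, -0.2, 0.0]), K, R, t)
print(u, v)  # 740.0 310.0
```

Calibration is the inverse problem: estimating K, R, and t from known world-image point pairs. The paper's point is that when only the focal length in K changes with zoom, the new transform matrix can be computed rather than re-measured.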
Keywords: camera calibration, zoom, visual sensor
14. Trifocal Tensor Based Side Information Generation for Multi-View Distributed Video Coding
Authors: Lin Xin, Liu Haitao, Wei Jianming. High Technology Letters (EI, CAS), 2010, Issue 3, pp. 268-273 (6 pages).
Distributed video coding (DVC) is a new video coding approach based on the Wyner-Ziv theorem. This novel uplink-friendly paradigm, which offers low-complexity, low-power, low-cost video encoding, has aroused growing research interest. In this paper a new method based on multiple-view geometry is presented for spatial side-information generation in an uncalibrated video sensor network. The trifocal tensor encapsulates all the geometric relations among three views that are independent of scene structure; it can be computed from image correspondences alone, without knowledge of the motion or calibration. Simulation results show that trifocal-tensor-based spatial side information improves the rate-distortion performance over motion-compensated-interpolation side information by up to around 2 dB. Fusion then merges the different side information (temporal and spatial) to improve the quality of the final side information, and simulation results show a further rate-distortion gain of about 0.4 dB.
Keywords: multi-view distributed video coding (DVC), camera sensor networks, trifocal tensor, side information
15. Co-axial depth sensor with an extended depth range for AR/VR applications (cited: 1)
Authors: Mohan XU, Hong HUA. Virtual Reality & Intelligent Hardware, 2020, Issue 1, pp. 1-11 (11 pages).
Background: A depth sensor is an essential element in virtual and augmented reality devices for digitizing the user's environment in real time. The currently popular technologies include stereo, structured light, and Time-of-Flight (ToF). The stereo and structured-light methods require a baseline separation between multiple sensors for depth sensing, and both suffer from a limited measurement range. ToF depth sensors have the largest depth range but the lowest depth-map resolution. To overcome these problems, we propose a co-axial depth map sensor which is potentially more compact and cost-effective than conventional structured-light depth cameras. It can extend the depth range while maintaining a high depth-map resolution, and it provides a high-resolution 2D image along with the 3D depth map. Methods: The depth sensor is constructed from a projection path and an imaging path, combined by a beamsplitter for a co-axial design. In the projection path, a cylindrical lens adds extra optical power in one direction, creating an astigmatic pattern. For depth measurement, the astigmatic pattern is projected onto the test scene, and the depth information is calculated from the contrast change of the reflected pattern image in two orthogonal directions. To extend the depth measurement range, an electronically focus-tunable lens at the system stop is tuned to implement an extended depth range without compromising depth resolution. Results: In the depth measurement simulation, we project a resolution target onto a white screen moving along the optical axis and tune the focus-tunable-lens power for three depth measurement sub-ranges: near, middle, and far. In each sub-range, as the test screen moves away from the depth sensor, the horizontal contrast keeps increasing while the vertical contrast keeps decreasing in the reflected image; the depth information can therefore be obtained by computing the contrast ratio between features in orthogonal directions. Conclusions: The proposed depth map sensor implements depth measurement over an extended depth range with a co-axial design.
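The contrast-ratio cue described above can be sketched with Michelson contrast computed along each axis of a small image patch; the depth code is then the horizontal-to-vertical contrast ratio. The patch values below are purely illustrative, not from the paper's simulation:

```python
import numpy as np

def michelson_contrast(profile):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) of a 1D profile."""
    hi, lo = float(np.max(profile)), float(np.min(profile))
    return (hi - lo) / (hi + lo)

def contrast_ratio(patch):
    """Horizontal vs vertical contrast: the astigmatic depth cue."""
    horizontal = michelson_contrast(patch.mean(axis=0))  # column means
    vertical = michelson_contrast(patch.mean(axis=1))    # row means
    return horizontal / vertical

# Strong left-right variation, weak top-bottom variation:
patch = np.array([[10.0, 90.0, 10.0, 90.0],
                  [14.0, 94.0, 14.0, 94.0],
                  [10.0, 90.0, 10.0, 90.0],
                  [14.0, 94.0, 14.0, 94.0]])
print(round(contrast_ratio(patch), 2))  # ~20.0: horizontal detail dominates
```

Because the astigmatic projection blurs the two orientations differently as a function of distance, this ratio varies monotonically with depth within each sub-range and can be inverted into a depth estimate.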
Keywords: depth map sensor, 3D camera, controlled aberration
16. Automatic Miscalibration Detection and Correction of LiDAR and Camera Using Motion Cues
Authors: Pai Peng, Dawei Pi, Guodong Yin, Yan Wang, Liwei Xu, Jiwei Feng. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 318-329 (12 pages).
This paper aims to develop an automatic miscalibration detection and correction framework to maintain accurate calibration of LiDAR and camera for an autonomous vehicle after sensor drift. First, a monitoring algorithm is designed that can continuously detect miscalibration in each frame, leveraging the rotational motion each individual sensor observes. Then, as sensor drift occurs, the projection constraints between visual feature points and LiDAR 3D points are used to compute the scaled camera motion, which is further utilized to align the drifted LiDAR scan with the camera image. Finally, the proposed method is thoroughly compared with two representative approaches in online experiments with varying levels of random drift, and is further extended to an offline calibration experiment, where it is demonstrated by comparison with two existing benchmark methods.
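The per-frame monitoring idea, comparing the rotation each sensor observes, can be sketched by measuring the angle of the relative rotation between the two sensors' incremental motions; a large angle flags drift. The threshold and rotation matrices below are illustrative, not the paper's algorithm:

```python
import numpy as np

def rotation_angle_deg(Ra, Rb):
    """Angle of the relative rotation Ra^T @ Rb, in degrees."""
    Rrel = Ra.T @ Rb
    cos_theta = np.clip((np.trace(Rrel) - 1) / 2, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

def is_miscalibrated(R_cam, R_lidar, threshold_deg=1.0):
    """Flag drift when the two sensors disagree on the frame's rotation."""
    return rotation_angle_deg(R_cam, R_lidar) > threshold_deg

def rot_z(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

print(is_miscalibrated(rot_z(5.0), rot_z(5.0)))  # False: rotations agree
print(is_miscalibrated(rot_z(5.0), rot_z(8.0)))  # True: 3 degrees apart
```

Rotation is a convenient cue because, unlike translation, it is observable at the same scale by both a camera (up to no scale ambiguity) and a LiDAR, so disagreement directly implicates the extrinsic calibration.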
Keywords: autonomous vehicle, LiDAR and camera, miscalibration detection and correction, sensor drift
17. In-Motes EYE: A Real Time Application for Automobiles in Wireless Sensor Networks
Authors: Dimitrios Georgoulas, Keith Blow. Wireless Sensor Network, 2011, Issue 5, pp. 158-166 (9 pages).
Wireless sensor networks have been identified as one of the key technologies for the 21st century. To overcome their limitations, such as fault tolerance and energy conservation, we propose a middleware solution, In-Motes. In-Motes is a fault-tolerant platform for deploying and monitoring applications in real time; it offers the end user a number of possibilities while giving them the freedom to experiment with various parameters, so that deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through the In-Motes EYE application, which tests its merits under real-time conditions. In-Motes EYE is an agent-based, real-time In-Motes application developed for sensing acceleration variations in an environment. The application was tested in a prototype road-like area for a period of four months.
Keywords: wireless sensor networks, middleware, mobile agents, In-Motes, acceleration measurements, traffic cameras
18. Evaluation of Skills in Swing Technique in Classical Japanese Swordsmanship in Iaido Using Sensors
Authors: Satoru Okamoto, Satomi Adachi. Journal of Mechanics Engineering and Automation, 2016, Issue 4, pp. 190-196 (7 pages).
In this study, we analyzed the swing motions of a more experienced practitioner and a new practitioner of iaido using a tri-axial acceleration sensor and a gyro sensor. Iaido is a modern Japanese martial art/sport. The acceleration and gyro sensor measurements enabled detailed motion information at the installation points to be displayed in a short time, making it possible to extract objective problems easily. Although the acceleration and angular-velocity measurements could not capture the detailed whole-body motion obtained by 2D motion analysis with a high-speed camera, it was confirmed that the acceleration and gyro sensors constitute an evaluation means that can be installed easily and can provide exercise information in a short time as an objective index.
Keywords: skill science, classical Japanese swordsmanship in iaido, acceleration sensor, high-speed video camera, gyro sensor
19. Towards Autonomous Vehicles with Advanced Sensor Solutions
Authors: Matti Kutila, Pasi Pyykonen, Aarno Lybeck, Pirita Niemi, Erik Nordin. World Journal of Engineering and Technology, 2015, Issue 3, pp. 6-17 (12 pages).
Professional truck drivers are an essential part of transportation, keeping the global economy alive and commercial products moving. In order to increase productivity and improve safety, an increasing amount of automation is implemented in modern trucks. The transition to automated heavy goods vehicles is intended to make trucks accident-free and, on the other hand, more comfortable to drive. This motivates the automotive industry to bring more embedded ICT into their vehicles in the future. The avenue towards autonomous vehicles requires robust environmental perception and driver monitoring technologies, which is the main motivation behind the DESERVE project. This study trials sensor technology to minimize blind spots around the truck while keeping the driver's vigilance at a sufficiently high level. The outcomes are two innovative truck demonstrations: one R&D study aimed at bringing equipment to production in the future, and one implementation in a driver-training vehicle. The experiments include driver monitoring technology that works at a 60%-80% accuracy level and environment perception (stereo and thermal cameras) with performance rates of 70%-100%. These results are not yet sufficient for autonomous vehicles, but they are a step forward, since they hold up even when moved from the lab to real automotive implementations.
Keywords: autonomous driving, camera, driver monitoring, environment perception, automated vehicle, sensor, laser scanner, truck, radar, data fusion
20. 3D Reconstruction from Multi-View Camera Sensors and Multi-Modal Feature Fusion (cited: 2)
Authors: WEI Yanli, ZHANG Suzhi. 《传感器与微系统》 (Transducer and Microsystem Technologies) (Peking University Core), 2025, Issue 11, pp. 101-105 (5 pages).
Current three-dimensional (3D) reconstruction models capture detail insufficiently and adapt poorly to multi-view changes. To address this, a 3D reconstruction model combining multi-view camera sensors with multi-modal feature fusion is proposed. Using multi-view camera-sensor data, it effectively improves the reconstruction quality of complex scenes through multi-modal feature fusion and a global-local attention mechanism. In addition, color, depth, and semantic multi-modal features are extracted from the multi-view two-dimensional (2D) images, and key regions are captured accurately by dynamically adjusting the importance of the features. Experimental results show that the model's 3D reconstruction outperforms mainstream models on the Local Light Field Fusion (LLFF) and Technical University of Denmark (DTU) datasets, reaching peak signal-to-noise ratio (PSNR) scores of 20.01, 23.56, and 24.58 with 3-view, 6-view, and 9-view inputs, respectively. The model shows strong robustness under complex scenes and multi-view changes, validating its effectiveness and reliability.
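The PSNR figures reported above follow the standard definition PSNR = 10·log10(MAX² / MSE). A minimal sketch for 8-bit images (the noise level is illustrative):

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - reconstructed) ** 2)
    if mse == 0:
        return float("inf")        # identical images
    return float(10 * np.log10(max_val ** 2 / mse))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
noisy = ref + rng.normal(0, 5, size=ref.shape)   # Gaussian noise, sigma = 5
print(round(psnr(ref, noisy), 1))                # roughly 34 dB
```

Higher is better: a rendering whose PSNR rises from 20.01 to 24.58 as views are added has roughly 2.9x lower mean squared error against the ground-truth images.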
Keywords: multi-view camera sensors, multi-modal feature fusion, 3D reconstruction