Journal Articles
821 articles found
Thin-film event-based vision sensors for enhanced multispectral perception beyond human vision
1
Authors: Kexin Li, Xiaoting Wang, Yi Wu, Wenjie Deng, Jing Li, Jingjie Li, Yuehui Zhao, Zhijie Chen, Dezhen Yang, Songlin Yu, Yongzhe Zhang. 《InfoMat》, 2025, Issue 7, pp. 104-114 (11 pages)
Dynamic detection is crucial for intelligent vision systems, enabling applications like autonomous vehicles and advanced surveillance. Event-based sensors, which convert illumination variations into sparse event spikes, are highly effective for dynamic detection with low data redundancy. However, current event-based vision sensors with simplified photosensitive capacitor structures face limitations, particularly in their spectral response, which hinders effective information acquisition in multispectral scenes. Here, we introduce a two-terminal thin-film event-based vision sensor that innovatively integrates an inorganic oxide p-n junction with the pyro-phototronic effect, synergistically combining the photovoltaic and pyroelectric mechanisms. This innovation enables spiking signals with a tenfold increase in responsivity, a dynamic range of 110 dB, and an extended spectral response from ultraviolet (UV) to near-infrared (NIR). With a thin-film sensor array, these spiking signals accurately extract fingerprint edge features even under low-light conditions, benefiting from high sensitivity to minor luminance variations. Additionally, the sensors' broadband spiking response captures richer information, achieving 99.25% accuracy in multispectral dynamic gesture recognition while reducing data processing by over 65%. This approach effectively eliminates redundant data while minimizing information loss, offering a promising alternative to current dynamic perception technologies.
Keywords: event-based vision, in-sensor processing, motion detection, multispectral perception
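The event-generation principle this abstract relies on (a pixel fires a spike only when its log-illumination changes by more than a contrast threshold) can be sketched as a minimal simulation; the frame-based input, threshold value, and event-tuple layout are illustrative assumptions, not the paper's device mechanism:

```python
import math

def generate_events(frames, threshold=0.2):
    """Emit (t, x, y, polarity) events when a pixel's log-intensity
    changes by more than `threshold` since its last event."""
    h, w = len(frames[0]), len(frames[0][0])
    # reference log-intensity per pixel, taken from the first frame
    ref = [[math.log(frames[0][y][x] + 1e-6) for x in range(w)] for y in range(h)]
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        for y in range(h):
            for x in range(w):
                logi = math.log(frame[y][x] + 1e-6)
                delta = logi - ref[y][x]
                if abs(delta) >= threshold:
                    events.append((t, x, y, 1 if delta > 0 else -1))
                    ref[y][x] = logi  # reset the reference after firing
    return events
```

Static pixels produce no events at all, which is the source of the low data redundancy the abstract emphasizes.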
NeuroBiometric: An Eye Blink Based Biometric Authentication System Using an Event-Based Neuromorphic Vision Sensor (cited 4 times)
2
Authors: Guang Chen, Fa Wang, Xiaoding Yuan, Zhijun Li, Zichen Liang, Alois Knoll. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE, EI, CSCD), 2021, Issue 1, pp. 206-218 (13 pages)
The rise of the Internet and identity authentication systems has brought convenience to people's lives but has also introduced the potential risk of privacy leaks. Existing biometric authentication systems based on explicit and static features bear the risk of being attacked by mimicked data. This work proposes a highly efficient biometric authentication system based on transient eye blink signals that are precisely captured by a neuromorphic vision sensor with microsecond-level temporal resolution. The neuromorphic vision sensor only transmits the local pixel-level changes induced by the eye blinks when they occur, which leads to advantageous characteristics such as an ultra-low latency response. We first propose a set of effective biometric features describing the motion, speed, energy and frequency signal of eye blinks based on the microsecond temporal resolution of event densities. We then train the ensemble model and non-ensemble model with our NeuroBiometric dataset for biometric authentication. The experiments show that our system is able to identify and verify the subjects with the ensemble model at an accuracy of 0.948 and with the non-ensemble model at an accuracy of 0.925. The low false positive rates (about 0.002) and the highly dynamic features are not only hard to reproduce but also avoid recording visible characteristics of a user's appearance. The proposed system sheds light on a new path towards safer authentication using neuromorphic vision sensors.
Keywords: biometrics, biometric authentication, event-based vision, neuromorphic vision
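The blink features described above are derived from event densities over time. A toy version of that computation can be sketched as follows; the bin width and the three summary features (peak density, total count, duration) are hypothetical simplifications of the paper's feature set:

```python
def event_density_features(timestamps_us, bin_us=1000):
    """Bin microsecond event timestamps and derive simple blink
    features: peak event density, total count, and duration."""
    if not timestamps_us:
        return {"peak": 0, "total": 0, "duration_us": 0}
    t0 = min(timestamps_us)
    counts = {}
    for t in timestamps_us:
        b = (t - t0) // bin_us
        counts[b] = counts.get(b, 0) + 1
    densities = [counts.get(b, 0) for b in range(max(counts) + 1)]
    active = [i for i, c in enumerate(densities) if c > 0]
    return {
        "peak": max(densities),                              # burst intensity
        "total": sum(densities),                             # overall energy proxy
        "duration_us": (active[-1] - active[0] + 1) * bin_us # blink span
    }
```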
Ultrathin Gallium Nitride Quantum-Disk-in-Nanowire-Enabled Reconfigurable Bioinspired Sensor for High-Accuracy Human Action Recognition
3
Authors: Zhixiang Gao, Xin Ju, Huabin Yu, Wei Chen, Xin Liu, Yuanmin Luo, Yang Kang, Dongyang Luo, JiKai Yao, Wengang Gu, Muhammad Hunain Memon, Yong Yan, Haiding Sun. 《Nano-Micro Letters》, 2026, Issue 2, pp. 439-453 (15 pages)
Human action recognition (HAR) is crucial for the development of efficient computer vision, where bioinspired neuromorphic perception visual systems have emerged as a vital solution to address transmission bottlenecks across sensor-processor interfaces. However, the absence of interactions among versatile biomimicking functionalities within a single device, which was developed for specific vision tasks, restricts the computational capacity, practicality, and scalability of in-sensor vision computing. Here, we propose a bioinspired vision sensor composed of a GaN/AlN-based ultrathin quantum-disks-in-nanowires (QD-NWs) array to mimic not only Parvo cells for high-contrast vision and Magno cells for dynamic vision in the human retina but also the synergistic activity between the two cells for in-sensor vision computing. By simply tuning the applied bias voltage on each QD-NW-array-based pixel, we achieve two biosimilar photoresponse characteristics with slow and fast reactions to light stimuli that enhance the in-sensor image quality and HAR efficiency, respectively. Strikingly, the interplay and synergistic interaction of the two photoresponse modes within a single device markedly increased the HAR recognition accuracy from 51.4% to 81.4% owing to the integrated artificial vision system. The demonstration of an intelligent vision sensor offers a promising device platform for the development of highly efficient HAR systems and future smart optoelectronics.
Keywords: GaN nanowire, quantum-confined Stark effect, voltage-tunable photoresponse, bioinspired sensor, artificial vision system
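The slow (Parvo-like) and fast (Magno-like) photoresponse modes can be caricatured with a first-order low-pass response whose time constant is the tuning knob. This is only a behavioral analogy for the voltage-tunable QD-NW response, not the device physics:

```python
def photoresponse(light, tau, dt=1.0):
    """First-order (RC-like) photoresponse: a small tau reacts fast
    (Magno-like, dynamic vision); a large tau integrates slowly
    (Parvo-like, high-contrast vision)."""
    r, out = 0.0, []
    for L in light:
        r += (dt / tau) * (L - r)  # relax toward the current light level
        out.append(r)
    return out
```

With the same light step, the fast mode saturates almost immediately while the slow mode climbs gradually, mirroring the two reaction speeds described in the abstract.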
Neuromorphic vision sensors: Principle, progress and perspectives (cited 9 times)
4
Authors: Fuyou Liao, Feichi Zhou, Yang Chai. 《Journal of Semiconductors》 (EI, CAS, CSCD), 2021, Issue 1, pp. 112-121 (10 pages)
Conventional frame-based image sensors suffer greatly from high energy consumption and latency. Mimicking neurobiological structures and functionalities of the retina provides a promising way to build a neuromorphic vision sensor with highly efficient image processing. In this review article, we will start with a brief introduction to explain the working mechanism and the challenges of conventional frame-based image sensors, and introduce the structure and functions of the biological retina. In the main section, we will overview recent developments in neuromorphic vision sensors, including the silicon retina based on conventional Si CMOS digital technologies, and the neuromorphic vision sensors with the implementation of emerging devices. Finally, we will provide a brief outline of the prospects and outlook for the development of this field.
Keywords: image sensors, silicon retina, neuromorphic vision sensors, photonic synapses
Progress of Materials and Devices for Neuromorphic Vision Sensors (cited 10 times)
5
Authors: Sung Woon Cho, Chanho Jo, Yong-Hoon Kim, Sung Kyu Park. 《Nano-Micro Letters》 (SCIE, EI, CAS, CSCD), 2022, Issue 12, pp. 239-271 (33 pages)
The latest developments in bio-inspired neuromorphic vision sensors can be summarized in 3 keywords: smaller, faster, and smarter. (1) Smaller: Devices are becoming more compact by integrating previously separated components such as sensors, memory, and processing units. As a prime example, the transition from traditional sensory vision computing to in-sensor vision computing has shown clear benefits, such as simpler circuitry, lower power consumption, and less data redundancy. (2) Swifter: Owing to the nature of physics, smaller and more integrated devices can detect, process, and react to input more quickly. In addition, the methods for sensing and processing optical information using various materials (such as oxide semiconductors) are evolving. (3) Smarter: Owing to these two main research directions, we can expect advanced applications such as adaptive vision sensors, collision sensors, and nociceptive sensors. This review mainly focuses on the recent progress, working mechanisms, image pre-processing techniques, and advanced features of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
Keywords: in-sensor computing, near-sensor computing, neuromorphic vision sensor, optoelectronic synaptic circuit, optoelectronic synapse
Collaborative positioning for swarms: A brief survey of vision, LiDAR and wireless sensors based methods (cited 2 times)
6
Authors: Zeyu Li, Changhui Jiang, Xiaobo Gu, Ying Xu, Feng Zhou, Jianhui Cui. 《Defence Technology (防务技术)》 (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 475-493 (19 pages)
As positioning sensors, edge computation power, and communication technologies continue to develop, a moving agent can now sense its surroundings and communicate with other agents. By receiving spatial information from both its environment and other agents, an agent can use various methods and sensor types to localize itself. With its high flexibility and robustness, collaborative positioning has become a widely used method in both military and civilian applications. This paper introduces the basic fundamental concepts and applications of collaborative positioning, and reviews recent progress in the field based on camera, LiDAR (Light Detection and Ranging), wireless sensor, and their integration. The paper compares the current methods with respect to their sensor type, summarizes their main paradigms, and analyzes their evaluation experiments. Finally, the paper discusses the main challenges and open issues that require further research.
Keywords: collaborative positioning, vision, LiDAR, wireless sensors, sensor fusion
Recent advances in imaging devices:image sensors and neuromorphic vision sensors
7
Authors: Wen-Qiang Wu, Chun-Feng Wang, Su-Ting Han, Cao-Feng Pan. 《Rare Metals》 (SCIE, EI, CAS, CSCD), 2024, Issue 11, pp. 5487-5515 (29 pages)
Remarkable developments in image recognition technology trigger demands for more advanced imaging devices. In recent years, traditional image sensors, as the go-to imaging devices, have made substantial progress in their optoelectronic characteristics and functionality. Moreover, a new breed of imaging device with information processing capability, known as neuromorphic vision sensors, is developed by mimicking biological vision. In this review, we delve into the recent progress of imaging devices, specifically image sensors and neuromorphic vision sensors. This review starts by introducing their core components, namely photodetectors and photonic synapses, while placing a strong emphasis on device structures, working mechanisms and key performance parameters. Then it proceeds to summarize the noteworthy achievements in both image sensors and neuromorphic vision sensors, including advancements in large-scale and high-resolution imaging, filter-free multispectral recognition, polarization sensitivity, flexibility, hemispherical designs, and self-power supply of image sensors, as well as in neuromorphic imaging and data processing, environmental adaptation, and ultra-low power consumption of neuromorphic vision sensors. Finally, the challenges and prospects that lie ahead in the ongoing development of imaging devices are addressed.
Keywords: imaging devices, photodetectors, photonic synapses, image sensors, neuromorphic vision sensors
An Embedded Computer Vision Approach to Environment Modeling and Local Path Planning in Autonomous Mobile Robots
8
Authors: Rıdvan Yayla, Hakan Üçgün, Onur Ali Korkmaz. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 12, pp. 4055-4087 (33 pages)
Recent advancements in autonomous vehicle technologies are transforming intelligent transportation systems. Artificial intelligence enables real-time sensing, decision-making, and control on embedded platforms with improved efficiency. This study presents the design and implementation of an autonomous radio-controlled (RC) vehicle prototype capable of lane line detection, obstacle avoidance, and navigation through dynamic path planning. The system integrates image processing and ultrasonic sensing, utilizing Raspberry Pi for vision-based tasks and Arduino Nano for real-time control. Lane line detection is achieved through conventional image processing techniques, providing the basis for local path generation, while traffic sign classification employs a You Only Look Once (YOLO) model optimized with TensorFlow Lite to support navigation decisions. Images captured by the onboard camera are processed on the Raspberry Pi to extract lane geometry and calculate steering angles, enabling the vehicle to follow the planned path. In addition, ultrasonic sensors placed in three directions at the front of the vehicle detect obstacles and allow real-time path adjustment for safe navigation. Experimental results demonstrate stable performance under controlled conditions, highlighting the system's potential for scalable autonomous driving applications. This work confirms that deep learning methods can be efficiently deployed on low-power embedded systems, offering a practical framework for navigation, path planning, and intelligent transportation research.
Keywords: embedded vision system, mobile robot navigation, lane detection, sensor fusion, deep learning on embedded systems, real-time path planning
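The step from extracted lane geometry to a steering angle can be sketched with simple image geometry; the bottom-center reference point and the two lane-line abscissae are hypothetical inputs, and the paper's exact steering computation is not reproduced here:

```python
import math

def steering_angle_deg(left_x, right_x, frame_width, frame_height):
    """Steer toward the lane midpoint: the angle between the image's
    vertical axis and the line from the bottom-center of the frame to
    the midpoint of the detected lane lines."""
    mid_x = (left_x + right_x) / 2.0          # lane center in pixels
    offset = mid_x - frame_width / 2.0        # lateral offset from camera axis
    return math.degrees(math.atan2(offset, frame_height))
```

A centered lane yields 0°; a lane center to the right of the camera axis yields a positive (rightward) steering angle.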
Decoupled Calibration of Multi-Line Structured-Light Vision Sensors Based on a Genetic Algorithm
9
Authors: 傅龙天, 许振宇, 陈钦, Ruel REYES. 《传感技术学报》 (PKU Core), 2026, Issue 1, pp. 147-152 (6 pages)
Multi-line structured-light vision sensors are susceptible to environmental interference, which leads to inaccurate depth estimation and failure to acquire valid three-dimensional information. To address this, a decoupled calibration method based on a genetic algorithm is studied. The method first corrects image point coordinates with a first-order radial distortion model. On this basis, using a one-dimensional target moved three or more times, two line-structured-light vision sensors with no common field of view are associated, and the transformation matrix in the global coordinate system and the 3D coordinates of the intersection points are solved by combining the definition and invariance of the cross-ratio. An objective function is then constructed to minimize both the distance between the reprojected and actual coordinates of the moved target's feature points and the distance between the moved intersection points and the light plane; a multi-population genetic algorithm searches for the optimum of this objective, achieving decoupled calibration of the multi-line structured-light vision sensor. Experimental results show that the Pearson correlation coefficient obtained with the proposed method is closer to 1, indicating that decoupled calibration is completed with good accuracy.
Keywords: optical vision sensor, decoupled calibration, multi-population genetic algorithm, multi-line structure, one-dimensional target
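The first-order radial distortion correction named in the abstract maps a distorted image point back to its undistorted position. One common way to invert the one-term model x_d = x_u(1 + k1·r²) is fixed-point iteration; the coefficient value used in the example is illustrative, not from the paper:

```python
def undistort_point(xd, yd, k1, iters=10):
    """Invert the one-term radial distortion model
    x_d = x_u * (1 + k1 * r_u^2) by fixed-point iteration
    on the undistorted (normalized) coordinates."""
    xu, yu = xd, yd                      # initial guess: distorted point
    for _ in range(iters):
        r2 = xu * xu + yu * yu           # current radius estimate
        factor = 1.0 + k1 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu
```

For moderate k1 the iteration contracts quickly; a forward-distort / invert round trip recovers the original point to high precision.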
3D Collision-Warning Target Recognition for Intelligent Vehicles with Multi-Sensor Information Processing
10
Authors: 辛光红, 林甄, 邢洁洁. 《传感技术学报》 (PKU Core), 2026, Issue 1, pp. 126-131 (6 pages)
To improve road driving safety, a 3D collision-warning target recognition method for intelligent vehicles is designed around two sensor types: a vision sensor and an inertial measurement sensor. A visible-light binocular camera serves as the vision sensor to capture 3D images; the images are corrected with radial and tangential distortion factors to provide high-resolution road-environment data. On this basis, stereo matching yields the minimum binocular disparity, from which the distances between the vehicle and other vehicles, pedestrians, or obstacles on the road are determined. Because complex and changeable urban road conditions affect real-time dynamic vehicle control and navigation, an inertial measurement sensor measures the vehicle's acceleration and angular velocity, providing motion-state and attitude information. By fusing the data from both sensors, the method judges whether a potential collision risk exists and provides a timely warning. Experimental results show that with the warning distance set to 0.8 m, the proposed method accurately identifies targets within 1 s, achieving effective 3D collision warning.
Keywords: multi-sensor, collision warning, information fusion, vision sensor, inertial measurement sensor, environment perception, vehicle motion state
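The disparity-to-distance step above rests on the pinhole stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity. A minimal sketch (the parameter values in the example are illustrative):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo: Z = f * B / d.
    Distance grows as disparity shrinks, so the *minimum* disparity
    in a region corresponds to the farthest matched point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```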
Research Progress on High-Speed Vision Chips
11
Authors: 王哲, 杨旭, 吕卓阳, 丁伯文, 于双铭, 窦润江, 石匆, 刘剑, 吴南健, 冯鹏, 刘力源. 《物理学报》 (PKU Core), 2026, Issue 4, pp. 21-42 (22 pages)
In edge-computing scenarios, the response speed, size, and power consumption of visual perception systems have become core challenges. Conventional vision systems that separate sensing from computing suffer from high latency, high power consumption, and privacy leakage caused by data transfer, problems that urgently need solving. Against this background, vision chips that mimic the human visual system have become an effective solution: they integrate image acquisition with information processing, realizing a collaborative sense-compute mechanism that efficiently completes visual perception and computing tasks at the edge. This paper systematically reviews progress along the technical path of high-speed vision chips at three levels: high-speed sensing devices, readout circuits, and intelligent processing. It analyzes the physical mechanisms, structural innovations, and performance bottlenecks of complementary metal-oxide-semiconductor (CMOS) image sensors, dynamic vision sensors, and single-photon image sensors in achieving high-speed photoelectric conversion; discusses readout circuit architectures such as high-speed analog-to-digital conversion, address-event representation, and time-correlated single-photon counting, together with their efficiency-optimization strategies; and introduces frontier intelligent processing algorithms such as spike-based high-speed image reconstruction and spiking neural network processing. Finally, future development trends of high-speed vision chips are discussed.
Keywords: high-speed vision chip, CMOS image sensor, spike-based image sensor, high-speed spike processing
Research on LIF Neurons and Spike-Timing Dependence of Synapses in Spiking Neural Networks
12
Authors: 周运, 应骏, 王子健. 《微电子学与计算机》, 2026, Issue 1, pp. 32-43 (12 pages)
To address the poor learning stability and uniform weight distributions that spiking neural networks exhibit in complex feature-learning and classification tasks, an adaptive LIF neuron model is proposed and combined with a newly designed adjustable multiplicative STDP rule to build an efficient spiking neural network architecture. An exponential mapping of the pre-synaptic trace and a multiplicative modulation mechanism improve the LIF neuron's response speed to input spikes and the network's adaptability to complex signals. Meanwhile, the proposed STDP rule combines a normalized pre-synaptic trace with a Sigmoid function, balancing synaptic-weight adaptability and stability and significantly improving learning efficiency and model stability. Experimental results show that on real-world route-map texture and rotating-disc sequence datasets captured by a dynamic vision sensor, the method accurately recognizes features of different directions and polarities. On the MNIST handwritten-digit classification dataset, the improved model achieves 98.7% accuracy, verifying the method's effectiveness and robustness.
Keywords: spiking neural network, LIF neuron, spike-timing-dependent plasticity, dynamic vision sensor
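The two ingredients named in the abstract, a LIF neuron and an exponentially decaying pre-synaptic trace for STDP, can be sketched in their textbook forms. The adaptive and multiplicative modifications proposed in the paper are not reproduced, and all parameters are illustrative:

```python
import math

def lif_simulate(input_current, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: dv/dt = (-v + I) / tau; emit a spike
    and reset when v crosses v_th. Returns (voltage trace, spike times)."""
    v, trace, spikes = 0.0, [], []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (-v + i_in)   # leak toward 0, charge toward I
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return trace, spikes

def pre_trace(spike_times, n_steps, tau_pre=20.0):
    """Exponentially decaying pre-synaptic trace as used by STDP rules:
    decays each step, jumps by 1 on each pre-synaptic spike."""
    x, out = 0.0, []
    for t in range(n_steps):
        x *= math.exp(-1.0 / tau_pre)
        if t in spike_times:
            x += 1.0
        out.append(x)
    return out
```

With a constant suprathreshold input, the LIF neuron fires periodically; the trace gives an STDP rule a memory of how recently the pre-synaptic neuron fired.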
A Millimeter-Wave Radar and Camera Fusion Method for Vehicle Target Detection
13
Authors: 王建宇, 马小龙, 刘康, 胡冰楠. 《计量学报》 (PKU Core), 2026, Issue 2, pp. 239-250 (12 pages)
To address the poor recognition of single sensors in vehicle target detection, and the association errors between targets from different sensors caused by vehicle occlusion, a vehicle detection method fusing vehicle-mounted millimeter-wave radar and a camera (visual detection) is proposed. First, an improved YOLOv8n_M model detects the visual information: a SimAM attention mechanism is added to the Neck and Head of the original YOLOv8n to enhance target features; Wise-IoU v1 with a dynamic non-monotonic focusing mechanism is used as the loss function to improve bounding-box regression; and a small-target detection layer P2 is added to improve detection of small vehicle targets. Meanwhile, the radar data are parsed and preprocessed, valid radar targets are screened out, and Kalman-filter-based target tracking is performed on them. The camera and radar are then aligned in time and space. Finally, the overlap ratio of detection boxes and the normalized Euclidean distance between center points are computed to construct an association matrix, and the Hungarian algorithm completes data matching to output fused targets. Experiments show that on BDD100K and a self-built dataset, YOLOv8n_M improves mAP50 by 4.7% and 3.6% and mAP50-95 by 2.9% and 5.4% over the original YOLOv8n; in complex traffic scenes, the proposed association algorithm improves association precision by 4.66% and 2.91% over the traditional nearest-neighbor and global nearest-neighbor algorithms; and the fused detection rate reaches 88.09%, higher than either single sensor, enabling real-time, accurate vehicle detection.
Keywords: vehicle detection, machine vision, YOLOv8, millimeter-wave radar, data association algorithm, sensor fusion
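The association step, a score matrix built from box overlap and normalized center distance that is then solved for one-to-one matches, can be sketched as follows. Note the paper uses the Hungarian algorithm, whereas this sketch substitutes a simpler greedy matcher, and the score weights are hypothetical:

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(cam_boxes, radar_boxes, w_iou=0.7, min_score=0.1):
    """Greedy association: score = w_iou * IoU + (1 - w_iou) * (1 - d_norm),
    where d_norm is the center distance normalized by the camera box
    diagonal. Highest-scoring unused pairs are matched first."""
    scores = []
    for i, a in enumerate(cam_boxes):
        for j, b in enumerate(radar_boxes):
            ca = ((a[0] + a[2]) / 2, (a[1] + a[3]) / 2)
            cb = ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
            diag = math.hypot(a[2] - a[0], a[3] - a[1])
            d = math.hypot(ca[0] - cb[0], ca[1] - cb[1]) / max(diag, 1e-9)
            s = w_iou * iou(a, b) + (1 - w_iou) * max(0.0, 1 - d)
            scores.append((s, i, j))
    matches, used_i, used_j = [], set(), set()
    for s, i, j in sorted(scores, reverse=True):
        if s >= min_score and i not in used_i and j not in used_j:
            matches.append((i, j))
            used_i.add(i)
            used_j.add(j)
    return matches
```

Greedy matching is optimal only when scores are well separated; the Hungarian algorithm guarantees a globally optimal assignment, which is why the paper prefers it.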
Defect Detection for Aero-Engine Blades Based on a Dynamic Vision Sensor
14
Authors: 张行顺, 陈海永. 《图学学报》 (PKU Core), 2026, Issue 1, pp. 120-130 (11 pages)
Aero-engine blades are core engine components, and tiny surface defects on them can cause serious safety accidents, while traditional visual inspection is limited by motion blur, low dynamic range, and background redundancy. To address these challenges, a defect detection method for aero-engine blades based on a dynamic vision sensor (DVS) is proposed. The DVS outputs data as an asynchronous event stream, for which reason it is also called an event camera, and offers a wide dynamic range, high frame rate, and strong capture of tiny targets. A DVS-based defect-detection platform was first built, and its imaging characteristics and advantages were explored and summarized. On this basis, the first DVS-based aero-engine blade defect dataset (EDD-AB) was constructed, covering three defect classes (scratches, point marks, and edge damage) with nearly 6,000 images and nearly 12,000 finely annotated target labels; the dataset is open source (https://github.com/NiBieZhouMei5520/EDD-AB.git). A multi-scale defect detection algorithm based on asynchronous event-stream frame aggregation (AEAF-ABDD) is further proposed: fixed-time-window frame aggregation visualizes the event stream; a multi-resolution adaptive feature pyramid network (MRAFPN) enhances multi-scale defect feature extraction; a lightweight SimAM attention mechanism strengthens focus on key regions; and a star convolution module (StarNet) improves the efficiency of high-dimensional nonlinear feature mapping, achieving precise multi-scale defect detection on workpieces with complex curved surfaces. Experiments show that AEAF-ABDD reaches a mean average precision (mAP) of 97.7% on EDD-AB at 105 frames per second, significantly outperforming mainstream algorithms. This provides an efficient solution for automated quality inspection of highly reflective curved workpieces and advances the application of DVS in industrial inspection.
Keywords: dynamic vision sensor, aero-engine blade, defect detection, asynchronous event stream, multi-scale feature fusion
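Fixed-time-window frame aggregation, the first stage of AEAF-ABDD, buckets asynchronous events (t, x, y, polarity) into dense frames that a conventional detector can consume. A minimal sketch; summing signed polarities per pixel is one common convention, not necessarily the paper's:

```python
def aggregate_frames(events, window_us, width, height):
    """Accumulate an asynchronous event stream of (t_us, x, y, polarity)
    tuples into fixed-time-window frames; each pixel sums the signed
    polarities of the events that fall into its window."""
    if not events:
        return []
    events = sorted(events)                    # order by timestamp
    t0 = events[0][0]
    n_frames = (events[-1][0] - t0) // window_us + 1
    frames = [[[0] * width for _ in range(height)] for _ in range(n_frames)]
    for t, x, y, p in events:
        frames[(t - t0) // window_us][y][x] += p
    return frames
```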
Calibration of laser beam direction based on monocular vision (cited 3 times)
15
Authors: WANG Zhong, YANG Tong-yu, WANG Lei, FU Lu-hua, LIU Chang-jie. 《Journal of Measurement Science and Instrumentation》 (CAS, CSCD), 2017, Issue 4, pp. 354-363 (10 pages)
In laser displacement sensor measurement systems, the laser beam direction is an important parameter; in particular, the azimuth and pitch angles are the most important parameters of a laser beam. This paper proposes a laser beam direction measurement method based on monocular vision. First, a charge-coupled device (CCD) camera is placed above the base plane, adjusted, and fixed so that its optical axis is nearly perpendicular to the base plane; the monocular vision localization model is established using a circular-aperture calibration board. The laser beam generating device is then placed and held at a fixed position on the base plane, and a special target block is placed on the base plane so that the laser beam projects onto it and forms a laser spot. The CCD camera above the base plane acquires clear images of the laser spot and the target block, so the two-dimensional (2D) image coordinates of the spot centroid can be extracted by a correlation algorithm. The target is moved at equal distances along the laser beam direction, and the spot and target images at each position are collected by the CCD camera. Using the relevant transformation formulas combined with the intrinsic parameters of the target block, the 2D centroid coordinates are converted to three-dimensional (3D) coordinates in the base plane. Because the target moves, the 3D centroid coordinates of the laser spot at different positions are obtained, and these 3D coordinates are fitted to a spatial straight line that represents the laser beam under measurement. In the experiment, the target parameters are measured by high-precision instruments, and the camera is calibrated with a high-precision calibration board to establish the corresponding localization model. The measurement accuracy is mainly determined by the monocular vision positioning accuracy and the centroid extraction accuracy. The experimental results show that the maximum error of the angle between laser beams reaches 0.04° and the maximum error of the beam pitch angle reaches 0.02°.
Keywords: monocular vision, laser beam direction, coordinate transformation, laser displacement sensor
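Once the spot centroids are fitted to a spatial straight line, the azimuth and pitch angles follow from the line's direction vector. A minimal sketch under the assumption that the base plane is the XY plane (the angle conventions here are one common choice, not necessarily the paper's):

```python
import math

def azimuth_pitch_deg(direction):
    """Azimuth: angle of the direction's XY projection measured from
    the X axis. Pitch: elevation of the beam above the XY (base) plane."""
    dx, dy, dz = direction
    azimuth = math.degrees(math.atan2(dy, dx))
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return azimuth, pitch
```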
Vision Sensing-Based Online Correction System for Robotic Weld Grinding (cited 1 time)
16
Authors: Jimin Ge, Zhaohui Deng, Shuixian Wang, Zhongyang Li, Wei Liu, Jiaxu Nie. 《Chinese Journal of Mechanical Engineering》 (SCIE, EI, CAS, CSCD), 2023, Issue 5, pp. 97-108 (12 pages)
The service cycle and dynamic performance of structural parts are affected by weld grinding accuracy and surface consistency. Because of reasons such as assembly errors and thermal deformation, the actual track of the robot does not coincide with the theoretical track when the weld is ground offline, resulting in poor workpiece surface quality. Considering these problems, in this study, a vision sensing-based online correction system for robotic weld grinding was developed. The system mainly includes three subsystems: weld feature extraction, grinding, and real-time robot control. The grinding equipment was first set as a substation for the robot using the WorkVisual software. The input/output (I/O) ports for communication between the robot and the grinding equipment were configured via the I/O mapping function to enable the robot to control the grinding equipment (start, stop, and speed control). Subsequently, the Ethernet KRL software package was used to write the data-interaction structure to realize real-time communication between the robot and the laser vision system. To correct the measurement error caused by the bending deformation of the workpiece, we established a surface profile model of the base material in the weld area using a polynomial fitting algorithm to compensate the measurement data. The corrected extracted weld width and height errors were reduced by 2.01% and 9.3%, respectively. Online weld seam extraction and correction experiments verified the effectiveness of the system's correction function, and the system could keep the grinding trajectory error within 0.2 mm. The reliability of the system was verified through actual weld grinding experiments: the roughness Ra could reach 0.504 µm and the average residual height was within 0.21 mm. In this study, we developed a vision sensing-based online correction system for robotic weld grinding with a good correction effect and high robustness.
Keywords: online correction system, robot, grinding, weld seam, laser vision sensor
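The polynomial-fitting compensation described above (fit the base-material surface profile, then subtract it from the measured profile so that only the weld deviation remains) can be sketched with an ordinary least-squares polynomial fit; the degree and sample data are illustrative:

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved
    with Gaussian elimination. Returns [a0, a1, ...] for a0 + a1*x + ..."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                       # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):             # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def compensate(xs, measured, degree=2):
    """Subtract the fitted base-surface profile from the measurements,
    leaving only the deviation from the (bent) base material."""
    coef = polyfit_ls(xs, measured, degree)
    baseline = [sum(c * x ** i for i, c in enumerate(coef)) for x in xs]
    return [m - bl for m, bl in zip(measured, baseline)]
```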
Calibration of line structured light vision system based on camera's projective center (cited 7 times)
17
Authors: ZHU Ji-gui, LI Yan-jun, YE Sheng-hua. 《光学精密工程》 (EI, CAS, CSCD, PKU Core), 2005, Issue 5, pp. 584-591 (8 pages)
Based on the characteristics of the line structured light sensor, a fast calibration method was established. With a coplanar reference target, the spatial pose between the camera and the optical plane can be calibrated using the camera's projective center and the light-stripe information on the camera's image plane. The calibration can be implemented without restricting the movement of the coplanar reference target and without auxiliary adjustment equipment. This method has been used to decrease the cost of calibration equipment, simplify the calibration procedure, and improve calibration efficiency. In experiments, the sensor attains a relative accuracy of about 0.5%, which indicates the rationality and effectiveness of this method.
Keywords: projective center, line structured light sensor, calibration, vision system
Monitoring a Wide Manufacture Field Automatically by Multiple Sensors
18
Authors: LU Jian, HAMAJIMA Kyoko, JIANG Wei. 《自动化学报》 (EI, CSCD, PKU Core), 2006, Issue 6, pp. 956-967 (12 pages)
This research is dedicated to developing a safety measure for human-machine cooperative systems, in which the machine region and the human region cannot be separated because of overlap and movement of both humans and machines. Our proposal is to automatically monitor the moving objects by an image sensing/recognition method, such that the machine system can obtain enough information about the environment situation and the production progress at any time, and the machines can accordingly take corresponding actions automatically to avoid hazards. For this purpose, two types of monitoring systems are proposed: the first is based on an omnidirectional vision sensor, and the second on a stereo vision sensor. Each type may be used alone or together with the other, depending on the safety system's requirements and the specific situation of the manufacturing field to be monitored. This paper describes both types and, as a special application of these image sensors to safety control, proposes the construction of a hierarchical safety system.
Keywords: sensor network, robot vision, safety control, stereo vision, omnidirectional vision
Modeling of a Linear Scanning 3D Vision Coordinate Measurement System
19
Authors: 孙玉芹, 黄庆成, 车仁生. 《Journal of Harbin Institute of Technology (New Series)》 (EI, CAS), 1998, Issue 3, pp. 32-35 (4 pages)
This paper theoretically analyzes and researches the coordinate frames of a 3D vision scanning system, establishes the mathematical model of the system's scanning process, and derives the relationship between the general non-orthonormal sensor coordinate system and the machine coordinate system, together with the coordinate transformation matrix for the extrinsic calibration of the system.
Keywords: structured light, laser stripe sensor, 3D vision, CMM, mathematical model, extrinsic calibration
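The extrinsic relationship between the sensor frame and the machine frame is a homogeneous coordinate transformation. A minimal sketch, restricted for brevity to a rotation about Z plus a translation (a real extrinsic calibration recovers a full 3-DOF rotation):

```python
import math

def make_transform(rz_deg, tx, ty, tz):
    """4x4 homogeneous transform: rotation about Z, then translation.
    A simplified stand-in for a full extrinsic calibration matrix."""
    c = math.cos(math.radians(rz_deg))
    s = math.sin(math.radians(rz_deg))
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply_transform(T, p):
    """Map a sensor-frame point (x, y, z) into the machine frame."""
    x, y, z = p
    v = [x, y, z, 1.0]
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))
```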
Simultaneous observation of keyhole and weld pool in plasma arc welding with a single cost-effective sensor
20
Authors: 张国凯, 武传松, 刘新锋, 张晨. 《China Welding》 (EI, CAS), 2014, Issue 4, pp. 8-12 (5 pages)
The dynamic behaviors of the keyhole and weld pool are coupled together in plasma arc welding, and the geometric variations of both the keyhole and the weld pool determine the weld quality. It is of great significance to simultaneously sense and monitor the keyhole and weld pool behaviors with a single low-cost vision sensor during the plasma arc welding process. In this study, the keyhole and weld pool were observed and measured under different levels of welding current using near-infrared sensing technology and a charge-coupled device (CCD) sensing system. The shapes and relative positions of the weld pool and keyhole under different conditions were compared and analyzed. The observation results lay a solid foundation for controlling weld quality and understanding the underlying process mechanisms.
Keywords: keyhole, weld pool, plasma arc welding, single vision sensor, infrared sensing