Journal Articles
812 articles found
Ultrathin Gallium Nitride Quantum-Disk-in-Nanowire-Enabled Reconfigurable Bioinspired Sensor for High-Accuracy Human Action Recognition
1
Authors: Zhixiang Gao, Xin Ju, Huabin Yu, Wei Chen, Xin Liu, Yuanmin Luo, Yang Kang, Dongyang Luo, JiKai Yao, Wengang Gu, Muhammad Hunain Memon, Yong Yan, Haiding Sun. 《Nano-Micro Letters》, 2026, Issue 2, pp. 439-453 (15 pages)
Human action recognition (HAR) is crucial for the development of efficient computer vision, where bioinspired neuromorphic perception visual systems have emerged as a vital solution to address transmission bottlenecks across sensor-processor interfaces. However, the absence of interactions among versatile biomimicking functionalities within a single device, which was developed for specific vision tasks, restricts the computational capacity, practicality, and scalability of in-sensor vision computing. Here, we propose a bioinspired vision sensor composed of a GaN/AlN-based ultrathin quantum-disks-in-nanowires (QD-NWs) array to mimic not only Parvo cells for high-contrast vision and Magno cells for dynamic vision in the human retina but also the synergistic activity between the two cell types for in-sensor vision computing. By simply tuning the applied bias voltage on each QD-NW-array-based pixel, we achieve two biosimilar photoresponse characteristics with slow and fast reactions to light stimuli that enhance the in-sensor image quality and HAR efficiency, respectively. Strikingly, the interplay and synergistic interaction of the two photoresponse modes within a single device markedly increased the HAR accuracy from 51.4% to 81.4%, owing to the integrated artificial vision system. The demonstrated intelligent vision sensor offers a promising device platform for the development of highly efficient HAR systems and future smart optoelectronics.
Keywords: GaN nanowire; quantum-confined Stark effect; voltage-tunable photoresponse; bioinspired sensor; artificial vision system
Neuromorphic vision sensors: Principle, progress and perspectives (Cited 8 times)
2
Authors: Fuyou Liao, Feichi Zhou, Yang Chai. 《Journal of Semiconductors》 (EI, CAS, CSCD), 2021, Issue 1, pp. 112-121 (10 pages)
Conventional frame-based image sensors suffer greatly from high energy consumption and latency. Mimicking the neurobiological structures and functionalities of the retina provides a promising way to build a neuromorphic vision sensor with highly efficient image processing. In this review article, we start with a brief introduction to the working mechanism and the challenges of conventional frame-based image sensors, and introduce the structure and functions of the biological retina. In the main section, we overview recent developments in neuromorphic vision sensors, including the silicon retina based on conventional Si CMOS digital technologies and neuromorphic vision sensors implemented with emerging devices. Finally, we provide a brief outline of the prospects and outlook for the development of this field.
Keywords: image sensors; silicon retina; neuromorphic vision sensors; photonic synapses
Progress of Materials and Devices for Neuromorphic Vision Sensors (Cited 10 times)
3
Authors: Sung Woon Cho, Chanho Jo, Yong-Hoon Kim, Sung Kyu Park. 《Nano-Micro Letters》 (SCIE, EI, CAS, CSCD), 2022, Issue 12, pp. 239-271 (33 pages)
The latest developments in bio-inspired neuromorphic vision sensors can be summarized in three keywords: smaller, faster, and smarter. (1) Smaller: devices are becoming more compact by integrating previously separated components such as sensors, memory, and processing units. As a prime example, the transition from traditional sensory vision computing to in-sensor vision computing has shown clear benefits, such as simpler circuitry, lower power consumption, and less data redundancy. (2) Faster: owing to the nature of physics, smaller and more integrated devices can detect, process, and react to input more quickly. In addition, the methods for sensing and processing optical information using various materials (such as oxide semiconductors) are evolving. (3) Smarter: owing to these two main research directions, we can expect advanced applications such as adaptive vision sensors, collision sensors, and nociceptive sensors. This review mainly focuses on the recent progress, working mechanisms, image pre-processing techniques, and advanced features of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
Keywords: in-sensor computing; near-sensor computing; neuromorphic vision sensor; optoelectronic synaptic circuit; optoelectronic synapse
Collaborative positioning for swarms: A brief survey of vision, LiDAR and wireless sensor based methods (Cited 2 times)
4
Authors: Zeyu Li, Changhui Jiang, Xiaobo Gu, Ying Xu, Feng Zhou, Jianhui Cui. 《Defence Technology(防务技术)》 (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 475-493 (19 pages)
As positioning sensors, edge computation power, and communication technologies continue to develop, a moving agent can now sense its surroundings and communicate with other agents. By receiving spatial information from both its environment and other agents, an agent can use various methods and sensor types to localize itself. With its high flexibility and robustness, collaborative positioning has become a widely used method in both military and civilian applications. This paper introduces the fundamental concepts and applications of collaborative positioning, and reviews recent progress in the field based on cameras, LiDAR (Light Detection and Ranging), wireless sensors, and their integration. The paper compares the current methods with respect to their sensor type, summarizes their main paradigms, and analyzes their evaluation experiments. Finally, the paper discusses the main challenges and open issues that require further research.
Keywords: collaborative positioning; vision; LiDAR; wireless sensors; sensor fusion
Recent advances in imaging devices:image sensors and neuromorphic vision sensors
5
Authors: Wen-Qiang Wu, Chun-Feng Wang, Su-Ting Han, Cao-Feng Pan. 《Rare Metals》 (SCIE, EI, CAS, CSCD), 2024, Issue 11, pp. 5487-5515 (29 pages)
Remarkable developments in image recognition technology trigger demands for more advanced imaging devices. In recent years, traditional image sensors, as the go-to imaging devices, have made substantial progress in their optoelectronic characteristics and functionality. Moreover, a new breed of imaging device with information-processing capability, known as the neuromorphic vision sensor, has been developed by mimicking biological vision. In this review, we delve into the recent progress of imaging devices, specifically image sensors and neuromorphic vision sensors. The review starts by introducing their core components, namely photodetectors and photonic synapses, with a strong emphasis on device structures, working mechanisms and key performance parameters. It then summarizes the noteworthy achievements in both device classes, including advancements in large-scale and high-resolution imaging, filter-free multispectral recognition, polarization sensitivity, flexibility, hemispherical designs, and self-power supply of image sensors, as well as in neuromorphic imaging and data processing, environmental adaptation, and ultra-low power consumption of neuromorphic vision sensors. Finally, the challenges and prospects that lie ahead in the ongoing development of imaging devices are addressed.
Keywords: imaging devices; photodetectors; photonic synapses; image sensors; neuromorphic vision sensors
An Embedded Computer Vision Approach to Environment Modeling and Local Path Planning in Autonomous Mobile Robots
6
Authors: Rıdvan Yayla, Hakan Üçgün, Onur Ali Korkmaz. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 12, pp. 4055-4087 (33 pages)
Recent advancements in autonomous vehicle technologies are transforming intelligent transportation systems. Artificial intelligence enables real-time sensing, decision-making, and control on embedded platforms with improved efficiency. This study presents the design and implementation of an autonomous radio-controlled (RC) vehicle prototype capable of lane line detection, obstacle avoidance, and navigation through dynamic path planning. The system integrates image processing and ultrasonic sensing, utilizing a Raspberry Pi for vision-based tasks and an Arduino Nano for real-time control. Lane line detection is achieved through conventional image processing techniques, providing the basis for local path generation, while traffic sign classification employs a You Only Look Once (YOLO) model optimized with TensorFlow Lite to support navigation decisions. Images captured by the onboard camera are processed on the Raspberry Pi to extract lane geometry and calculate steering angles, enabling the vehicle to follow the planned path. In addition, ultrasonic sensors placed in three directions at the front of the vehicle detect obstacles and allow real-time path adjustment for safe navigation. Experimental results demonstrate stable performance under controlled conditions, highlighting the system's potential for scalable autonomous driving applications. This work confirms that deep learning methods can be efficiently deployed on low-power embedded systems, offering a practical framework for navigation, path planning, and intelligent transportation research.
Keywords: embedded vision system; mobile robot navigation; lane detection; sensor fusion; deep learning on embedded systems; real-time path planning
Research on LIF Neurons and Spike-Timing-Dependent Plasticity in Spiking Neural Networks
7
Authors: 周运, 应骏, 王子健. 《微电子学与计算机》 (Microelectronics & Computer), 2026, Issue 1, pp. 32-43 (12 pages)
To address the poor learning stability and homogeneous weight distributions that spiking neural networks exhibit in complex feature-learning and classification tasks, an adaptive LIF neuron model is proposed and combined with a newly designed adjustable multiplicative STDP rule to build an efficient spiking neural network architecture. Exponential mapping of presynaptic traces and a multiplicative modulation mechanism improve the LIF neuron's response speed to input spikes and the network's adaptability to complex signals. The proposed STDP rule combines normalized presynaptic traces with a sigmoid function, balancing the adaptability and stability of synaptic weights and markedly improving learning efficiency and model stability. Experimental results show that, on real-world road-map texture and rotating-disk sequence datasets captured with a dynamic vision sensor, the method accurately recognizes features of different orientations and polarities. On the MNIST handwritten-digit classification dataset, the improved model reaches 98.7% accuracy, verifying the method's effectiveness and robustness.
Keywords: spiking neural network; LIF neuron; spike-timing-dependent plasticity; dynamic vision sensor
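The leaky integrate-and-fire dynamics at the heart of the model above can be sketched as follows (a generic discrete-time LIF neuron, not the paper's adaptive variant; the time constant, threshold, and input current are illustrative assumptions):

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron.

    Membrane update: dv/dt = (v_rest - v + I) / tau; a spike is
    emitted and v is reset whenever v crosses v_thresh.
    """
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += dt / tau * (v_rest - v + i_t)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold current produces a regular spike train.
spike_train = lif_simulate(np.full(200, 1.5))
rate = spike_train.mean()
```

A constant supra-threshold input yields a regular spike train whose rate grows with the drive; adaptive variants such as the one proposed in the article modulate these dynamics with the presynaptic trace.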
Calibration of laser beam direction based on monocular vision (Cited 3 times)
8
Authors: WANG Zhong, YANG Tong-yu, WANG Lei, FU Lu-hua, LIU Chang-jie. 《Journal of Measurement Science and Instrumentation》 (CAS, CSCD), 2017, Issue 4, pp. 354-363 (10 pages)
In a laser displacement sensor measurement system, the laser beam direction is an important parameter; in particular, the azimuth and pitch angles are the most important parameters of a laser beam. In this paper, a laser beam direction measurement method based on monocular vision is proposed. First, a charge-coupled device (CCD) camera is placed above the base plane, and its position is adjusted and fixed so that the optical axis is nearly perpendicular to the base plane. The monocular vision localization model is established using a circular-aperture calibration board. The laser beam generating device is then placed and held at a fixed position on the base plane. At the same time, a special target block is placed on the base plane so that the laser beam projects onto the target and forms a laser spot. The CCD camera above the base plane can clearly acquire the laser spot and the image of the target block, so the two-dimensional (2D) image coordinates of the spot centroid can be extracted by the relevant algorithm. The target is moved at equal distances along the laser beam direction, and the spot and target images at each position are collected by the CCD camera. Using the relevant transformation formula combined with the intrinsic parameters of the target block, the 2D coordinates of the spot's center of gravity are converted to three-dimensional (3D) coordinates in the base plane. Because the target is moved, the 3D coordinates of the spot's center of gravity at different positions are obtained, and these 3D coordinates are fitted to a spatial straight line that represents the laser beam to be measured. In the experiment, the target parameters are measured by high-precision instruments, and the camera is calibrated with a high-precision calibration board to establish the corresponding positioning model. The measurement accuracy is mainly determined by the monocular vision positioning accuracy and the center-of-gravity extraction accuracy. The experimental results show that the maximum error of the angle between laser beams reaches 0.04° and the maximum error of the beam pitch angle reaches 0.02°.
Keywords: monocular vision; laser beam direction; coordinate transformation; laser displacement sensor
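The final step described in this abstract, synthesizing the 3D spot centers into a spatial line and reading off the beam's azimuth and pitch, can be sketched as follows (the least-squares line fit via SVD and the angle conventions are assumptions; the paper's own formulas may differ):

```python
import numpy as np

def beam_direction(points):
    """Fit a 3D line to laser-spot centers and return (azimuth, pitch)
    in degrees. Azimuth is measured in the x-y base plane from +x;
    pitch is the elevation above that plane (a convention assumed here).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The principal right-singular vector of the centered points is the
    # least-squares direction of the fitted line.
    _, _, vt = np.linalg.svd(pts - centroid)
    d = vt[0]
    if d[2] < 0:          # fix an orientation so the angles are unique
        d = -d
    azimuth = np.degrees(np.arctan2(d[1], d[0]))
    pitch = np.degrees(np.arcsin(d[2] / np.linalg.norm(d)))
    return azimuth, pitch

# Synthetic spot centers along a line rising at 30° in the x-z plane.
t = np.linspace(0, 100, 6)
pts = np.stack([t * np.cos(np.radians(30)), np.zeros_like(t),
                t * np.sin(np.radians(30))], axis=1)
az, pitch = beam_direction(pts)
```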
Vision Sensing-Based Online Correction System for Robotic Weld Grinding (Cited 1 time)
9
Authors: Jimin Ge, Zhaohui Deng, Shuixian Wang, Zhongyang Li, Wei Liu, Jiaxu Nie. 《Chinese Journal of Mechanical Engineering》 (SCIE, EI, CAS, CSCD), 2023, Issue 5, pp. 97-108 (12 pages)
The service cycle and dynamic performance of structural parts are affected by the weld grinding accuracy and surface consistency. Because of assembly errors and thermal deformation, the actual track of the robot does not coincide with the theoretical track when the weld is ground offline, resulting in poor workpiece surface quality. Considering these problems, a vision sensing-based online correction system for robotic weld grinding was developed in this study. The system mainly comprises three subsystems: weld feature extraction, grinding, and real-time robot control. The grinding equipment was first set as a substation for the robot using the WorkVisual software. The input/output (I/O) ports for communication between the robot and the grinding equipment were configured via the I/O mapping function to enable the robot to control the grinding equipment (start, stop, and speed control). Subsequently, the Ethernet KRL software package was used to write the data interaction structure to realize real-time communication between the robot and the laser vision system. To correct the measurement error caused by bending deformation of the workpiece, a surface profile model of the base material in the weld area was established using a polynomial fitting algorithm to compensate the measurement data. The corrected extracted weld width and height errors were reduced by 2.01% and 9.3%, respectively. Online weld seam extraction and correction experiments verified the effectiveness of the system's correction function, and the system could keep the grinding trajectory error within 0.2 mm. The reliability of the system was verified through actual weld grinding experiments: the roughness Ra could reach 0.504 µm and the average residual height was within 0.21 mm. In summary, a vision sensing-based online correction system for robotic weld grinding with a good correction effect and high robustness was developed.
Keywords: online correction system; robot; grinding; weld seam; laser vision sensor
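The polynomial-fitting compensation idea, modeling the bent base-material surface and measuring the weld relative to it, can be illustrated as follows (a minimal sketch on synthetic profile data; the data layout, polynomial degree, and dimensions are assumptions, not the authors' implementation):

```python
import numpy as np

def compensate_profile(x, z, weld_mask, degree=3):
    """Remove base-material bending from a laser-scanned profile.

    Fits a polynomial to the non-weld region only, then subtracts the
    fitted baseline everywhere, so weld height is measured relative to
    the (possibly bent) parent surface.
    """
    coeffs = np.polyfit(x[~weld_mask], z[~weld_mask], degree)
    baseline = np.polyval(coeffs, x)
    return z - baseline

# Bent plate (quadratic bow) with a 1 mm weld bead in the middle.
x = np.linspace(-20, 20, 401)
z = 0.002 * x**2                       # bending deformation
weld = np.abs(x) < 3
z = z + np.where(weld, 1.0, 0.0)       # weld reinforcement
height = compensate_profile(x, z, weld)
peak = height[weld].mean()             # ≈ 1.0 mm after compensation
```

Without the compensation step, the quadratic bow would be misread as part of the weld height, which is exactly the measurement error the article's correction targets.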
Calibration of line structured light vision system based on camera's projective center (Cited 7 times)
10
Authors: ZHU Ji-gui, LI Yan-jun, YE Sheng-hua. 《光学精密工程》 (Optics and Precision Engineering) (EI, CAS, CSCD, PKU Core), 2005, Issue 5, pp. 584-591 (8 pages)
Based on the characteristics of the line structured light sensor, a rapid calibration method was established. With a coplanar reference target, the spatial pose between the camera and the optical plane can be calibrated using the camera's projective center and the light stripe information on the camera's image plane. The calibration can be carried out without restricting the movement of the coplanar reference target and without auxiliary adjustment equipment. The method has been applied in practice; it lowers the cost of calibration equipment, simplifies the calibration procedure, and improves calibration efficiency. Experiments show that the sensor can attain a relative accuracy of about 0.5%, which indicates the rationality and effectiveness of this method.
Keywords: projective center; line structured light; optical sensor; calibration; vision system
Monitoring a Wide Manufacture Field Automatically by Multiple Sensors
11
Authors: LU Jian, HAMAJIMA Kyoko, JIANG Wei. 《自动化学报》 (Acta Automatica Sinica) (EI, CSCD, PKU Core), 2006, Issue 6, pp. 956-967 (12 pages)
This research is dedicated to developing a safety measure for human-machine cooperative systems, in which the machine region and the human region cannot be separated because of overlap and movement of both humans and machines. Our proposal is to automatically monitor the moving objects by image sensing and recognition, so that the machine system can obtain sufficient information about the environment and the production progress at any time, and the machines can accordingly take corresponding actions automatically to avoid hazards. For this purpose, two types of monitoring systems are proposed. The first is based on an omnidirectional vision sensor, and the second on a stereo vision sensor. Each type may be used alone or together with the other, depending on the safety system's requirements and the specific situation of the manufacturing field to be monitored. This paper describes the two types and, for the application of these image sensors to safety control, proposes the construction of a hierarchical safety system.
Keywords: sensor network; robot vision; safety control; stereo vision; omnidirectional vision
Modeling of a Linear Scanning 3D Vision Coordinate Measurement System
12
Authors: 孙玉芹, 黄庆成, 车仁生. 《Journal of Harbin Institute of Technology (New Series)》 (EI, CAS), 1998, Issue 3, pp. 32-35 (4 pages)
This paper theoretically analyzes the coordinate frames of a 3D vision scanning system, establishes a mathematical model of the system's scanning process, and derives the relationship between the general non-orthonormal sensor coordinate system and the machine coordinate system, together with the coordinate transformation matrix of the system's extrinsic calibration.
Keywords: structured light; laser stripe sensor; 3D vision; CMM; mathematical model; extrinsic calibration
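The extrinsic calibration described here amounts to a rigid transform between the sensor frame and the machine frame; a minimal homogeneous-coordinates sketch (the rotation and offset values below are purely illustrative, not taken from the paper):

```python
import numpy as np

def make_extrinsic(rotation, translation):
    """Build a 4x4 homogeneous transform from sensor to machine frame."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def sensor_to_machine(T, p_sensor):
    """Apply the extrinsic transform to a 3D point."""
    p = np.append(np.asarray(p_sensor, dtype=float), 1.0)
    return (T @ p)[:3]

# Illustrative extrinsics: a 90° rotation about z plus a frame offset.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T = make_extrinsic(Rz, [100.0, 50.0, 0.0])
p_machine = sensor_to_machine(T, [10.0, 0.0, 5.0])
```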
Simultaneous observation of keyhole and weld pool in plasma arc welding with a single cost-effective sensor
13
Authors: 张国凯, 武传松, 刘新锋, 张晨. 《China Welding》 (EI, CAS), 2014, Issue 4, pp. 8-12 (5 pages)
The dynamic behaviors of the keyhole and the weld pool are coupled in plasma arc welding, and the geometric variations of both determine the weld quality. It is therefore of great significance to simultaneously sense and monitor keyhole and weld pool behavior with a single low-cost vision sensor during the plasma arc welding process. In this study, the keyhole and weld pool were observed and measured under different welding currents using near-infrared sensing technology and a charge-coupled device (CCD) sensing system. The shapes and relative positions of the weld pool and keyhole under different conditions were compared and analyzed. The observations lay a solid foundation for controlling weld quality and understanding the underlying process mechanisms.
Keywords: keyhole; weld pool; plasma arc welding; single vision sensor; infrared sensing
Second-order divided difference filter for vision-based relative navigation
14
Authors: 王小刚, 崔乃刚, 郭继峰. 《Journal of Harbin Institute of Technology (New Series)》 (EI, CAS), 2011, Issue 3, pp. 16-20 (5 pages)
A second-order divided difference filter (SDDF) is derived that integrates line-of-sight measurements from a vision sensor with acceleration and angular-rate measurements of the follower to estimate the precise relative position, velocity and attitude of two unmanned aerial vehicles (UAVs). The second-order divided difference filter, which uses multidimensional interpolation formulations to approximate the nonlinear transformations, achieves more accurate estimation and faster convergence from inaccurate initial conditions than the standard extended Kalman filter. The filter formulation is based on relative motion equations. The global attitude parameterization is given by the quaternion, while a generalized three-dimensional attitude representation is used to define the local attitude error. Simulation results compare the performance of the second-order divided difference filter with a standard extended Kalman filter approach.
Keywords: relative navigation; second-order divided difference filter; vision sensor; unmanned aerial vehicle; formation flight
Application of computer vision technology on raising sow and procreating of processing
15
Authors: Yun Yang. 《Agricultural Sciences》, 2013, Issue 12, pp. 689-693 (5 pages)
This paper expounds the application of machine vision theory, system composition and technology in monitoring the sow breeding process, assisting judgment, and monitoring the growth of the young. It also points out the problems and deficiencies in the application of machine vision technology, and discusses the development trends and prospects of machine vision in agricultural engineering. In this application, dynamic original images of sows in estrus are collected with a CCD camera; binarized black-and-white images at a chosen threshold are then produced through an image acquisition card, followed by median filtering and grayscale processing. Practitioners can thereby extract image information on sow estrus, pregnancy and farrowing. Applying the computer vision system on a sow farm effectively enhances the practitioners' objectivity and precision in assessing the whole farrowing process.
Keywords: computer vision system; infrared sensor; image processing; raising sows
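The image pipeline named in this abstract, grayscale conversion, threshold binarization, and median filtering, can be sketched as follows (a numpy-only illustration; the threshold value and 3x3 window size are assumptions, not the paper's settings):

```python
import numpy as np

def to_gray(rgb):
    """Luma-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, thresh=128):
    """Threshold to a black-and-white image."""
    return (gray > thresh).astype(np.uint8) * 255

def median3(img):
    """3x3 median filter (edges handled by edge padding)."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0).astype(img.dtype)

# An isolated salt-noise pixel on a dark field is removed by the filter.
img = np.zeros((8, 8), dtype=np.uint8)
img[4, 4] = 255
clean = median3(binarize(img))
```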
Monocular Vision Based Boundary Avoidance for Non-Invasive Stray Control System for Cattle: A Conceptual Approach
16
Authors: Adeniran Ishola Oluwaranti, Seun Ayeni. 《Journal of Sensor Technology》, 2015, Issue 3, pp. 63-71 (9 pages)
Building fences to manage cattle grazing can be very expensive and cost-inefficient, and fences do not provide dynamic control over the area in which the cattle graze. Existing virtual fencing techniques for controlling herds of cattle, based on polygon-coordinate definitions of boundaries, are limited in land-mass coverage and dynamism. This work seeks to develop a more robust monocular vision-based boundary avoidance for a non-invasive stray control system for cattle, with a view to increasing land-mass coverage and dynamism in virtual fencing. Monocular depth estimation is modeled using the global Fourier Transform (FT) and local Wavelet Transform (WT) of the image structure of scenes (boundaries). The magnitude of the global Fourier Transform gives the dominant orientations and textural patterns of the image, while the local Wavelet Transform gives the dominant spectral features of the image and their spatial distribution. Each scene picture is described by a feature vector v containing the set of global (FT) and local (WT) statistics of the image. Scene or boundary distances are obtained by estimating the depth D from the image features v. Sound cues with intensity proportional to the magnitude of the depth D are applied to the animal's ears as stimuli. This produces the desired control, as animals tend to move away from uncomfortable sounds.
Keywords: monocular vision; control systems; Global Positioning System; wireless sensor networks; depth estimation
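The global FT part of the feature vector v can be sketched as a gist-like descriptor (an interpretation of the description above; the log-magnitude choice and coarse pooling grid are illustrative assumptions, not the authors' exact statistics):

```python
import numpy as np

def global_fft_features(img, grid=4):
    """Summarize an image's dominant spatial-frequency structure.

    Takes the centered log-magnitude of the 2D FFT and average-pools it
    on a coarse grid, giving a small feature vector in the spirit of
    global scene (gist-like) descriptors.
    """
    mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
    h, w = mag.shape
    gh, gw = h // grid, w // grid
    feats = [mag[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].mean()
             for i in range(grid) for j in range(grid)]
    return np.array(feats)

# Horizontal stripes concentrate energy along one frequency axis, so
# the pooled spectrum is far from uniform.
y = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * y / 8)[:, None], (1, 64))
v = global_fft_features(stripes)     # 16-dimensional feature vector
```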
Enhance Egocentric Grasp Recognition Based Flex Sensor Under Low Illumination
17
Authors: Chana Chansri, Jakkree Srinonchat. 《Computers, Materials & Continua》 (SCIE, EI), 2022, Issue 6, pp. 4377-4389 (13 pages)
Egocentric recognition is an exciting area of computer vision research that acquires images and video from a first-person overview. However, images become noisy and dark under low-illumination conditions, making subsequent hand-detection tasks difficult. Image enhancement is therefore necessary to make buried detail more visible. This article addresses the challenge of egocentric hand grasp recognition in low light by utilizing a flex sensor and an image enhancement algorithm based on adaptive gamma correction with a weighting distribution. Initially, a flex sensor is attached to the thumb for object manipulation. The thumb placement, which differs with each grasp position on the object, changes the voltage of the flex sensor circuit. The average voltages are used to configure the weighting parameter that improves images in the enhancement stage. Moreover, the contrast and gamma functions are used to adjust for varying low-light conditions. These grasp images are then separated into training and testing sets, with pretrained deep neural networks as the feature extractor in a YOLOv2 detection network for the grasp recognition system. The proposed use of a flex sensor significantly improves the grasp recognition rate in low-light conditions.
Keywords: egocentric vision; hand grasp; flex sensor; low light enhancement
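Adaptive gamma correction with a weighting distribution commonly follows the AGCWD recipe (weight the intensity histogram, build its CDF, and apply a per-level gamma); a sketch under that assumption, where `alpha` is fixed here whereas the article derives the weighting parameter from the flex-sensor voltage:

```python
import numpy as np

def agc_weighting(img, alpha=0.5):
    """Brighten an 8-bit image via gamma correction driven by a
    weighted intensity distribution.

    Weighted PDF: pdf_w = pdf_max * ((pdf - pdf_min) /
    (pdf_max - pdf_min)) ** alpha; each gray level l is remapped to
    255 * (l / 255) ** (1 - cdf_w(l)), so frequent dark levels get
    the strongest lift.
    """
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    pdf = hist / hist.sum()
    pdf_w = pdf.max() * ((pdf - pdf.min()) / (pdf.max() - pdf.min())) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    levels = np.arange(256) / 255.0
    lut = np.round(255.0 * levels ** (1.0 - cdf_w)).astype(np.uint8)
    return lut[img]

# A dark image (all intensities below 80) is brightened overall.
dark = (np.random.default_rng(0).random((32, 32)) * 80).astype(np.uint8)
bright = agc_weighting(dark)
```

Because the exponent 1 - cdf_w(l) never exceeds 1, every gray level maps to a value at least as large as the original, which is the desired behavior for low-light enhancement.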
Multi-sensor control for precise assembly of optical components
18
Authors: Ma Li, Rong Weibin, Sun Lining. 《Chinese Journal of Aeronautics》 (SCIE, EI, CAS, CSCD), 2014, Issue 3, pp. 613-621 (9 pages)
To perform an optical assembly accurately, a multi-sensor control strategy is developed that includes an attitude measurement system, a vision system, a loss measurement system and a force sensor. A 3-DOF attitude measuring method using linear variable differential transformers (LVDTs) is designed to adjust the position and attitude relation between the spherical mirror and the resonator. A micro-vision feedback system is set up to extract the light beam and the diaphragm, which achieves coarse positioning of the spherical mirror during the optical assembly process. A rapid self-correlation method is presented to analyze the spectrum signal for fine positioning. To prevent damage to the optical components and realize sealing of the resonator, a hybrid force-position control is constructed to regulate the contact force of the optical components. The experimental results show that the proposed multi-sensor control strategy accomplishes the precise assembly of the optical components, which consists of parallel adjustment, macro coarse adjustment, macro approach, micro fine adjustment, micro approach and optical contact, thereby validating the strategy.
Keywords: assembly; attitude measurement; force control; multi-sensor; vision
Research Progress on Multimodal Fusion Methods and Applications in Agriculture (Cited 21 times)
19
Authors: 李道亮, 赵晔, 杜壮壮. 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery) (PKU Core), 2025, Issue 1, pp. 1-15 (15 pages)
Multimodal fusion combines multi-source data to overcome the limitations of any single modality. In recent years, advances in sensors and remote sensing have provided richer data sources for crop monitoring: spectral, image, radar and thermal-infrared data are all widely used. With computer vision and data-analysis methods, phenotypic parameters and physicochemical characteristics of crops can be extracted from these data, helping to assess crop growth and guide agricultural production management. Most existing studies are based on single-modality data, which offers only one type of input, lacks a holistic view of the available information, and is vulnerable to single-modality noise; some studies have adopted multimodal fusion but still fail to fully consider the complex interactions between modalities. To analyze the potential of multimodal fusion in agriculture, this paper first describes the state-of-the-art multimodal fusion techniques and methods used in the field, then reviews their applications in crop identification, trait analysis, yield prediction, stress analysis, and pest and disease diagnosis. It analyzes outstanding problems such as low data utilization, difficult extraction of effective features, and overly simple fusion schemes, and offers an outlook on future development, aiming to promote precision agricultural management and higher production efficiency through multimodal fusion.
Keywords: multimodal fusion; sensors; remote sensing; crop monitoring; computer vision; precision agricultural management
Multi-Layer Multi-Pass Weld Seam Recognition Based on a Deep Residual Network (Cited 1 time)
20
Authors: 何俊杰, 王传睿, 王天琪. 《天津工业大学学报》 (Journal of Tiangong University) (PKU Core), 2025, Issue 1, pp. 91-96 (6 pages)
To ensure weld-seam tracking accuracy and separate the laser stripe from strong arc light and spatter, a laser stripe segmentation algorithm based on a deep residual (SRNU) network is proposed. The algorithm feeds images containing arc light into the SRNU model, improves the encoder layers embedded in the Resunet network by adding SE modules and grouped residual modules, and extracts and parses multi-level feature information. The results show that, compared with the Resunet algorithm, the proposed algorithm improves the mean intersection-over-union, precision, recall and F1 score by 0.79%, 1.38%, 0.50% and 0.91%, respectively, indicating good robustness and strong anti-interference capability: even under complex working conditions, the laser stripe can be separated from strong arc light and spatter.
Keywords: structured-light vision sensor; deep learning; multi-layer multi-pass weld seam; weld seam recognition; deep residual network; laser stripe segmentation algorithm