The latest developments in bio-inspired neuromorphic vision sensors can be summarized in three keywords: smaller, faster, and smarter. (1) Smaller: Devices are becoming more compact by integrating previously separated components such as sensors, memory, and processing units. As a prime example, the transition from traditional sensory vision computing to in-sensor vision computing has shown clear benefits, such as simpler circuitry, lower power consumption, and less data redundancy. (2) Faster: Owing to the nature of physics, smaller and more integrated devices can detect, process, and react to input more quickly. In addition, the methods for sensing and processing optical information using various materials (such as oxide semiconductors) are evolving. (3) Smarter: Owing to these two main research directions, we can expect advanced applications such as adaptive vision sensors, collision sensors, and nociceptive sensors. This review mainly focuses on the recent progress, working mechanisms, image pre-processing techniques, and advanced features of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
Conventional frame-based image sensors suffer greatly from high energy consumption and latency. Mimicking the neurobiological structures and functionalities of the retina provides a promising way to build a neuromorphic vision sensor with highly efficient image processing. In this review article, we start with a brief introduction to explain the working mechanism and the challenges of conventional frame-based image sensors, and introduce the structure and functions of the biological retina. In the main section, we overview recent developments in neuromorphic vision sensors, including the silicon retina based on conventional Si CMOS digital technologies, and neuromorphic vision sensors implemented with emerging devices. Finally, we provide a brief outline of the prospects and outlook for the development of this field.
Remarkable developments in image recognition technology are driving demand for more advanced imaging devices. In recent years, traditional image sensors, as the go-to imaging devices, have made substantial progress in their optoelectronic characteristics and functionality. Moreover, a new breed of imaging device with information processing capability, known as the neuromorphic vision sensor, has been developed by mimicking biological vision. In this review, we delve into the recent progress of imaging devices, specifically image sensors and neuromorphic vision sensors. The review starts by introducing their core components, namely photodetectors and photonic synapses, with a strong emphasis on device structures, working mechanisms, and key performance parameters. It then summarizes noteworthy achievements in both image sensors and neuromorphic vision sensors, including advancements in large-scale and high-resolution imaging, filter-free multispectral recognition, polarization sensitivity, flexibility, hemispherical designs, and self-powered operation of image sensors, as well as in neuromorphic imaging and data processing, environmental adaptation, and ultra-low power consumption of neuromorphic vision sensors. Finally, the challenges and prospects that lie ahead in the ongoing development of imaging devices are addressed.
Artificial visual sensors (AVSs) with bio-inspired sensing and neuromorphic signal processing are essential for next-generation intelligent systems. Conventional optoelectronic devices employed in AVSs handle sensing, processing, and memorization as discrete operations, and are not ideal for applications requiring shape deformation to achieve wide fields of view and deep depths of field. Here, we present stretchable artificial visual sensors (S-AVSs) capable of concurrently sensing and processing optical signals while adapting to shape deformations. Specifically, these S-AVSs use a stretchable transistor structure with a meticulously engineered photosensitive semiconductor layer comprising an organic semiconductor, a thermoplastic elastomer, and cesium lead bromide quantum dots (CsPbBr_(3) QDs). They exhibit synaptic behaviors such as excitatory postsynaptic current (EPSC) and paired-pulse facilitation (PPF) under optical signals, maintaining functionality under 30% strain and repeated stretching. The nonlinear response and fading memory effect support in-sensor reservoir computing, achieving image recognition accuracies of 97.46% and 97.1% at 0% and 30% strain, respectively.
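A minimal sketch of the in-sensor reservoir computing idea described above, assuming a simple leaky-integrator device model with a nonlinear, fading-memory response and a trained linear readout; the device model, the dataset (8×8 digits rather than the paper's images), and all parameter values are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of in-sensor reservoir computing with fading-memory devices.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def device_response(pulse_train, decay=0.6, gain=1.0):
    """One photosensitive 'device': an EPSC-like state that rises with each
    optical pulse and decays between pulses (fading memory)."""
    state = 0.0
    for p in pulse_train:
        state = decay * state + np.tanh(gain * p)   # nonlinear accumulation
    return state

def reservoir_features(image, threshold=0.5):
    """Each pixel row is treated as a temporal pulse sequence driving one
    device; the final device states form the reservoir feature vector."""
    pulses = (image / 16.0 > threshold).astype(float)
    return np.array([device_response(row) for row in pulses])

digits = load_digits()                               # 8x8 gray-scale digits
X = np.array([reservoir_features(img) for img in digits.images])
X_tr, X_te, y_tr, y_te = train_test_split(X, digits.target, random_state=0)

readout = LogisticRegression(max_iter=2000)          # trained linear readout
readout.fit(X_tr, y_tr)
print("readout accuracy:", readout.score(X_te, y_te))
```

Only the linear readout is trained; the "reservoir" is the physical nonlinearity of the device, which is what makes this scheme attractive for in-sensor computing.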
Neuromorphic systems represent a promising avenue for the development of the next generation of artificial intelligence hardware. Machine vision, one of the cores of artificial intelligence, requires system-level support with low power consumption, low latency, and parallel computing. Neuromorphic vision sensors provide an efficient solution for machine vision by simulating the structure and function of the biological retina. Optoelectronic synapses, which use light as the main stimulus to realize the dual functions of photosensing and synaptic behavior, are the basic units of the neuromorphic vision sensor. It is therefore necessary to develop various optoelectronic synaptic devices to expand the application scenarios of neuromorphic vision systems. This review compares the structure and function of biological and artificial retina systems, and introduces various optoelectronic synaptic devices based on low-dimensional materials along with their working mechanisms. In addition, advanced applications of optoelectronic synapses as neuromorphic vision sensors are comprehensively summarized. Finally, the challenges and prospects in this field are briefly discussed.
A digital still camera image processing system on a chip, different from a video camera system, is presented for mobile phones to reduce power consumption and size. A new color interpolation algorithm is proposed to enhance image quality. The system also performs fixed-pattern noise (FPN) reduction, color correction, gamma correction, RGB/YUV space conversion, etc. The chip is controlled through sensor registers over an inter-integrated circuit (I2C) interface. The supply voltage for both the front-end analog and the pad circuits is 2.8 V, and the voltage for the image signal processing is 1.8 V. Running from an external 13.5-MHz clock, the chip achieves a video data rate of 30 frames/s, and the measured power dissipation is about 75 mW.
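To make two of the listed pipeline stages concrete, the following sketch models gamma correction and the RGB/YUV space conversion in floating point; the real chip uses fixed-point hardware, and the BT.601 full-range coefficients shown are an assumption about which YUV variant is meant.

```python
# Illustrative float model of gamma correction and RGB -> YUV conversion.
import numpy as np

def gamma_correct(rgb, gamma=2.2):
    """Apply display gamma to linear RGB values in [0, 255]."""
    return 255.0 * (rgb / 255.0) ** (1.0 / gamma)

def rgb_to_yuv(rgb):
    """BT.601 full-range RGB -> YCbCr, the usual 'RGB/YUV space conversion'."""
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    yuv = rgb @ m.T
    yuv[..., 1:] += 128.0          # offset the chroma channels
    return np.clip(yuv, 0, 255)

pixel_block = np.random.randint(0, 256, size=(4, 4, 3)).astype(float)
print(rgb_to_yuv(gamma_correct(pixel_block)).astype(np.uint8))
```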
A bioinspired neuromorphic machine vision system (NMVS) that integrates retinomorphic sensing and neuromorphic computing into one monolithic system is regarded as the most promising architecture for visual perception. However, the large intensity range of natural light and the complex illumination conditions of actual scenarios require the NMVS to dynamically adjust its sensitivity to the environment, just like the visual adaptation function of the human retina. Although some opto-sensors with scotopic or photopic adaptation have been developed, NMVSs, especially fully flexible NMVSs, with both scotopic and photopic adaptation functions are rarely reported. Here we propose an ion-modulation strategy to dynamically adjust the photosensitivity and time-varying activation/inhibition characteristics depending on the illumination conditions, and develop a flexible ion-modulated phototransistor array based on a MoS_(2)/graphdiyne heterostructure, which can execute both retinomorphic sensing and neuromorphic computing. By controlling the intercalated Li^(+) ions in graphdiyne, both scotopic and photopic adaptation functions are demonstrated successfully. A fully flexible NMVS consisting of front-end retinomorphic vision sensors and a back-end convolutional neural network is constructed based on the as-fabricated 28×28 device array, demonstrating high recognition accuracies for both dim and bright images and robust flexibility. This effort toward a fully flexible and monolithic NMVS paves the way for its applications in wearable scenarios.
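A minimal sketch of a back-end convolutional neural network sized for the 28×28 array outputs mentioned above, written in PyTorch; the layer counts, channel widths, and class count are assumptions, since the abstract does not specify the network architecture.

```python
# Hedged sketch of a small back-end CNN for 28x28 sensor-array frames.
import torch
import torch.nn as nn

class BackEndCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):                                # x: (N, 1, 28, 28)
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a batch of simulated photocurrent maps from the array.
frames = torch.rand(4, 1, 28, 28)
print(BackEndCNN()(frames).shape)    # torch.Size([4, 10])
```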
To solve the problem of low measurement accuracy caused by uneven imaging resolution, we develop a three-dimensional catadioptric vision sensor that uses 20 to 100 lasers arranged in a circular array, called omnidirectional dot matrix projection (ODMP). Based on the imaging characteristics of the sensor, the ODMP can image the target area with high image resolution. The proposed sensor with ODMP can minimize the loss of detail information by adjusting the projection density. In evaluating the performance of the sensor, real experiments show that the designed sensor achieves high efficiency and high precision in measuring the inner surfaces of pipelines.
The service cycle and dynamic performance of structural parts are affected by the weld grinding accuracy and surface consistency. Because of factors such as assembly errors and thermal deformation, the actual track of the robot does not coincide with the theoretical track when the weld is ground offline, resulting in poor workpiece surface quality. Considering these problems, in this study, a vision sensing-based online correction system for robotic weld grinding was developed. The system mainly included three subsystems: weld feature extraction, grinding, and real-time robot control. The grinding equipment was first set up as a substation for the robot using the WorkVisual software. The input/output (I/O) ports for communication between the robot and the grinding equipment were configured via the I/O mapping function to enable the robot to control the grinding equipment (start, stop, and speed control). Subsequently, the Ethernet KRL software package was used to write the data interaction structure to realize real-time communication between the robot and the laser vision system. To correct the measurement error caused by bending deformation of the workpiece, we established a surface profile model of the base material in the weld area using a polynomial fitting algorithm to compensate the measurement data. After correction, the extracted weld width and height errors were reduced by 2.01% and 9.3%, respectively. Online weld seam extraction and correction experiments verified the effectiveness of the system's correction function, and the system could keep the grinding trajectory error within 0.2 mm. The reliability of the system was verified through actual weld grinding experiments. The roughness, Ra, could reach 0.504 µm, and the average residual height was within 0.21 mm. In this study, we developed a vision sensing-based online correction system for robotic weld grinding with a good correction effect and high robustness.
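A hedged sketch of the polynomial-fitting compensation idea: fit the base-material surface on either side of the weld and measure the weld profile relative to that fitted surface. The polynomial degree, the synthetic bent profile, and the masking window are illustrative assumptions, not the paper's actual data or parameters.

```python
# Hedged sketch: compensate a laser profile for workpiece bending deformation.
import numpy as np

x = np.linspace(0, 60.0, 601)                      # position across the seam (mm)
bend = 0.002 * (x - 30.0) ** 2                     # bending deformation of workpiece
weld = 1.5 * np.exp(-((x - 30.0) / 4.0) ** 2)      # weld reinforcement bead
profile = bend + weld + np.random.normal(0, 0.01, x.size)   # measured laser profile

base_mask = (x < 20.0) | (x > 40.0)                # points away from the weld bead
coeffs = np.polyfit(x[base_mask], profile[base_mask], deg=2)
base_surface = np.polyval(coeffs, x)               # fitted base-material surface

compensated = profile - base_surface               # weld height w.r.t. base surface
print("estimated weld height: %.2f mm" % compensated.max())
```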
The dynamic behaviors of the keyhole and weld pool are coupled together in plasma arc welding, and the geometric variations of both the keyhole and the weld pool determine the weld quality. It is of great significance to simultaneously sense and monitor the keyhole and weld pool behaviors using a single low-cost vision sensor in the plasma arc welding process. In this study, the keyhole and weld pool were observed and measured under different levels of welding current using near-infrared sensing technology and a charge-coupled device (CCD) sensing system. The shapes and relative positions of the weld pool and keyhole under different conditions were compared and analyzed. The observation results lay a solid foundation for controlling weld quality and understanding the underlying process mechanisms.
A second-order divided difference filter (SDDF) is derived for integrating line-of-sight measurements from a vision sensor with acceleration and angular rate measurements of the follower to estimate the precise relative position, velocity, and attitude of two unmanned aerial vehicles (UAVs). The second-order divided difference filter, which uses multidimensional interpolation formulations to approximate the nonlinear transformations, can achieve more accurate estimation and faster convergence from inaccurate initial conditions than the standard extended Kalman filter. The filter formulation is based on relative motion equations. The global attitude parameterization is given by a quaternion, while a generalized three-dimensional attitude representation is used to define the local attitude error. Simulation results compare the performance of the second-order divided difference filter with a standard extended Kalman filter approach.
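A minimal sketch of the second-order divided difference (Stirling interpolation) transform that the SDDF builds on, propagating a Gaussian state through a nonlinear function without Jacobians; the toy range/bearing function and all numbers are illustrative, not the paper's relative-motion or measurement models.

```python
# Hedged sketch of the DD2 (Stirling interpolation) transform.
import numpy as np

def dd2_transform(f, mean, cov, h=np.sqrt(3.0)):
    n = mean.size
    S = np.linalg.cholesky(cov)                    # square-root factor of cov
    f0 = f(mean)
    f_plus  = np.array([f(mean + h * S[:, i]) for i in range(n)])
    f_minus = np.array([f(mean - h * S[:, i]) for i in range(n)])

    # Stirling-interpolation mean (second-order accurate)
    y_mean = ((h**2 - n) / h**2) * f0 + np.sum(f_plus + f_minus, axis=0) / (2 * h**2)

    # Covariance from first- and second-order central differences
    d1 = (f_plus - f_minus) / (2 * h)
    d2 = (f_plus + f_minus - 2 * f0) * np.sqrt(h**2 - 1) / (2 * h**2)
    y_cov = d1.T @ d1 + d2.T @ d2
    return y_mean, y_cov

# Toy nonlinear measurement: range and bearing from a relative position.
f = lambda p: np.array([np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])])
m, P = dd2_transform(f, np.array([10.0, 5.0]), np.diag([0.5, 0.5]))
print(m, "\n", P)
```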
Based on the characteristics of a line structured light sensor, a rapid calibration method was established. With a coplanar reference target, the spatial pose between the camera and the optical plane can be calibrated using the camera's projective center and the light-stripe information on the camera's image plane. The calibration can be implemented without restricting the movement of the coplanar reference target and without auxiliary adjustment equipment. This method reduces the cost of calibration equipment, simplifies the calibration procedure, and improves calibration efficiency. Experiments show that the sensor can attain a relative accuracy of about 0.5%, which indicates the rationality and effectiveness of this method.
To perform an optical assembly accurately, a multi-sensor control strategy is developed that includes an attitude measurement system, a vision system, a loss measurement system, and a force sensor. A 3-DOF attitude measuring method using linear variable differential transformers (LVDTs) is designed to adjust the position and attitude relation between the spherical mirror and the resonator. A micro vision feedback system is set up to extract the light beam and the diaphragm, which achieves the coarse positioning of the spherical mirror in the optical assembly process. A rapid self-correlation method is presented to analyze the spectrum signal for fine positioning. To prevent damage to the optical components and realize sealing of the resonator, a hybrid force-position control is constructed to regulate the contact force on the optical components. The experimental results show that the proposed multi-sensor control strategy succeeds in accomplishing the precise assembly of the optical components, which consists of parallel adjustment, macro coarse adjustment, macro approach, micro fine adjustment, micro approach, and optical contact. The results therefore validate the multi-sensor control strategy.
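As a hedged illustration of the self-correlation step, the snippet below computes the normalized autocorrelation of a synthetic spectrum signal and reads off its dominant period; the signal, sample rate, and peak-picking heuristic are assumptions, not the paper's procedure.

```python
# Hedged sketch of self-correlation (autocorrelation) analysis of a signal.
import numpy as np

def autocorrelation(signal):
    """Normalized autocorrelation of a zero-mean version of the signal."""
    s = signal - signal.mean()
    ac = np.correlate(s, s, mode="full")[s.size - 1:]   # keep non-negative lags
    return ac / ac[0]

fs = 10_000.0                                   # sample rate (Hz), assumed
t = np.arange(0, 0.05, 1.0 / fs)
spectrum_signal = np.sin(2 * np.pi * 750.0 * t) + 0.3 * np.random.randn(t.size)

ac = autocorrelation(spectrum_signal)
lag = np.argmax(ac[10:]) + 10                   # dominant non-zero-lag peak (skip lag 0 region)
print("dominant period: %.4f ms" % (1000.0 * lag / fs))
```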
An on-site self-localization system is developed for a mobile robot operating in 3D environments with 3D landmarks. The robot recursively estimates its pose through a map estimator that fuses information collected from odometry and a single camera. We build nonlinear models for these two sensors and argue that the uncertainties arising from the robot's motion and the imprecise sensor measurements should all be embedded in and tracked by our system. We describe the uncertainty framework from a probabilistic-geometric viewpoint and propagate the uncertainty using the unscented transform, which passes it through the given nonlinear functions. In view of our robot's limited processing power, image features are extracted only in the neighborhood of the corresponding projected features. In addition, data association is evaluated by a statistical distance. Finally, a series of systematic experiments is conducted to demonstrate the reliable and accurate performance of our system.
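A hedged sketch of the unscented transform used for uncertainty propagation, in its standard sigma-point form; the toy bearing measurement, landmark position, and parameter values are illustrative, not the paper's sensor models.

```python
# Hedged sketch of the unscented transform for propagating pose uncertainty.
import numpy as np

def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)

    # 2n + 1 sigma points around the mean
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)

    Y = np.array([f(s) for s in sigma])            # push sigma points through f
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Toy nonlinearity: bearing of a landmark seen from an uncertain pose (x, y, theta).
landmark = np.array([4.0, 3.0])
bearing = lambda p: np.array([np.arctan2(landmark[1] - p[1], landmark[0] - p[0]) - p[2]])
m, P = unscented_transform(bearing, np.array([0.0, 0.0, 0.1]), np.diag([0.04, 0.04, 0.01]))
print(m, P)
```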
This paper expounds the application of machine vision theory, system composition, and technology in sow breeding-process monitoring, auxiliary judgment, and growth monitoring of the young. It also points out the problems and deficiencies in the application of machine vision technology, and discusses the development trends and prospects of machine vision technology in agricultural engineering. The machine vision application is a process in which the dynamic original image of the sow in estrus is collected with a CCD camera and captured by an image acquisition card, and is then converted, through gray-scale processing, median filtering, and thresholding, into a black/white/gray three-level image. Practitioners can extract the respective image information for sow estrus, pregnancy, and birth delivery. Applying the computer vision system on the sow farm effectively enhances the practitioners' objectivity and precision in assessing the whole process of sow birth delivery.
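A hedged sketch of the described preprocessing chain (gray-scale conversion, median filtering, and threshold-based quantization into a black/white/gray image) using OpenCV; the threshold levels, kernel size, and the random stand-in frame are illustrative assumptions.

```python
# Hedged sketch of the gray-scale / median-filter / threshold preprocessing chain.
import cv2
import numpy as np

def preprocess(frame, low=85, high=170, ksize=5):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # gray-scale processing
    smoothed = cv2.medianBlur(gray, ksize)             # median filtering
    # Three-level quantization: 0 (black), 128 (gray), 255 (white)
    levels = np.digitize(smoothed, [low, high])
    return np.array([0, 128, 255], dtype=np.uint8)[levels]

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in for a CCD frame
result = preprocess(frame)
print(result.shape, np.unique(result))
```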
This paper theoretically analyzes and investigates the coordinate frames of a 3D vision scanning system, establishes the mathematical model of the system scanning process, and derives the relationship between the general non-orthonormal sensor coordinate system and the machine coordinate system, as well as the coordinate transformation matrix for the extrinsic calibration of the system.
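As a hedged illustration of what such an extrinsic calibration provides, the sketch below maps a point from the sensor coordinate system into the machine coordinate system with a homogeneous 4×4 transform; the rotation and translation values are made up for the example, and a truly non-orthonormal sensor frame would additionally need a general affine correction rather than a pure rotation.

```python
# Hedged sketch: sensor-frame point -> machine-frame point via extrinsics.
import numpy as np

def transform_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

theta = np.deg2rad(30.0)                     # example rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
T_machine_sensor = transform_matrix(R, np.array([100.0, 50.0, 20.0]))   # extrinsics

p_sensor = np.array([12.0, -3.5, 40.0, 1.0])          # point in sensor coordinates
p_machine = T_machine_sensor @ p_sensor                # same point in machine coordinates
print(p_machine[:3])
```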
This research is dedicated to developing a safety measure for human-machine cooperative systems, in which the machine region and the human region cannot be separated because of overlap and movement of both humans and machines. Our proposal is to automatically monitor moving objects by an image sensing/recognition method, so that the machine system can obtain sufficient information about the environmental situation and production progress at any time, and the machines can accordingly take corresponding actions automatically to avoid hazards. For this purpose, two types of monitoring systems are proposed. The first type is based on an omnidirectional vision sensor, and the second is based on a stereo vision sensor. Each type may be used alone or together with the other, depending on the safety system's requirements and the specific situation of the manufacturing field to be monitored. In this paper, these two types are described, and, for the application of these image sensors to safety control, the construction of a hierarchical safety system is proposed.
Building fences to manage cattle grazing can be very expensive and cost-inefficient, and fences do not provide dynamic control over the area in which the cattle graze. Existing virtual fencing techniques for the control of cattle herds, based on polygon-coordinate definitions of boundaries, are limited in land-mass coverage and dynamism. This work seeks to develop a more robust and improved monocular vision-based boundary avoidance for a non-invasive stray control system for cattle, with a view to increasing land-mass coverage and dynamism in virtual fencing techniques. The monocular vision-based depth estimation is modeled using the global Fourier Transform (FT) and the local Wavelet Transform (WT) of the image structure of scenes (boundaries). The magnitude of the global Fourier Transform gives the dominant orientations and textural patterns of the image, while the local Wavelet Transform gives the dominant spectral features of the image and their spatial distribution. Each scene picture or image is described by a feature vector v, which contains the set of global (FT) and local (WT) statistics of the image. Scene or boundary distances are obtained by estimating the depth D from the image features v. Sound cues with intensity proportional to the magnitude of the depth D are applied to the animal's ears as stimuli. This brings about the desired control, as animals tend to move away from uncomfortable sounds.
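A hedged sketch of constructing the feature vector v from global Fourier statistics and local wavelet statistics, using a one-level Haar decomposition written directly in NumPy; the pooling grid, block sizes, and random stand-in image are illustrative assumptions, and the regression from v to depth D is not shown.

```python
# Hedged sketch of the global-FT + local-WT scene feature vector v.
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar decomposition via averaging/differencing of 2x2 blocks."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0                    # approximation band
    lh = (a + b - c - d) / 4.0                    # horizontal detail
    hl = (a - b + c - d) / 4.0                    # vertical detail
    hh = (a - b - c + d) / 4.0                    # diagonal detail
    return ll, lh, hl, hh

def scene_features(img, grid=4):
    # Global FT statistics: magnitude-spectrum energy pooled over frequency bands.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    global_stats = [blk.mean() for blk in np.array_split(mag, grid, axis=0)]
    # Local WT statistics: detail-band energy pooled over a spatial grid.
    _, lh, hl, hh = haar_dwt2(img)
    local = np.sqrt(lh**2 + hl**2 + hh**2)
    local_stats = [blk.mean()
                   for row in np.array_split(local, grid, axis=0)
                   for blk in np.array_split(row, grid, axis=1)]
    return np.array(global_stats + local_stats)    # feature vector v

img = np.random.rand(128, 128)                     # stand-in for a boundary scene image
v = scene_features(img)
print(v.shape)                                     # depth D would be regressed from v
```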