Human action recognition (HAR) is crucial for the development of efficient computer vision, where bioinspired neuromorphic perception visual systems have emerged as a vital solution to address transmission bottlenecks across sensor-processor interfaces. However, the absence of interactions among versatile biomimicking functionalities within a single device, which was developed for specific vision tasks, restricts the computational capacity, practicality, and scalability of in-sensor vision computing. Here, we propose a bioinspired vision sensor composed of a GaN/AlN-based ultrathin quantum-disks-in-nanowires (QD-NWs) array to mimic not only Parvo cells for high-contrast vision and Magno cells for dynamic vision in the human retina but also the synergistic activity between the two cell types for in-sensor vision computing. By simply tuning the bias voltage applied to each QD-NW-array-based pixel, we achieve two biosimilar photoresponse characteristics with slow and fast reactions to light stimuli that enhance the in-sensor image quality and HAR efficiency, respectively. Strikingly, the synergistic interplay of the two photoresponse modes within a single device markedly increases the HAR accuracy from 51.4% to 81.4% in the integrated artificial vision system. The demonstrated intelligent vision sensor offers a promising device platform for the development of highly efficient HAR systems and future smart optoelectronics.
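The slow and fast photoresponses act, in effect, as two temporal filters on the incident light: a persistent, Parvo-like response that integrates intensity for high-contrast stills, and a transient, Magno-like response that emphasizes change for motion cues. A minimal software sketch of how the two modes could be emulated and fused for HAR-style classification, assuming a generic grayscale frame sequence (all function names here are illustrative, not from the paper):

```python
import numpy as np

def parvo_like(frames, tau=0.9):
    """Slow, persistent response: leaky integration of intensity (still detail)."""
    state = np.zeros_like(frames[0], dtype=float)
    out = []
    for f in frames:
        state = tau * state + (1.0 - tau) * f   # slow decay retains image detail
        out.append(state.copy())
    return np.stack(out)

def magno_like(frames):
    """Fast, transient response: frame differencing emphasizes motion."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0, prepend=frames[:1]))

def fused_features(frames):
    """Concatenate statistics of both modes into one feature vector per clip."""
    p, m = parvo_like(frames), magno_like(frames)
    return np.concatenate([p.mean(axis=0).ravel(), m.mean(axis=0).ravel()])
```

Feeding `fused_features` of each clip to any off-the-shelf classifier mimics, at the software level, the accuracy benefit the paper attributes to operating both response modes within one device.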
Conventional frame-based image sensors suffer greatly from high energy consumption and latency. Mimicking the neurobiological structures and functionalities of the retina provides a promising way to build a neuromorphic vision sensor with highly efficient image processing. In this review article, we start with a brief introduction explaining the working mechanism and the challenges of conventional frame-based image sensors, and introduce the structure and functions of the biological retina. In the main section, we overview recent developments in neuromorphic vision sensors, including the silicon retina based on conventional Si CMOS digital technologies and neuromorphic vision sensors implemented with emerging devices. Finally, we provide a brief outlook on the prospects for the development of this field.
The latest developments in bio-inspired neuromorphic vision sensors can be summarized in three keywords: smaller, faster, and smarter. (1) Smaller: devices are becoming more compact by integrating previously separated components such as sensors, memory, and processing units. As a prime example, the transition from traditional sensory vision computing to in-sensor vision computing has shown clear benefits, such as simpler circuitry, lower power consumption, and less data redundancy. (2) Faster: owing to the nature of physics, smaller and more integrated devices can detect, process, and react to input more quickly. In addition, the methods for sensing and processing optical information using various materials (such as oxide semiconductors) are evolving. (3) Smarter: owing to these two main research directions, we can expect advanced applications such as adaptive vision sensors, collision sensors, and nociceptive sensors. This review mainly focuses on the recent progress, working mechanisms, image pre-processing techniques, and advanced features of two types of neuromorphic vision sensors based on near-sensor and in-sensor vision computing methodologies.
As positioning sensors, edge computation power, and communication technologies continue to develop, a moving agent can now sense its surroundings and communicate with other agents. By receiving spatial information from both its environment and other agents, an agent can use various methods and sensor types to localize itself. With its high flexibility and robustness, collaborative positioning has become a widely used method in both military and civilian applications. This paper introduces the fundamental concepts and applications of collaborative positioning, and reviews recent progress in the field based on camera, LiDAR (Light Detection and Ranging), wireless sensors, and their integration. The paper compares the current methods with respect to their sensor type, summarizes their main paradigms, and analyzes their evaluation experiments. Finally, the paper discusses the main challenges and open issues that require further research.
Remarkable developments in image recognition technology are triggering demand for more advanced imaging devices. In recent years, traditional image sensors, as the go-to imaging devices, have made substantial progress in their optoelectronic characteristics and functionality. Moreover, a new breed of imaging device with information-processing capability, known as the neuromorphic vision sensor, has been developed by mimicking biological vision. In this review, we delve into the recent progress of imaging devices, specifically image sensors and neuromorphic vision sensors. The review starts by introducing their core components, namely photodetectors and photonic synapses, with a strong emphasis on device structures, working mechanisms, and key performance parameters. It then summarizes the noteworthy achievements in both image sensors and neuromorphic vision sensors, including advancements in large-scale and high-resolution imaging, filter-free multispectral recognition, polarization sensitivity, flexibility, hemispherical designs, and self-powered operation of image sensors, as well as in neuromorphic imaging and data processing, environmental adaptation, and ultra-low power consumption of neuromorphic vision sensors. Finally, the challenges and prospects that lie ahead in the ongoing development of imaging devices are addressed.
Recent advancements in autonomous vehicle technologies are transforming intelligent transportation systems. Artificial intelligence enables real-time sensing, decision-making, and control on embedded platforms with improved efficiency. This study presents the design and implementation of an autonomous radio-controlled (RC) vehicle prototype capable of lane line detection, obstacle avoidance, and navigation through dynamic path planning. The system integrates image processing and ultrasonic sensing, utilizing a Raspberry Pi for vision-based tasks and an Arduino Nano for real-time control. Lane line detection is achieved through conventional image processing techniques, providing the basis for local path generation, while traffic sign classification employs a You Only Look Once (YOLO) model optimized with TensorFlow Lite to support navigation decisions. Images captured by the onboard camera are processed on the Raspberry Pi to extract lane geometry and calculate steering angles, enabling the vehicle to follow the planned path. In addition, ultrasonic sensors placed in three directions at the front of the vehicle detect obstacles and allow real-time path adjustment for safe navigation. Experimental results demonstrate stable performance under controlled conditions, highlighting the system's potential for scalable autonomous driving applications. This work confirms that deep learning methods can be efficiently deployed on low-power embedded systems, offering a practical framework for navigation, path planning, and intelligent transportation research.
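The abstract does not specify which conventional image-processing steps the lane detector uses; a typical pipeline of this kind on a Raspberry Pi combines Canny edge detection with a probabilistic Hough transform and derives the steering angle from the averaged lane-line direction. A minimal OpenCV sketch under those assumptions (thresholds and region masking are illustrative):

```python
import math
import cv2
import numpy as np

def steering_angle(frame):
    """Estimate a steering angle in degrees (0 = straight) from one camera frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    h, w = edges.shape
    edges[: h // 2, :] = 0                       # keep only the road (lower half)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=30)
    if lines is None:
        return 0.0
    vecs = []
    for x1, y1, x2, y2 in lines[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        if dy > 0:                               # orient every segment "up" the image
            dx, dy = -dx, -dy
        vecs.append((dx, dy))
    dx, dy = np.mean(vecs, axis=0)               # mean heading of the lane lines
    return math.degrees(math.atan2(dx, -dy))     # image y-axis points down
```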
In a laser displacement sensor measurement system, the laser beam direction is an important parameter; in particular, the azimuth and pitch angles are the most important parameters of a laser beam. In this paper, a laser beam direction measurement method based on monocular vision is proposed. First, a charge-coupled device (CCD) camera is placed above the base plane, and its position is adjusted and fixed so that the optical axis is nearly perpendicular to the base plane. The monocular vision localization model is established using a circular-aperture calibration board. The laser beam generating device is then placed and maintained at a fixed position on the base plane. At the same time, a special target block is placed on the base plane so that the laser beam projects onto the target and forms a laser spot. The CCD camera above the base plane can clearly acquire the laser spot and the image of the target block, so the two-dimensional (2D) image coordinates of the centroid of the laser spot can be extracted by a correlation algorithm. The target is moved in equal steps along the laser beam direction, and the spot and target images at each position are collected by the CCD camera. Using the relevant transformation formula combined with the intrinsic parameters of the target block, the 2D coordinates of the spot centroid are converted to three-dimensional (3D) coordinates in the base plane. Because the target is moved, the 3D coordinates of the spot centroid at different positions are obtained, and these 3D coordinates are fitted to a spatial straight line that represents the laser beam to be measured. In the experiment, the target parameters are measured by high-precision instruments, and the camera is calibrated with a high-precision calibration board to establish the corresponding positioning model. The measurement accuracy is mainly determined by the monocular vision positioning accuracy and the centroid extraction accuracy. The experimental results show that the maximum error of the angle between laser beams reaches 0.04° and the maximum error of the beam pitch angle reaches 0.02°.
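The final step, synthesizing the moving-target centroids into a spatial straight line, is a standard least-squares fit; the azimuth and pitch angles then follow from the fitted direction vector. A minimal sketch of that step (not the paper's algorithm), using an SVD-based 3D line fit:

```python
import numpy as np

def beam_direction(points):
    """Fit a 3D line to spot centroids (N x 3); return (azimuth, pitch) in degrees.

    The best-fit direction is the principal right-singular vector of the
    centered points; azimuth is measured in the base (x-y) plane, and
    pitch is the elevation from that plane.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    d = vt[0]                          # unit direction of the fitted line
    if d[2] < 0:                       # fix sign so pitch is reported upward
        d = -d
    azimuth = np.degrees(np.arctan2(d[1], d[0]))
    pitch = np.degrees(np.arcsin(np.clip(d[2], -1.0, 1.0)))
    return azimuth, pitch
```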
The service cycle and dynamic performance of structural parts are affected by the weld grinding accuracy and surface consistency. Because of factors such as assembly errors and thermal deformation, the actual track of the robot does not coincide with the theoretical track when the weld is ground offline, resulting in poor workpiece surface quality. Considering these problems, in this study, a vision sensing-based online correction system for robotic weld grinding was developed. The system mainly included three subsystems: weld feature extraction, grinding, and robot real-time control. The grinding equipment was first set as a substation for the robot using the WorkVisual software. The input/output (I/O) ports for communication between the robot and the grinding equipment were configured via the I/O mapping function to enable the robot to control the grinding equipment (start, stop, and speed control). Subsequently, the Ethernet KRL software package was used to write the data interaction structure to realize real-time communication between the robot and the laser vision system. To correct the measurement error caused by the bending deformation of the workpiece, we established a surface profile model of the base material in the weld area using a polynomial fitting algorithm to compensate for the measurement data. The corrected extracted weld width and height errors were reduced by 2.01% and 9.3%, respectively. Online weld seam extraction and correction experiments verified the effectiveness of the system's correction function, and the system could control the grinding trajectory error within 0.2 mm. The reliability of the system was verified through actual weld grinding experiments. The roughness, Ra, could reach 0.504 µm, and the average residual height was within 0.21 mm. In this study, we developed a vision sensing-based online correction system for robotic weld grinding with a good correction effect and high robustness.
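The bending compensation amounts to fitting a low-order polynomial to the base-material surface on either side of the seam and measuring the weld cross-section relative to that fitted surface rather than an ideal plane. A minimal numpy sketch of the idea for one scanned cross-section (the polynomial order and the weld mask are assumptions; the paper does not state them):

```python
import numpy as np

def compensate_profile(x, z, weld_mask, order=2):
    """Remove workpiece bending from one laser-scanned cross-section.

    x, z      : 1D arrays, lateral position and measured height
    weld_mask : boolean array, True where the weld bead lies
    order     : polynomial order modeling the base-material surface
    """
    # Fit the base material only, excluding the weld region from the fit.
    coeffs = np.polyfit(x[~weld_mask], z[~weld_mask], order)
    base = np.polyval(coeffs, x)
    residual = z - base                      # height relative to the bent surface
    height = residual[weld_mask].max()       # compensated weld height
    width = x[weld_mask].max() - x[weld_mask].min()
    return residual, width, height
```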
Based on the characteristics of the line structured light sensor, a rapid calibration method was established. With a coplanar reference target, the spatial pose between the camera and the optical plane can be calibrated using the camera's projective center and the light information in the camera's image plane. This calibration method can be implemented without restricting the movement of the coplanar reference target and without auxiliary adjustment equipment. The method has been applied in practice; it decreases the cost of the calibration equipment, simplifies the calibration procedure, and improves calibration efficiency. Experiments show that the sensor can attain a relative accuracy of about 0.5%, which indicates the rationality and effectiveness of this method.
This research is dedicated to developing a safety measure for human-machine cooperative systems, in which the machine region and the human region cannot be separated because of overlap and movement of both humans and machines. Our proposal is to automatically monitor the moving objects by an image sensing/recognition method, such that the machine system can obtain sufficient information about the environment and the production progress at any time, and the machines can accordingly take corresponding actions automatically to avoid hazards. For this purpose, two types of monitoring systems are proposed. The first type is based on an omnidirectional vision sensor, and the second on a stereo vision sensor. Each type may be used alone or together with the other, depending on the safety system's requirements and the specific situation of the manufacturing field to be monitored. In this paper, these two types are described, and, for the application of these image sensors to safety control, the construction of a hierarchical safety system is proposed.
This paper theoretically analyzes the coordinate frames of a 3D vision scanning system, establishes the mathematical model of the system's scanning process, and derives the relationship between the general non-orthonormal sensor coordinate system and the machine coordinate system, together with the coordinate transformation matrix of the extrinsic calibration for the system.
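The abstract gives no formulas, but an extrinsic calibration of this kind is conventionally written in homogeneous coordinates, with the non-orthonormality of the sensor axes absorbed into a general linear part; a hedged reconstruction of its likely form:

```latex
\[
\mathbf{p}_m = A\,\mathbf{p}_s + \mathbf{t},
\qquad
\begin{pmatrix} \mathbf{p}_m \\ 1 \end{pmatrix}
=
\begin{pmatrix} A & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}
\begin{pmatrix} \mathbf{p}_s \\ 1 \end{pmatrix},
\]
```

where \(\mathbf{p}_s\) is a point in the sensor frame, \(\mathbf{p}_m\) its image in the machine frame, and the linear part \(A\) reduces to a pure rotation \(R\) only when the sensor coordinate system is orthonormal.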
The dynamic behaviors of the keyhole and the weld pool are coupled in plasma arc welding, and the geometric variations of both determine the weld quality. It is therefore of great significance to simultaneously sense and monitor keyhole and weld pool behavior with a single low-cost vision sensor during the plasma arc welding process. In this study, the keyhole and weld pool were observed and measured under different levels of welding current using near-infrared sensing technology and a charge-coupled device (CCD) sensing system. The shapes and relative positions of the weld pool and keyhole under different conditions were compared and analyzed. The observation results lay a solid foundation for controlling weld quality and understanding the underlying process mechanisms.
A second-order divided difference filter (SDDF) is derived for integrating line-of-sight measurements from a vision sensor with acceleration and angular rate measurements of the follower to estimate the precise relative position, velocity, and attitude of two unmanned aerial vehicles (UAVs). The second-order divided difference filter, which uses multidimensional interpolation formulations to approximate the nonlinear transformations, can achieve more accurate estimation and faster convergence from inaccurate initial conditions than a standard extended Kalman filter. The filter formulation is based on relative motion equations. The global attitude parameterization is given by a quaternion, while a generalized three-dimensional attitude representation is used to define the local attitude error. Simulation results compare the performance of the second-order divided difference filter with a standard extended Kalman filter approach.
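The "multidimensional interpolation formulation" behind a divided difference filter is Stirling's interpolation: analytic Jacobians are replaced by central divided differences evaluated at perturbed points. In the scalar case (the multidimensional filter applies the same operators along the columns of a square-root covariance factor), the second-order expansion reads:

```latex
\[
f(\bar{x}+\Delta x) \;\approx\; f(\bar{x})
 \;+\; \frac{f(\bar{x}+h)-f(\bar{x}-h)}{2h}\,\Delta x
 \;+\; \frac{f(\bar{x}+h)+f(\bar{x}-h)-2f(\bar{x})}{2h^{2}}\,\Delta x^{2},
\]
```

with the step length commonly chosen as \(h^{2}=3\) for Gaussian priors; retaining the second-order term is what gives the SDDF its accuracy advantage over the first-order linearization of the EKF.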
This paper expounds the application of machine vision theory, composition, and technology in monitoring the sow breeding process, assisting judgment, and monitoring the growth of the young. It also points out the problems and deficiencies in the application of machine vision technology, and discusses the development trends and prospects of machine vision technology in agricultural engineering. The application of machine vision is a process in which the dynamic original image of the sow's estrus is collected with a CCD camera; after median filtering and gray-scale processing, a three-level black-white-gray image is produced with an image acquisition card by adjusting the threshold value. Practitioners can extract the respective image information for sow estrus, pregnancy, and birth delivery. Applying the computer vision system on the sow farm effectively enhances the practitioners' objectivity and precision in assessing the whole process of sow birth delivery.
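The described pipeline (CCD capture, median filtering, gray-scale processing, and threshold-based separation into black, white, and gray levels) maps directly onto standard OpenCV operations. A minimal sketch, assuming two manually chosen thresholds (the paper does not report its threshold values):

```python
import cv2
import numpy as np

def three_level_segment(frame_bgr, t_low=85, t_high=170):
    """Median-filter, gray-convert, and quantize a frame into three levels.

    Pixels below t_low -> black (0), above t_high -> white (255),
    in between -> gray (128). Thresholds here are illustrative.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)       # suppress impulse noise
    out = np.full_like(gray, 128)        # default: gray band
    out[gray < t_low] = 0
    out[gray > t_high] = 255
    return out
```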
Building fences to manage cattle grazing can be very expensive and cost-inefficient, and fences do not provide dynamic control over the area in which the cattle graze. Existing virtual fencing techniques for the control of cattle herds, based on polygon-coordinate definition of boundaries, are limited in land-mass coverage and dynamism. This work seeks to develop a more robust, improved monocular-vision-based boundary avoidance for a non-invasive stray control system for cattle, with a view to increasing land-mass coverage and dynamism in virtual fencing techniques. The monocular-vision-based depth estimation is modeled using the global Fourier Transform (FT) and local Wavelet Transform (WT) of the image structure of scenes (boundaries). The magnitude of the global Fourier Transform gives the dominant orientations and textural patterns of the image, while the local Wavelet Transform gives the dominant spectral features of the image and their spatial distribution. Each scene picture or image is described by a feature vector v, which contains the set of global (FT) and local (WT) statistics of the image. Scene or boundary distances are obtained by estimating the depth D from the image features v. Sound cues with intensity equivalent to the magnitude of the depth D are applied to the animal's ears as stimuli. This brings about the desired control, as animals tend to move away from uncomfortable sounds.
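A feature vector of this kind can be assembled from the FFT magnitude (global statistics) and one level of a 2D wavelet decomposition (local statistics). A minimal sketch using numpy and PyWavelets; the specific statistics, wavelet, and the linear depth model are assumptions, since the abstract does not fix them:

```python
import numpy as np
import pywt  # PyWavelets

def scene_features(img):
    """Feature vector v from global FT and local WT statistics (sketch)."""
    img = np.asarray(img, dtype=float)
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))   # global spectrum
    global_stats = [mag.mean(), mag.std()]
    cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')          # local subbands
    local_stats = [np.abs(b).mean() for b in (cA, cH, cV, cD)]
    return np.array(global_stats + local_stats)

def sound_cue(v, w, b=0.0):
    """Depth D estimated from features v; cue intensity tracks |D|."""
    depth = float(np.dot(w, v) + b)   # a linear depth model is an assumption
    return abs(depth)                 # intensity scaled to the depth magnitude
```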
Egocentric recognition is an exciting area of computer vision research that acquires images and video from a first-person perspective. However, images become noisy and dark under low-illumination conditions, making subsequent hand detection tasks difficult. Thus, image enhancement is necessary to make buried detail more visible. This article addresses the challenge of egocentric hand grasp recognition in low-light conditions by utilizing a flex sensor and an image enhancement algorithm based on adaptive gamma correction with weighting distribution. Initially, a flex sensor is installed on the thumb for object manipulation. The thumb rests at a different position on the object for each grasp, which changes the voltage of the flex sensor circuit. The average voltages are used to configure the weighting parameter that improves images in the image enhancement stage. Moreover, contrast and gamma functions are used to adapt to varying low-light conditions. The grasp images are then split into training and testing sets, with pretrained deep neural networks serving as the feature extractor in a YOLOv2 detection network for the grasp recognition system. The proposed use of a flex sensor significantly improves the grasp recognition rate in low-light conditions.
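Adaptive gamma correction with weighting distribution (AGCWD) is a published enhancement method: the gray-level histogram is reshaped by a weighting exponent, and each level is remapped with a gamma taken from the weighted cumulative distribution. A minimal sketch of the standard formulation, with the article's sensor-derived weighting parameter represented by alpha (the exact voltage-to-alpha mapping is not given here):

```python
import numpy as np

def agcwd(gray, alpha=0.5):
    """Adaptive gamma correction with weighting distribution (sketch).

    gray  : uint8 image
    alpha : weighting exponent; in the article, derived from the
            flex-sensor voltage (that mapping is an assumption here).
    """
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    pdf = hist / hist.sum()
    pmin, pmax = pdf.min(), pdf.max()
    # Weighting distribution flattens dominant histogram peaks.
    pdf_w = pmax * ((pdf - pmin) / (pmax - pmin + 1e-12)) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    gamma = 1.0 - cdf_w                        # per-level adaptive gamma
    levels = np.arange(256) / 255.0
    lut = np.clip(255.0 * levels ** gamma, 0, 255).astype(np.uint8)
    return lut[gray]                           # apply the lookup table
```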
In order to perform an optical assembly accurately, a multi-sensor control strategy is developed that includes an attitude measurement system, a vision system, a loss measurement system, and a force sensor. A 3-DOF attitude measuring method using linear variable differential transformers (LVDTs) is designed to adjust the position and attitude relation between the spherical mirror and the resonator. A micro-vision feedback system is set up to extract the light beam and the diaphragm, which achieves the coarse positioning of the spherical mirror in the optical assembly process. A rapid self-correlation method is presented to analyze the spectrum signal for the fine positioning. To prevent damage to the optical components and to realize sealing of the resonator, a hybrid force-position control is constructed to control the contact force on the optical components. The experimental results show that the proposed multi-sensor control strategy succeeds in accomplishing the precise assembly of the optical components, which consists of parallel adjustment, macro coarse adjustment, macro approach, micro fine adjustment, micro approach, and optical contact. These results validate the multi-sensor control strategy.
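The abstract names hybrid force-position control without giving its law; a common realization for fragile optics is admittance-style control, in which the position command along the approach axis is corrected by a PI term on the contact-force error. A minimal sketch under that assumption (gains and setpoint are illustrative):

```python
class ForcePositionAxis:
    """Admittance-style force-position control along the approach axis (sketch).

    The position command is corrected by a PI law on the contact-force
    error, limiting the force applied to the optical components.
    Gains and the force setpoint are illustrative, not from the paper.
    """
    def __init__(self, f_ref=0.5, kp=2e-4, ki=5e-5, dt=1e-3):
        self.f_ref, self.kp, self.ki, self.dt = f_ref, kp, ki, dt
        self._integ = 0.0

    def step(self, z_cmd, f_meas):
        err = self.f_ref - f_meas                 # contact-force error (N)
        self._integ += err * self.dt
        return z_cmd + self.kp * err + self.ki * self._integ  # corrected command (mm)
```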