Journal Articles: 33 articles found
1. Design and Ground Verification for Vision-Based Relative Navigation Systems of Microsatellites
Authors: DU Ronghua, LIAO Wenhe, ZHANG Xiang. Transactions of Nanjing University of Aeronautics and Astronautics, 2025, Issue 1, pp. 37-55.
This paper presents the design and ground verification of vision-based relative navigation systems for microsatellites, offering a comprehensive hardware design solution and a robust experimental verification methodology for the practical implementation of vision-based navigation technology on microsatellite platforms. First, a low-power, lightweight, high-performance vision-based relative navigation optical sensor is designed. Subsequently, a ground verification system is designed for hardware-in-the-loop testing of vision-based relative navigation systems. Finally, the designed optical sensor and the proposed angles-only navigation algorithms are tested on the ground verification system. The results verify that the optical simulator, after geometric calibration, meets the requirements of hardware-in-the-loop testing of vision-based relative navigation systems. Based on the experimental results, the relative position accuracy of the angles-only navigation filter at the terminal time is improved by 25.5%, and the relative velocity accuracy is improved by 31.3%, compared with the results obtained with the optical simulator before geometric calibration.
Keywords: microsatellites; vision-based relative navigation; optical simulator; ground verification; angles-only navigation
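As a rough illustration of the measurement model behind such an angles-only filter, the sketch below computes azimuth/elevation bearings and their Jacobian for an EKF update; the camera-frame convention and state layout are assumptions, not details taken from the paper.

```python
import numpy as np

def angles_only_measurement(rel_pos):
    """Azimuth/elevation of the target as seen from the chaser camera.

    rel_pos : (3,) relative position of the target in the camera frame [m].
    Returns (azimuth, elevation) in radians.
    """
    x, y, z = rel_pos
    az = np.arctan2(y, x)                    # bearing in the x-y plane
    el = np.arctan2(z, np.hypot(x, y))       # elevation above that plane
    return np.array([az, el])

def measurement_jacobian(rel_pos):
    """Jacobian of the (az, el) measurement w.r.t. relative position,
    as used in the EKF update of an angles-only navigation filter."""
    x, y, z = rel_pos
    rho2 = x**2 + y**2
    r2 = rho2 + z**2
    return np.array([
        [-y / rho2,                       x / rho2,                      0.0],
        [-x * z / (r2 * np.sqrt(rho2)),  -y * z / (r2 * np.sqrt(rho2)),  np.sqrt(rho2) / r2],
    ])
```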
2. A Vision-based Robotic Navigation Method Using an Evolutionary and Fuzzy Q-Learning Approach
Authors: Roberto Cuesta-Solano, Ernesto Moya-Albor, Jorge Brieva, Hiram Ponce. Journal of Artificial Intelligence and Technology, 2024, Issue 4, pp. 363-369.
The paper presents a fuzzy Q-learning (FQL) and optical flow-based autonomous navigation approach. The FQL method makes decisions in an unknown environment without mapping, using motion information and a reinforcement signal fed into an evolutionary algorithm. The reinforcement signal is calculated by estimating the optical flow densities in regions of the camera image to determine whether they are "dense" or "thin", which relates to the proximity of objects. The results obtained show that the present approach improves the learning rate compared with a method that uses a simple reward system and lacks the evolutionary component. The proposed system was implemented in a virtual robotics environment using the CoppeliaSim software in communication with Python.
Keywords: CoppeliaSim; evolutionary algorithm; fuzzy Q-learning; optical flow; reinforcement learning; vision-based control; navigation
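A small sketch (not the authors' implementation) of how an optical-flow-density reinforcement signal could be formed: dense flow in an image region is treated as evidence of a nearby obstacle and penalized. The region layout, Farneback parameters, and threshold are assumptions.

```python
import cv2
import numpy as np

def flow_density_reward(prev_gray, gray, thresh=1.0):
    """Reward from optical-flow density in left/center/right image regions.

    Dense flow (large mean magnitude) in a region is taken as a close obstacle
    and contributes a negative reward; thin flow is considered safe.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)                    # per-pixel flow magnitude
    h, w = mag.shape
    regions = [mag[:, :w // 3], mag[:, w // 3:2 * w // 3], mag[:, 2 * w // 3:]]
    densities = [float(r.mean()) for r in regions]
    # Penalize dense regions, reward keeping all regions "thin".
    reward = sum(1.0 if d < thresh else -1.0 for d in densities) / len(densities)
    return reward, densities
```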
3. A vision-based navigation approach with multiple radial shape marks for indoor aircraft locating (Cited by 7)
Authors: Zhou Haoyin, Zhang Tao. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2014, Issue 1, pp. 76-84.
Since GPS signals are unavailable for indoor navigation, current research mainly focuses on vision-based locating with a single mark. An obvious disadvantage of this approach is that locating fails when the mark cannot be seen. The use of multiple marks can solve this problem; however, the extra process required to design and identify different marks significantly increases system complexity. In this paper, a novel vision-based locating method is proposed that uses marks with feature points arranged in a radial shape. The feature points of the marks consist of inner points and outer points. The positions of the inner points are the same in all marks, while the positions of the outer points differ between marks. Unlike traditional camera locating methods (the PnP methods), the proposed method calculates the camera location and the positions of the outer points simultaneously. The calculated positions of the outer points are then used to identify the mark, which makes navigation with multiple marks more efficient. Simulations and real-world experiments are carried out, and their results show that the proposed method is fast, accurate, and robust to noise.
Keywords: flexible mark; indoor aircraft; multiple marks; navigation; vision-based locating
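For context, the sketch below shows the conventional PnP locating step that the paper contrasts its method with: solving the camera pose from known inner-point coordinates with OpenCV. The point coordinates and intrinsics are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Baseline: standard PnP locating against the known inner points of a mark.
object_pts = np.array([[0.0, 0.0, 0.0],      # inner points in the mark frame [m]
                       [0.1, 0.0, 0.0],
                       [0.0, 0.1, 0.0],
                       [0.1, 0.1, 0.0]], dtype=np.float64)
image_pts = np.array([[320.0, 240.0],        # their detected pixel locations
                      [400.0, 238.0],
                      [318.0, 320.0],
                      [402.0, 322.0]], dtype=np.float64)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                           # assume an undistorted camera

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)
camera_pos = (-R.T @ tvec).ravel()           # camera position in the mark frame
print(ok, camera_pos)
```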
4. Vision-based Stabilization of Nonholonomic Mobile Robots by Integrating Sliding-mode Control and Adaptive Approach (Cited by 4)
Authors: CAO Zhengcai, YIN Longjie, FU Yili. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2013, Issue 1, pp. 21-28.
Vision-based pose stabilization of nonholonomic mobile robots has received extensive attention. At present, most solutions to the problem do not take the robot dynamics into account in the controller design, so these controllers have difficulty achieving satisfactory control in practical applications. Besides, many of the approaches suffer from initial speed and torque jumps, which are not practical in the real world. Considering both kinematics and dynamics, a two-stage visual controller for solving the stabilization problem of a mobile robot is presented, integrating adaptive control, sliding-mode control, and neural dynamics. In the first stage, an adaptive kinematic stabilization controller that generates the velocity command is developed based on Lyapunov theory. In the second stage, adopting the sliding-mode control approach, a dynamic controller with a variable speed function for reducing chattering is designed; it generates the torque command that makes the actual velocity of the mobile robot asymptotically reach the desired velocity. Furthermore, to handle the speed and torque jump problems, a neural dynamics model is integrated into the above controllers. The stability of the proposed control system is analyzed using Lyapunov theory. Finally, the control law is simulated in the perturbed case, and the results show that the control scheme solves the stabilization problem effectively. The proposed control law solves the speed and torque jump problems, overcomes external disturbances, and provides a new solution for the vision-based stabilization of mobile robots.
Keywords: nonholonomic mobile robots; vision-based stabilization; sliding-mode control; adaptive control; neural dynamics
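As a rough companion to the second stage, the sketch below shows a textbook sliding-mode velocity-tracking torque law with a boundary layer to reduce chattering; the paper's actual controller additionally includes the adaptive kinematic stage, a variable speed function, and the neural-dynamics model, none of which are reproduced here.

```python
import numpy as np

def sliding_mode_velocity_controller(v, v_des, v_des_dot, m_hat,
                                     k=5.0, eta=2.0, phi=0.1):
    """Generic sliding-mode torque law driving the actual velocity v toward
    the desired velocity v_des (both 2-vectors: linear, angular).

    This is a textbook-style sketch, not the paper's controller.
    m_hat is a (2, 2) nominal inertia estimate (an assumption).
    """
    e = v - v_des                              # velocity tracking error
    s = e                                      # first-order sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)          # boundary layer reduces chattering
    tau = m_hat @ (v_des_dot - k * s - eta * sat)
    return tau
```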
5. Neural-Fuzzy-Based Adaptive Sliding Mode Automatic Steering Control of Vision-based Unmanned Electric Vehicles (Cited by 3)
Authors: Jinghua Guo, Keqiang Li, Jingjing Fan, Yugong Luo, Jingyao Wang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 5, pp. 56-68.
This paper presents a novel neural-fuzzy-based adaptive sliding mode automatic steering control strategy to improve the driving performance of vision-based unmanned electric vehicles with time-varying and uncertain parameters. First, kinematic and dynamic models that accurately express the steering behavior of vehicles are constructed, and the relationship between the look-ahead time and vehicle velocity is revealed. Then, to overcome external disturbances, parametric uncertainties, and the time-varying features of vehicles, a neural-fuzzy-based adaptive sliding mode automatic steering controller is proposed to supervise the lateral dynamic behavior of unmanned electric vehicles; it consists of an equivalent control law and an adaptive variable structure control law. In this automatic steering control system, a neural network approximates the switching control gain of the variable structure control law, and a fuzzy inference system adjusts the thickness of the boundary layer in real time. The stability of the closed-loop neural-fuzzy-based adaptive sliding mode automatic steering control system is proven using Lyapunov theory. Finally, the results illustrate that the presented control scheme has excellent properties in terms of error convergence and robustness.
Keywords: vision-based unmanned electric vehicles; automatic steering; neural-fuzzy adaptive sliding control; vehicle lateral dynamics
6. Vision-Based Hand Gesture Recognition for Human-Computer Interaction: A Survey (Cited by 2)
Authors: GAO Yongqiang, LU Xiong, SUN Junbin, TAO Xianglin, HUANG Xiaomei, YAN Yuxing, LIU Jia. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2020, Issue 2, pp. 169-184.
Recently, vision-based gesture recognition (VGR) has become a hot research topic in human-computer interaction (HCI). Unlike gesture recognition methods based on data gloves or other wearable sensors, vision-based gesture recognition can lead to more natural and intuitive HCI. This paper reviews state-of-the-art vision-based gesture recognition methods across the different stages of the gesture recognition process, i.e., (1) image acquisition and pre-processing, (2) gesture segmentation, (3) gesture tracking, (4) feature extraction, and (5) gesture classification. The paper also analyzes the advantages and disadvantages of these various methods in detail. Finally, the challenges of vision-based gesture recognition in haptic rendering and future research directions are discussed.
Keywords: vision-based gesture recognition; human-computer interaction; state-of-the-art; feature extraction
7. A Robust Gaussian Mixture Model for Mobile Robots’ Vision-based Pose Estimation (Cited by 4)
Authors: Chuanqi CHENG, Xiangyang HAO, Jiansheng LI, Peng HU, Xu ZHANG. Journal of Geodesy and Geoinformation Science, 2019, Issue 3, pp. 79-90.
In dynamic environments, moving landmarks can degrade the accuracy of traditional vision-based pose estimation or even cause it to fail. To solve this problem, a robust Gaussian mixture model for vision-based pose estimation is proposed. A motion index is added to the traditional graph-based pose estimation model to describe each landmark's moving probability, transforming the classic Gaussian model into a Gaussian mixture model, which reduces the influence of moving landmarks on the optimization results. To improve the algorithm's robustness to noise, a covariance inflation model is employed in the residual equations. The expectation-maximization method for solving the Gaussian mixture problem is derived in detail, transforming the problem into a classic iterative least-squares problem. Experimental results demonstrate that, in dynamic environments, the proposed method outperforms the traditional method in both absolute and relative accuracy, while maintaining high accuracy in static environments. The proposed method effectively reduces the influence of moving landmarks in dynamic environments, making it well suited for the autonomous localization of mobile robots.
Keywords: vision-based navigation; graph optimization; pose estimation; covariance inflation; expectation maximization
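To make the mixture idea concrete, here is a minimal E-step sketch that weights each landmark by the probability that it is static given its residual; the component variances and prior are assumptions, and the paper's motion-index and covariance-inflation details are not reproduced.

```python
import numpy as np

def em_landmark_weights(residuals, sigma_static=1.0, sigma_moving=5.0,
                        prior_static=0.8):
    """E-step sketch: probability that each landmark is static given its
    reprojection residual, under a two-component Gaussian mixture.

    In an M-step these weights would scale each residual's information
    matrix in the graph optimization.  All parameters are illustrative.
    """
    r2 = np.asarray(residuals, dtype=float) ** 2
    lik_static = prior_static * np.exp(-0.5 * r2 / sigma_static**2) / sigma_static
    lik_moving = (1 - prior_static) * np.exp(-0.5 * r2 / sigma_moving**2) / sigma_moving
    return lik_static / (lik_static + lik_moving)   # responsibility of "static"
```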
8. Comparison of different pseudo-linear estimators for vision-based target motion estimation
Authors: Zian Ning, Yin Zhang, Shiyu Zhao. Control Theory and Technology (EI, CSCD), 2023, Issue 3, pp. 448-457.
Vision-based target motion estimation based on Kalman filtering or least-squares estimators is an important problem in many tasks such as vision-based swarming or vision-based target pursuit. In this paper, we focus on a problem that is very specific yet, we believe, important: various measurements can be formulated from the vision data, so which measurements should be used, and how? These questions are fundamental, but we notice that practitioners usually do not pay special attention to them and often make mistakes. Motivated by this, we formulate three pseudo-linear measurements based on the bearing and angle measurements, which are standard vision measurements that can be obtained. Different estimators based on Kalman filtering and least-squares estimation are established and compared through numerical experiments. It is revealed that correctly analyzing the measurement noise covariances is critical for the Kalman filtering-based estimators. When the variance of the original measurement noise is unknown, the pseudo-linear least-squares estimator with the smallest magnitude of transformed noise can be a good choice.
Keywords: pseudo-linear measurements; Kalman filter; least-squares estimator; vision-based target motion analysis; Fisher information
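One standard pseudo-linear formulation, sketched below for intuition: each unit bearing yields a linear constraint on the target position through an orthogonal projector, so stacked bearings give an ordinary least-squares problem. This stationary-target example is an assumption for illustration; the paper compares several formulations, including Kalman-filter variants.

```python
import numpy as np

def pseudo_linear_target_fix(observer_positions, bearings):
    """Each unit bearing g_i from observer position p_i gives the linear
    constraint (I - g_i g_i^T)(x - p_i) = 0 on the target position x.
    Stacking the constraints yields a least-squares problem."""
    A_rows, b_rows = [], []
    for p, g in zip(observer_positions, bearings):
        g = g / np.linalg.norm(g)
        P = np.eye(len(g)) - np.outer(g, g)      # orthogonal projector
        A_rows.append(P)
        b_rows.append(P @ p)
    A = np.vstack(A_rows)
    b = np.hstack(b_rows)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: three observers looking at a target at roughly (4, 3).
obs = [np.array([0.0, 0.0]), np.array([5.0, 0.0]), np.array([0.0, 5.0])]
target = np.array([4.0, 3.0])
brg = [(target - p) / np.linalg.norm(target - p) for p in obs]
print(pseudo_linear_target_fix(obs, brg))        # approximately [4. 3.]
```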
9. Prof. Ackermann, vision-based navigation and the PG master’s degree programme at HfT Stuttgart
Author: Michael Hahn. Geo-Spatial Information Science (SCIE, EI, CSCD), 2023, Issue 2, pp. 156-159.
The two topics of the article seem to have absolutely nothing to do with each other and, as can be expected in a contribution in honor and memory of Prof. Fritz Ackermann, they are linked in his person. Vision-based navigation was the focus of the doctoral thesis written by the author, the 29th and last PhD thesis supervised by Prof. Ackermann. The International Master’s Program Photogrammetry and Geoinformatics, which the author established with colleagues at Stuttgart University of Applied Sciences (HfT Stuttgart) in 1999, was a consequence of Prof. Ackermann’s benevolent promotion of international knowledge transfer in teaching. Both topics are reflected in this article; they provide further splashes of color in Prof. Ackermann’s oeuvre.
Keywords: vision-based navigation; visual odometry; photogrammetry and geoinformatics; international master’s degree program
10. Vision-based Localization from Three-Line Structures (TLS)
Authors: Zhao-Zheng Hu, Na Li. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2013, Issue 3, pp. 48-55.
This paper presents a novel vision-based localization algorithm based on three-line structures (TLS). Two types of TLS are investigated: 1) three parallel lines (Structure I); 2) two parallel lines and one orthogonal line (Structure II). From a single image of either structure, the camera pose can be uniquely computed for vision localization. The contributions of this paper are as follows: 1) both TLS structures can be used as simple and practical landmarks, which are widely available in daily life; 2) the proposed algorithm complements existing localization methods, which usually use complex landmarks, especially under partial blockage conditions; 3) compared with the general Perspective-3-Lines (P3L) problem, the camera pose can be uniquely computed from either structure. The proposed algorithm has been tested with both simulation and real image data. For a typical simulated indoor condition (75 cm landmark size, less than 7.0 m landmark-to-camera distance, and 0.5-pixel image noise), the mean localization errors from Structure I and Structure II are less than 3.0 cm, and the standard deviations are less than 3.0 cm and 1.5 cm, respectively. The algorithm is further validated with two real-image experiments. Within a 7.5 m × 7.5 m indoor scene, the overall relative localization errors from Structure I and Structure II are less than 2.2% and 2.3%, respectively, at about 6.0 m distance. The results demonstrate that the algorithm works well for practical vision localization.
Keywords: vision-based localization; three-line structure; camera pose; computer vision
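A standard building block behind parallel-line landmarks, sketched below for intuition rather than as the paper's full algorithm: the images of two parallel 3D lines intersect at a vanishing point, and back-projecting it through the intrinsics gives the lines' direction in the camera frame, which constrains the camera rotation. The intrinsics and pixel points are assumptions.

```python
import numpy as np

def line_homogeneous(p1, p2):
    """Homogeneous image line through two pixel points (3-vector)."""
    return np.cross(np.array([*p1, 1.0]), np.array([*p2, 1.0]))

def direction_from_parallel_lines(K, line_a_pts, line_b_pts):
    """Images of two parallel 3D lines intersect at a vanishing point v;
    K^-1 v gives the common direction of the lines in the camera frame."""
    la = line_homogeneous(*line_a_pts)
    lb = line_homogeneous(*line_b_pts)
    v = np.cross(la, lb)                        # vanishing point (homogeneous)
    d = np.linalg.inv(K) @ v
    return d / np.linalg.norm(d)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
d = direction_from_parallel_lines(
    K, ((100, 400), (300, 200)), ((200, 420), (380, 260)))
print(d)
```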
11. EyeScreen: A Vision-Based Gesture Interaction System
Authors: LI Shanqing (李善青), XU Yihua (徐一华), JIA Yunde (贾云得). Journal of Beijing Institute of Technology (EI, CAS), 2007, Issue 3, pp. 315-320.
EyeScreen is a vision-based interaction system which provides a natural gesture interface for human-computer interaction (HCI) by tracking human fingers and recognizing gestures. Multi-view video images are captured by two cameras facing a computer screen, which can be used to detect the clicking actions of a fingertip and improve the recognition rate. The system enables users to directly interact with rendered objects on the screen. The robustness of the system has been verified by extensive experiments with different user scenarios. EyeScreen can be used in many applications such as intelligent interaction and digital entertainment.
Keywords: vision-based interaction system; finger tracking; gesture recognition
12. Design and Evaluation of a Vision-Based UI for People with Large Cognitive-Motor Disabilities
Authors: Sergio Martínez, Antonio Peñalver, Juan Manuel Sáez. Journal of Biomedical Science and Engineering, 2021, Issue 4, pp. 185-201.
Recovering from multiple traumatic brain injury (TBI) is a very difficult task, depending on the severity of the lesions, the affected parts of the brain, and the level of damage (locomotor, cognitive, or sensory). Although there are some software platforms to help these patients recover part of the lost capacity, the variety of existing lesions and the different degrees to which they affect each patient do not allow the generalization of the appropriate treatments and tools in each case. The aim of this work is to design and evaluate a machine-vision-based UI (user interface) allowing patients with a high level of injury to interact with a computer. This UI is a tool for the therapy they follow and a way to communicate with their environment. The interface provides a set of specific activities, developed in collaboration with the multidisciplinary team that is currently evaluating each patient, to be used as part of the therapy they receive. The system has been successfully tested with two patients whose degree of disability prevents them from using other types of platforms.
Keywords: brain damage; rehabilitation; disabilities; vision-based user interface
13. A review on the deformation tracking methods in vision-based tactile sensing technology
Authors: Benzhu Guo, Shengyu Duan, Panding Wang, Hongshuai Lei, Zeang Zhao, Daining Fang. Acta Mechanica Sinica, 2025, Issue 10, pp. 146-164.
In daily life, humans need various senses to obtain information about their surroundings, and touch is one of the five major human sensing signals. Similarly, it is extremely important for robots to be endowed with tactile sensing ability. In recent years, vision-based tactile sensing technology has become a research hotspot and frontier in the field of tactile perception. Compared with conventional tactile sensing technologies, vision-based tactile sensing technologies are capable of obtaining high-quality, high-resolution tactile information at a lower cost, while not being limited by the size and shape of the sensor. Several previous articles have reviewed the sensing mechanisms and electrical components of vision-based sensors, greatly promoting innovation in tactile sensing. Different from existing reviews, this article concentrates on the underlying tracking methods that convert real-time images into deformation information, including contact, sliding, and friction. We show the history and development of both model-based and model-free tracking methods, among which model-based approaches rely on schematic mechanical theories, and model-free approaches mainly involve machine learning algorithms. Comparing the efficiency and accuracy of existing deformation tracking methods, future research directions of vision-based tactile sensors for smart manipulation and robots are also discussed.
Keywords: vision-based tactile sensing technology; model-based approaches; machine learning-based approaches; deformation tracking methods
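As a concrete example of one simple model-free tracking step, the sketch below follows the printed markers of an elastomer-based tactile sensor between frames with pyramidal Lucas-Kanade optical flow; the sensor layout and parameters are assumptions, and the review covers many other tracking families not shown here.

```python
import cv2
import numpy as np

def track_gel_markers(prev_gray, gray, prev_pts):
    """Track elastomer markers between frames with pyramidal Lucas-Kanade
    optical flow; the per-marker displacements approximate the surface
    deformation field used for contact and sliding estimation.
    """
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, prev_pts, None,
        winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    displacement = next_pts[good] - prev_pts[good]   # per-marker shift [px]
    return next_pts[good], displacement

# prev_pts would typically come from blob detection on a reference frame,
# e.g. cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10).
```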
14. Vision-Based Adaptive Prescribed-Time Control of UAV for Uncooperative Target Tracking with Performance Constraint (Cited by 1)
Authors: SHE Xuehua, MA Hui, REN Hongru, LI Hongyi. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2024, Issue 5, pp. 1956-1977.
This paper discusses the uncooperative target tracking control problem for an unmanned aerial vehicle (UAV) under a performance constraint and a scaled relative velocity constraint, in which the states of the uncooperative target can only be estimated through a vision sensor. Considering the limited detection range, a prescribed performance function is designed to ensure the transient and steady-state performance of the tracking system. Meanwhile, the scaled relative velocity constraint in the dynamic phase is taken into account, and a time-varying nonlinear transformation is used to solve the constraint problem, which not only removes the feasibility condition but also guarantees that the constraint boundaries are never violated. Finally, the practically prescribed-time stability technique is incorporated into the controller design procedure to guarantee that all signals within the closed-loop system are bounded. It is proved that the UAV can follow the uncooperative target at the desired relative position within a prescribed time, thereby improving the applicability of the vision-based tracking approach. Simulation results are presented to prove the validity of the proposed control strategy.
Keywords: nonlinear transformation; performance constraint; prescribed-time tracking; uncooperative target; vision-based measurement
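For readers unfamiliar with performance constraints, the sketch below shows a commonly used exponential prescribed performance envelope and the usual log-ratio error transformation; both forms are assumptions for illustration and may differ from the paper's time-varying nonlinear transformation.

```python
import numpy as np

def prescribed_performance_envelope(t, rho0=2.0, rho_inf=0.1, decay=1.0):
    """Common prescribed performance function (an assumption):
    rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf.
    Keeping the tracking error e(t) inside (-rho(t), rho(t)) enforces both
    transient and steady-state bounds."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Typical transformation mapping the constrained error e, with |e| < rho,
    to an unconstrained variable used in the controller design."""
    z = e / rho
    return 0.5 * np.log((1.0 + z) / (1.0 - z))   # blows up as |e| approaches rho

t = np.linspace(0.0, 5.0, 6)
print(prescribed_performance_envelope(t))
```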
15. A software platform for vision-based UAV autonomous landing guidance based on markers estimation (Cited by 7)
Authors: XU XiaoBin, WANG Zhao, DENG YiMin. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2019, Issue 10, pp. 1825-1836.
The paper concentrates on describing methods for UAV autonomous landing on a moving target. GPS navigation and vision-based navigation were employed during different stages of autonomous landing in the simulation environment and virtual reality. Uncertain marker estimation is the main step for UAV autonomous landing; it comprises convex hull transformation, interference preclusion, ellipse fitting, and specific feature matching. Furthermore, the complete visual measurement program and guidance strategy are proposed in this paper. Extensive comprehensive experiments indicate the significance and feasibility of the method of vision-based UAV autonomous landing on a moving target.
Keywords: autonomous landing; moving target; vision-based navigation; guidance; markers estimation
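The ellipse-fitting step named in the abstract can be illustrated with a few lines of OpenCV, as sketched below; the thresholding and contour filtering are assumptions and omit the paper's convex-hull transformation and interference-preclusion stages.

```python
import cv2

def fit_marker_ellipse(binary_mask):
    """Fit an ellipse to the largest contour of a thresholded marker image."""
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                       # fitEllipse needs >= 5 points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    return {"center": (cx, cy), "axes": (major, minor), "angle_deg": angle}
```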
16. Precise monocular vision-based pose measurement system for lunar surface sampling manipulator (Cited by 5)
Authors: WANG Gang, SHI ZhongChen, SHANG Yang, SUN XiaoLiang, ZHANG WenLong, YU QiFeng. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2019, Issue 10, pp. 1783-1794.
Space manipulators have been playing an increasingly important role in space exploration due to their flexibility and versatility. This paper designs a vision-based pose measurement system for a four-degree-of-freedom (4-DOF) lunar surface sampling manipulator relying on a monitoring camera and several fiducial markers. The system first employs double-plateau histogram equalization on the markers to improve robustness to varying noise and illumination. The markers are then accurately extracted at sub-pixel precision based on template matching and curved surface fitting. Finally, given the camera parameters and 3D reference points, the pose of the manipulator end-effector is solved from the 3D-to-2D point correspondences by combining a plane-based pose estimation method with a rigid-body transformation. Experimental results show that the system achieves high-precision positioning and orientation performance. The measurement error is within 3 mm in position and 0.2° in orientation, meeting the requirements for space manipulator operations.
Keywords: vision-based pose measurement; pose estimation; lunar surface sampling; sub-pixel corner detection; Chang’e 5
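A generic stand-in for the plane-based pose step, shown below with OpenCV's planar (IPPE) PnP solver: given sub-pixel marker corners and camera intrinsics, it returns the marker pose in the camera frame. The marker size, intrinsics, and corner pixels are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

half = 0.05                                     # half of a 10 cm square marker
marker_3d = np.array([[-half,  half, 0.0],
                      [ half,  half, 0.0],
                      [ half, -half, 0.0],
                      [-half, -half, 0.0]])
corners_px = np.array([[300.0, 200.0],          # sub-pixel corner detections
                       [420.0, 205.0],
                       [415.0, 330.0],
                       [295.0, 325.0]])
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
ok, rvec, tvec = cv2.solvePnP(marker_3d, corners_px, K, np.zeros(5),
                              flags=cv2.SOLVEPNP_IPPE)
R, _ = cv2.Rodrigues(rvec)
print(ok, tvec.ravel())                         # marker pose in the camera frame
```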
17. Vision-based Vehicle Tracking and Classification on the Highway (Cited by 1)
Authors: Guolian Yun, Qimei Chen, Bo Li, Xindao Wang. Journal of Systems Science and Information, 2007, Issue 2, pp. 141-149.
This paper presents algorithms for vision-based tracking and classification of vehicles in image sequences of traffic scenes recorded by a stationary camera. In the algorithms, the use of central moments and an extended Kalman filter in the tracking process optimizes the amount of computational resources spent. Moreover, the method is robust to many difficult situations such as partial or full occlusions of vehicles. Vehicle classification performance is improved by a Bayesian network, especially with incomplete data. The methods are tested on a single Intel Pentium 4 processor at 2.4 GHz, and the frame rate is 25 frames/s. Experimental results from highway scenes are provided, which demonstrate the effectiveness and robustness of the methods.
Keywords: vision-based vehicle tracking; Bayesian network; vehicle classification
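To illustrate the moment-based features the abstract mentions, the sketch below extracts a vehicle blob's centroid (a natural Kalman-filter measurement) and a few normalized central moments as shape descriptors; the segmentation that produces the mask is assumed and is not part of the paper's pipeline shown here.

```python
import cv2

def blob_centroid_and_shape(vehicle_mask):
    """Centroid from raw image moments plus normalized central moments
    as simple, scale-invariant shape descriptors of a vehicle blob."""
    m = cv2.moments(vehicle_mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # blob centroid
    shape = (m["nu20"], m["nu02"], m["nu11"])           # normalized central moments
    return (cx, cy), shape
```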
18. Vision-based positioning system
Authors: Song Meina, Ou Zhonghong, E Haihong, Song Junde, Zhao Xuejun. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2016, Issue 5, pp. 88-96.
Conventional outdoor navigation systems are usually based on orbital satellites, e.g., the global positioning system (GPS) and the global navigation satellite system (GLONASS). The latest advances in wearables, e.g., BaiduEye and Google Glass, have enabled new approaches that leverage information from the surrounding environment. For example, they enable the change from passively receiving information to actively requesting information. Such changes might inspire brand-new application scenarios that were not possible before. In this work, we propose a vision-based navigation system based on wearables like BaiduEye. We discuss the associated challenges and propose potential solutions for each of them. The system utilizes crowd sensing to collect and build a traffic signpost database for positioning reference. It then leverages context information, such as cell identification (Cell ID), signal strength, and altitude, combined with traffic sign detection and recognition, to enable real-time positioning. A hybrid cloud architecture is proposed to enhance the capability of the sensing devices (SD) to realize the proposed vision.
Keywords: vision-based positioning system; wearable; machine vision
19. An Iterative Pose Estimation Algorithm Based on Epipolar Geometry With Application to Multi-Target Tracking (Cited by 3)
Authors: Jacob H. White, Randal W. Beard. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2020, Issue 4, pp. 942-953.
This paper introduces a new algorithm for estimating the relative pose of a moving camera using consecutive frames of a video sequence. State-of-the-art algorithms for calculating the relative pose between two images use matching features to estimate the essential matrix. The essential matrix is then decomposed into the relative rotation and normalized translation between frames. To be robust to noise and feature-match outliers, these methods generate a large number of essential matrix hypotheses from randomly selected minimal subsets of feature pairs, and then score these hypotheses on all feature pairs. Alternatively, the algorithm introduced in this paper calculates relative pose hypotheses by directly optimizing the rotation and normalized translation between frames, rather than calculating the essential matrix and then performing the decomposition. The resulting algorithm improves computation time by an order of magnitude. If an inertial measurement unit (IMU) is available, it is used to seed the optimizer; in addition, we reuse the best hypothesis at each iteration to seed the optimizer, thereby reducing the number of relative pose hypotheses that must be generated and scored. These advantages greatly speed up performance and enable the algorithm to run in real time on low-cost embedded hardware. We show the application of our algorithm to visual multi-target tracking (MTT) in the presence of parallax and demonstrate its real-time performance on a 640 × 480 video sequence captured on a UAV. Video results are available at https://youtu.be/HhK-p2hXNnU.
Keywords: aerial robotics; epipolar geometry; multi-target tracking; pose estimation; unmanned aircraft systems; vision-based flight
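For contrast with the paper's direct optimization, the sketch below is the conventional essential-matrix pipeline it improves upon, written with OpenCV's RANSAC estimator and pose recovery; the matched feature points and intrinsics are assumed inputs.

```python
import cv2
import numpy as np

def relative_pose_baseline(pts1, pts2, K):
    """Estimate the essential matrix with RANSAC from matched features,
    then decompose it into the relative rotation and normalized translation.
    pts1, pts2 are Nx2 pixel coordinates of matched features."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t / np.linalg.norm(t)             # translation scale is unobservable
```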
20. Benchmarking dynamic properties of structures using non-contact sensing (Cited by 2)
Authors: Boshra Besharatian, Amrita Das, Abdelrahman Awawdeh, Sattar Dorafshan, Marc Maguire. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2023, Issue 2, pp. 387-405.
Non-contact sensing can be a rapid and convenient alternative to conventional instrumentation for determining structural response. Computer vision has been broadly implemented to enable accurate non-contact dynamic response measurements for structures. This study analyzes the effect of non-contact sensors, including their type, frame rate, and data collection platform, on the performance of a novel motion detection technique. Video recordings of a cantilever column were collected using a high-speed camera mounted on a tripod and an unmanned aerial system (UAS) equipped with visual and thermal sensors. The test specimen was subjected to an initial deformation and released. Specimen acceleration data were collected using an accelerometer installed on the cantilever end. The displacement from each non-contact sensor and the acceleration from the contact sensor were analyzed to measure the specimen's natural frequency and damping ratio. The specimen's first fundamental frequency and damping ratio results were validated by analyzing acceleration data from the top of the specimen and a finite element model.
Keywords: dynamic response; non-contact sensing; infrared thermography; vision-based; unmanned aerial system; computer vision
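As a rough sketch of how a free-decay displacement trace from a vision sensor yields these two quantities, the code below picks the first natural frequency from the FFT peak and estimates the damping ratio by the logarithmic decrement over successive peaks; this generic recipe is an assumption, not necessarily the study's exact procedure.

```python
import numpy as np

def natural_freq_and_damping(displacement, fs):
    """First natural frequency (FFT peak) and damping ratio (log decrement)
    from a free-decay displacement signal sampled at fs [Hz]."""
    x = np.asarray(displacement, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    fn = freqs[np.argmax(spec[1:]) + 1]               # skip the DC bin

    # Successive positive peaks for the logarithmic decrement.
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
    if len(peaks) >= 2:
        delta = np.log(x[peaks[0]] / x[peaks[-1]]) / (len(peaks) - 1)
        zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
    else:
        zeta = float("nan")
    return fn, zeta

# Example: 2 Hz oscillation with 5% damping, sampled at 200 Hz.
fs, t = 200.0, np.arange(0, 10, 1 / 200.0)
sig = np.exp(-0.05 * 2 * np.pi * 2.0 * t) * np.cos(2 * np.pi * 2.0 * t)
print(natural_freq_and_damping(sig, fs))              # roughly (2.0, 0.05)
```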