Journal Articles
Found 2 articles
Use of land's cooperative object to estimate UAV's pose for autonomous landing (Cited by: 11)
1
Authors: Xu Guili, Qi Xiaopeng, Zeng Qinghua, Tian Yupeng, Guo Ruipeng, Wang Biao. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2013, Issue 6, pp. 1498-1505 (8 pages).
The research of unmanned aerial vehicles' (UAVs') autonomous navigation and landing guidance with computer vision has important significance. However, because of image blurring, the positions of the cooperative points cannot be obtained accurately, and pose estimation algorithms based on feature points have low precision. In this research, a pose estimation algorithm for UAVs based on feature lines of the cooperative object is proposed for autonomous landing. This method uses the actual shape of the cooperative target on the ground and the principle of the vanishing line. The roll angle is calculated from the vanishing line. The yaw angle is calculated from the location of the target in the image. Finally, the remaining extrinsic parameters are calculated by coordinate transformation. Experimental results show that the pose estimation algorithm based on line features has higher precision and is more reliable than the one based on point features. Moreover, the error of the proposed algorithm is small enough when the UAV is near the landing strip, and it can meet the basic requirements of a UAV's autonomous landing.
Keywords: Computer vision; Cooperative object; Landing; Position measurement; UAV; Vanishing line
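The abstract's two geometric steps (roll from the vanishing line, yaw from the target's image location) can be illustrated with a minimal sketch. This is not the paper's implementation: the function names are hypothetical, the image is assumed undistorted, and the vanishing line is assumed to have already been detected as two image points.

```python
import math

def roll_from_vanishing_line(p1, p2):
    """Estimate camera roll (radians) from two image points on the
    detected vanishing line. For a level camera this line would be
    horizontal, so its slope in the image gives the roll angle."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.atan2(dy, dx)

def yaw_from_target(cx, u_target, fx):
    """Estimate yaw (radians) from the target's horizontal image
    coordinate: its offset from the principal point cx, scaled by
    the focal length fx (pinhole model)."""
    return math.atan2(u_target - cx, fx)

# Example: a slightly tilted vanishing line and a target offset to the right
roll = roll_from_vanishing_line((100.0, 240.0), (540.0, 262.0))
yaw = yaw_from_target(cx=320.0, u_target=400.0, fx=800.0)
print(math.degrees(roll), math.degrees(yaw))
```

The remaining extrinsic parameters (the translation) would then follow from the known metric shape of the cooperative target via a coordinate transformation, as the abstract states.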
Optimization methods in fully cooperative scenarios: a review of multiagent reinforcement learning
2
Authors: Tao YANG, Xinhao SHI, Qinghan ZENG, Yulin YANG, Cheng XU, Hongzhe LIU. Frontiers of Information Technology & Electronic Engineering, 2025, Issue 4, pp. 479-509 (31 pages).
Multiagent reinforcement learning (MARL) has become a dazzling new star in the field of reinforcement learning in recent years, demonstrating its immense potential across many application scenarios. The reward function directs agents to explore their environments and make optimal decisions within them by establishing evaluation criteria and feedback mechanisms. Concurrently, cooperative objectives at the macro level provide a trajectory for agents' learning, ensuring alignment between individual behavioral strategies and the overarching system goals. The interplay between reward structures and cooperative objectives not only bolsters the effectiveness of individual agents but also fosters interagent collaboration, offering both momentum and direction for the development of swarm intelligence and the harmonious operation of multiagent systems. This review delves deeply into methods for designing reward structures and optimizing cooperative objectives in MARL, along with the most recent scientific advancements in this field. The article also reviews the application of simulation environments in cooperative scenarios and discusses future trends and potential research directions, providing a forward-looking perspective and inspiration for subsequent research efforts.
Keywords: Multiagent reinforcement learning (MARL); Cooperative framework; Reward function; Cooperative objective optimization
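The role the abstract assigns to the reward function (a shared feedback signal aligning individual strategies with a team goal) can be sketched with a toy fully cooperative task. This is our own illustrative example, not from the review: all names, the one-step task, and the parameters are assumptions, and the learner is plain independent Q-learning.

```python
import random

def shared_team_reward(actions, target):
    """Fully cooperative signal: every agent receives the same scalar,
    here the negative count of agents missing the target action."""
    return -sum(1 for a in actions if a != target)

def train(n_agents=3, n_actions=4, target=2, episodes=2000, eps=0.1, lr=0.5):
    """Independent Q-learning on a one-step cooperative task: each agent
    keeps its own action values but updates them from the shared reward,
    so individual greedy choices drift toward the team objective."""
    q = [[0.0] * n_actions for _ in range(n_agents)]
    rng = random.Random(0)
    for _ in range(episodes):
        # epsilon-greedy joint action
        acts = [rng.randrange(n_actions) if rng.random() < eps
                else max(range(n_actions), key=q[i].__getitem__)
                for i in range(n_agents)]
        r = shared_team_reward(acts, target)
        for i, a in enumerate(acts):
            q[i][a] += lr * (r - q[i][a])
    # greedy joint action after training
    return [max(range(n_actions), key=qi.__getitem__) for qi in q]

print(train())
```

Because every agent's reward improves by exactly one unit when it switches to the target action regardless of what the others do, the shared signal is enough to align all three agents in this toy setting; the review's subject is precisely how to design such signals when credit assignment is not this trivial.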