Abstract
To address the difficulty of extracting motion features from human demonstration trajectories that are misaligned in time and space, a method based on the canonical time warping (CTW) algorithm is first proposed for aligning multiple trajectories, and it is incorporated into the soft dynamic time warping (soft-DTW) algorithm to extract trajectory templates. Second, a new variable is introduced into the CTW algorithm to improve its ability to align multiple trajectories. Finally, the proposed trajectory-template extraction method is validated in experiments on a variety of trajectories. The results show that the method can quickly extract common motion features from human demonstration trajectories and is robust to temporal and spatial differences among demonstrations.
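The soft-DTW discrepancy underlying the template extraction can be sketched in a few lines of NumPy. This is an illustrative re-implementation of the standard soft-DTW recursion (a soft-min in place of DTW's hard min), not the authors' code; the function name `soft_dtw` and the squared-Euclidean local cost are assumptions made for the sketch.

```python
import numpy as np

def soft_dtw(x, y, gamma=1.0):
    """Soft-DTW discrepancy between two 1-D trajectories x and y.

    Replaces DTW's hard min over predecessor cells with
    softmin_gamma(a) = -gamma * log(sum_i exp(-a_i / gamma)),
    which recovers classical DTW as gamma -> 0.
    """
    n, m = len(x), len(y)
    D = (x[:, None] - y[None, :]) ** 2          # pairwise squared distances
    R = np.full((n + 1, m + 1), np.inf)         # accumulated-cost matrix
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # soft-min over the three predecessor cells (stabilized log-sum-exp)
            r = np.array([R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]])
            rmin = r.min()
            softmin = rmin - gamma * np.log(np.sum(np.exp(-(r - rmin) / gamma)))
            R[i, j] = D[i - 1, j - 1] + softmin
    return R[n, m]
```

Because soft-DTW is differentiable in its inputs, averaging trajectories under this discrepancy (a soft-DTW barycenter) yields the kind of smooth trajectory template the abstract describes; a pre-alignment step such as CTW reduces the spatial misalignment that the warping alone cannot absorb.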
Authors
XUE Junnan, LI Zhihai, YU Hongpeng
(Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China)
Source
Journal of Chinese Computer Systems (《小型微型计算机系统》), Peking University Core Journal
2025, No. 3, pp. 528-534 (7 pages)
Funding
Supported by the Key-Area Research and Development Program of Guangdong Province (2020B090925001).
Keywords
learning from demonstration
human demonstration trajectory
trajectory templates
CTW
soft-DTW