Journal Articles
2 articles found
1. A New Reward System Based on Human Demonstrations for Hard Exploration Games
Authors: Wadhah Zeyad Tareq, Mehmet Fatih Amasyali. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 2401-2414, 14 pages.
The main idea of reinforcement learning is evaluating the chosen action based on the current reward. Following this concept, many algorithms have achieved good performance on classic Atari 2600 games. The main challenge arises when the reward is sparse or missing, as in hard exploration environments such as Montezuma's Revenge, Pitfall, and Private Eye. Approaches built to deal with such challenges have been very demanding. This work introduces a different reward system that enables a simple classical algorithm to learn quickly and achieve high performance in hard exploration environments. Moreover, we add simple enhancements to several hyperparameters, such as the number of actions and the sampling ratio, which help improve performance. We include the extra reward within the human demonstrations, and then use Prioritized Double Deep Q-Networks (Prioritized DDQN) to learn from these demonstrations. Our approach enables Prioritized DDQN, with a short learning time, to finish the first level of Montezuma's Revenge and to perform well in both Pitfall and Private Eye. We use the same games to compare our results with several baselines, such as Rainbow and Deep Q-learning from Demonstrations (DQfD). The results show that the new reward system enables Prioritized DDQN to outperform the baselines in hard exploration games with a short learning time.
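The reward scheme the abstract describes can be illustrated with a minimal sketch. The bonus value, sampling ratio, and function names below are hypothetical illustrations, not the paper's actual implementation: demonstration transitions carry an extra reward before being stored, and each training batch mixes demonstration and agent transitions at a fixed ratio.

```python
import random

DEMO_BONUS = 1.0   # assumed extra reward added to human-demonstration steps
DEMO_RATIO = 0.25  # assumed fraction of each batch drawn from demonstrations

def augment_demo_transition(state, action, reward, next_state, done):
    """Add the extra reward to a human-demonstration transition before storage."""
    return (state, action, reward + DEMO_BONUS, next_state, done)

def sample_batch(demo_buffer, agent_buffer, batch_size):
    """Sample a batch that mixes demonstration and agent transitions."""
    n_demo = int(batch_size * DEMO_RATIO)
    n_agent = batch_size - n_demo
    batch = random.sample(demo_buffer, min(n_demo, len(demo_buffer)))
    batch += random.sample(agent_buffer, min(n_agent, len(agent_buffer)))
    return batch
```

In a full system these transitions would feed a prioritized replay buffer for the Prioritized DDQN updates; uniform sampling is used here only to keep the sketch self-contained.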
Keywords: deep reinforcement learning; human demonstrations; prioritized double deep Q-networks; Atari
2. Reactive Whole-body Locomotion-integrated Manipulation Based on Combined Learning and Optimization
Authors: Jianzhuang Zhao, Tao Teng, Elena De Momi, Arash Ajoudani. Machine Intelligence Research, 2025, No. 4, pp. 627-640, 14 pages.
Reactive planning and control capacity for collaborative robots is essential when tasks change online in an unstructured environment. This is more difficult for collaborative mobile manipulators (CMM) due to their high redundancy. To this end, this paper proposes a reactive whole-body locomotion-integrated manipulation approach based on combined learning and optimization. First, human demonstrations are collected, in which the wrist and pelvis movements are treated as whole-body trajectories and mapped to the end-effector (EE) and the mobile base (MB) of the CMM, respectively. A time-input kernelized movement primitive (T-KMP) learns the whole-body trajectory, and a multi-dimensional kernelized movement primitive (M-KMP) learns the spatial relationship between the MB and EE poses. As the task changes, the T-KMP adapts the learned trajectories online by inserting a new desired point predicted by the M-KMP. The updated reference trajectories are then sent to a hierarchical quadratic programming (HQP) controller, in which EE and MB trajectory tracking are set as the first- and second-priority tasks, generating feasible and optimal joint-level commands. An ablation simulation experiment with the CMM and the HQP is conducted to show the necessity of MB trajectory tracking in mimicking human whole-body motion behavior. Finally, reactive pick-and-place and reactive reaching tasks were performed, in which the target object was moved randomly, even outside the region of the demonstrations. The results show that the proposed approach can successfully transfer and adapt human whole-body loco-manipulation skills to the CMM online as tasks change.
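The online adaptation loop described above can be sketched as a simple data flow. All models below are toy stand-ins with hypothetical names and values (the fixed MB offset, the via-point insertion): the M-KMP role is played by a function that predicts an MB waypoint from the new EE target, and the T-KMP role by a function that inserts that point into the learned trajectory.

```python
def m_kmp_predict(ee_target):
    """Toy spatial model: the MB waypoint keeps a fixed offset from the EE."""
    offset = (-0.4, 0.0)  # assumed reach offset, metres
    return (ee_target[0] + offset[0], ee_target[1] + offset[1])

def t_kmp_adapt(trajectory, t_new, via_point):
    """Insert a new desired via-point (time, point) into a learned trajectory."""
    adapted = [p for p in trajectory if p[0] != t_new]
    adapted.append((t_new, via_point))
    return sorted(adapted)

def reactive_step(ee_traj, mb_traj, t_new, ee_target):
    """One reactive update: adapt both EE and MB reference trajectories."""
    mb_waypoint = m_kmp_predict(ee_target)
    ee_traj = t_kmp_adapt(ee_traj, t_new, ee_target)
    mb_traj = t_kmp_adapt(mb_traj, t_new, mb_waypoint)
    # The updated references would then be tracked by the HQP controller,
    # with EE tracking as the first priority and MB tracking as the second.
    return ee_traj, mb_traj
```

The HQP controller itself is omitted; the sketch only shows how a task change propagates from the EE target through the learned spatial model into both reference trajectories.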
Keywords: embodied intelligence; robot learning; mobile manipulation; whole-body motion planning and control; learning from human demonstrations