
Six-degree-of-freedom robot grasping based on three-dimensional point cloud features of unknown objects (Cited by: 10)
Abstract: In this paper, a six-degree-of-freedom robot grasp detection method based on point cloud features is proposed to address the challenging problem of directly obtaining the grasp pose of unknown objects in unstructured environments from the point cloud. Firstly, several grasp candidates are sampled using essential geometric information from the point cloud and optimized through methods such as force balance. Secondly, each grasp candidate is evaluated by ConvPoint, a convolutional neural network that can directly process the point cloud of an object, and the candidate with the highest score is executed. Both the grasp sampler and the grasp evaluation network take 3D point clouds observed by a depth camera as input. Finally, the performance of the proposed method is measured with simulation experiments and practical tests. The results show that the approach achieves an 88.33% success rate on various commonly used objects and generalizes well to other objects of unknown shape in unstructured environments.
Authors: LI Hui-jun, QU Xiao-chang, YE Bin (School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, Jiangsu 221116, China)
Source: Control Theory & Applications (indexed in EI, CAS, CSCD, Peking University Core), 2022, No. 6, pp. 1103-1111 (9 pages)
Funding: National Key Research and Development Program of China (2020YFB1314102); Xuzhou Science and Technology Key Research and Development Program (KC20020)
Keywords: robotic grasping; point cloud feature; convolutional neural network; force balance; unstructured environment
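The pipeline described in the abstract (sample grasp candidates from point cloud geometry, filter them by force balance, score them, execute the best) can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: the paper scores candidates with a trained ConvPoint network, while here a simple geometric antipodal score stands in for it, and all function names are hypothetical.

```python
import numpy as np

def sample_candidates(n_points, n_candidates=50, rng=None):
    """Sample random point-index pairs as two-finger grasp candidates.
    Stand-in for the paper's geometry-based candidate sampler."""
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = rng.integers(0, n_points, size=(n_candidates, 2))
    return idx[idx[:, 0] != idx[:, 1]]  # drop degenerate same-point pairs

def force_balance_score(points, normals, pair):
    """Antipodal (force-balance) proxy score: high when the surface
    normals at the two contacts oppose each other along the grasp axis."""
    i, j = pair
    axis = points[j] - points[i]
    d = np.linalg.norm(axis)
    if d < 1e-9:
        return -np.inf
    axis /= d
    # n_j aligned with the axis and n_i anti-aligned -> score near 2
    return float(np.dot(normals[j], axis) - np.dot(normals[i], axis))

def best_grasp(points, normals):
    """Score all candidates and return the highest-scoring pair,
    mirroring the 'execute the top-scoring grasp' step."""
    cands = sample_candidates(len(points))
    scores = [force_balance_score(points, normals, c) for c in cands]
    return cands[int(np.argmax(scores))]

# Toy "unknown object": points on a unit sphere, whose outward normals
# equal the point coordinates, so antipodal grasps are easy to verify.
rng = np.random.default_rng(42)
pts = rng.normal(size=(200, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
nrm = pts.copy()

i, j = best_grasp(pts, nrm)
print(np.dot(nrm[i], nrm[j]))  # negative: the chosen contact normals oppose
```

In the actual method, the scoring step would be replaced by a forward pass of the ConvPoint network over the raw point cloud, and the winning candidate would be sent to the robot as a 6-DoF pose.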
