
A simulated communications system for distributed multi-robots (分布式多机器人通信仿真系统)

Cited by: 3
Abstract: Because few simulation systems currently exist for multi-robot communication, this paper presents the design of a communication simulation system for multiple mobile robots. The proposed system focuses on reflecting changes in the robots' communication modes and in the topology of their communication network. A modular design was adopted, and interfaces for robot control algorithms were reserved in advance so the system can support combined research on collision avoidance, task allocation, connected coverage, and related problems. Multi-robot coverage is currently a research hotspot in both multiple mobile robot systems and wireless sensor networks. For this problem, a virtual-force allocation strategy was developed so that the robots cover the monitored area as widely as possible while maintaining network connectivity; since a regular hexagon provides maximum coverage with the least waste, hexagonal coverage was used as the constraint, and a prototype of the simulation system was implemented. Experiments showed that this distributed algorithm moved the robots, through the virtual forces among them, to maximize area coverage while keeping the network connected, confirming the effectiveness of the virtual-force coverage strategy.
Source: CAAI Transactions on Intelligent Systems (《智能系统学报》), 2009, No. 4, pp. 309-313 (5 pages)
Funding: National Natural Science Foundation of China (90820302, 60805027); National Doctoral Program Fund (200805330005); Hunan Provincial Academician Fund (2009FJ4030); Special Scientific Research Project for Quality Inspection in the Public Interest (20081002)
Keywords: multi-robot system; simulation system; communication network; coverage connectivity; virtual force
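The virtual-force coverage strategy summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the parameters `R` (communication range), `D` (desired inter-robot spacing for a roughly hexagonal layout), `K` (force gain), and the linear force law are all assumptions chosen for clarity. The key idea it demonstrates is the one the abstract states: neighbors that are too close repel (spreading coverage), while neighbors near the edge of communication range attract (preserving connectivity).

```python
import math

# Hypothetical parameters (not taken from the paper):
R = 10.0      # communication range (assumed)
D = 0.8 * R   # desired spacing, kept below R so links are not broken (assumed)
K = 0.5       # force gain (assumed)

def virtual_forces(positions):
    """Return one (fx, fy) virtual-force vector per robot.

    Neighbors closer than the desired spacing D repel each other,
    spreading the robots out to extend coverage; neighbors between D
    and the communication range R attract, pulling the network back
    together. Robots beyond R exert no force, since they cannot
    communicate.
    """
    forces = []
    for i, (xi, yi) in enumerate(positions):
        fx = fy = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xi - xj, yi - yj
            dist = math.hypot(dx, dy)
            if dist == 0.0 or dist > R:
                continue
            # Positive magnitude pushes robot i away from j;
            # negative magnitude (dist > D) pulls it closer.
            mag = K * (D - dist) / dist
            fx += mag * dx
            fy += mag * dy
        forces.append((fx, fy))
    return forces

def step(positions, dt=0.1):
    """Move each robot one Euler step along its virtual force."""
    f = virtual_forces(positions)
    return [(x + dt * fx, y + dt * fy)
            for (x, y), (fx, fy) in zip(positions, f)]
```

Iterating `step` drives a crowded cluster of robots apart until neighbor distances settle near `D`, which is what yields the approximately hexagonal packing the paper uses as its coverage constraint.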
