Multi-Robot Cooperative Pursuit in a Dynamic Environment
ZHOU Pu-cheng¹, HONG Bing-rong¹, WANG Yue-hai²
1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China; 2. College of Information Engineering, North China University of Technology, Beijing 100041, China
Abstract: This paper addresses pursuit-evasion games in which a team of autonomous mobile robots act as pursuers to cooperatively capture multiple moving targets. The traditional Contract Net protocol is extended: case-based reasoning is used to narrow the scope of bidding invitations, and an assistant decision matrix is introduced to improve alliance decisions, thereby reducing the communication load during task negotiation. The concepts of alliance life value and penalty are also introduced. Based on these extensions, a multi-robot cooperative pursuit algorithm that allows dynamic alliance formation is proposed. Simulation results demonstrate the feasibility and validity of the proposed algorithm.
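The paper itself gives the full formulation; the sketch below is only an illustrative reading of the negotiation scheme summarized in the abstract. It shows, under assumed data structures and weights (Robot, Target, Alliance, the 15-unit invitation radius, the 0.7/0.3 criterion weights, and the initial life value of 10 are all hypothetical), how case-based reasoning could narrow the set of invited bidders, how a decision matrix could award the pursuit task in a single round, and how an alliance's life value could be penalized when the pursuit stalls.

```python
# Illustrative sketch of an extended Contract Net negotiation for cooperative
# pursuit. All names, weights, and thresholds are assumptions for illustration,
# not the authors' actual formulation.
import math
from dataclasses import dataclass


@dataclass
class Robot:
    name: str
    x: float
    y: float
    busy: bool = False


@dataclass
class Target:
    name: str
    x: float
    y: float


@dataclass
class Alliance:
    target: Target
    members: list
    life_value: float = 10.0  # assumed initial life value

    def penalize(self, amount: float = 1.0) -> bool:
        """Apply a penalty when the alliance fails to close on its target;
        return False once the life value is exhausted (alliance dissolves)."""
        self.life_value -= amount
        return self.life_value > 0.0


def distance(r: Robot, t: Target) -> float:
    return math.hypot(r.x - t.x, r.y - t.y)


def invite_bidders(robots, target, case_base, radius=15.0):
    """Case-based reduction of the bidding scope: prefer robots that handled a
    similar (nearby) task before; otherwise invite all idle robots within range."""
    similar = {name for (tx, ty, name) in case_base
               if math.hypot(tx - target.x, ty - target.y) < radius}
    candidates = [r for r in robots if not r.busy and r.name in similar]
    if not candidates:  # no similar past case recorded
        candidates = [r for r in robots if not r.busy and distance(r, target) < radius]
    return candidates


def award_by_decision_matrix(bidders, target, team_size=2, w_dist=0.7, w_load=0.3):
    """Assistant decision matrix: one row per bidder, columns are weighted
    criteria (distance to target, current load); lower total score is better."""
    rows = []
    for r in bidders:
        score = w_dist * distance(r, target) + w_load * (1.0 if r.busy else 0.0)
        rows.append((score, r))
    rows.sort(key=lambda row: row[0])
    return [r for (_, r) in rows[:team_size]]


if __name__ == "__main__":
    robots = [Robot("R1", 0, 0), Robot("R2", 3, 4), Robot("R3", 20, 20)]
    target = Target("T1", 2, 2)
    case_base = [(1.0, 1.0, "R1")]  # R1 previously pursued a nearby target

    bidders = invite_bidders(robots, target, case_base)
    team = award_by_decision_matrix(bidders, target)
    alliance = Alliance(target, team)
    for r in team:
        r.busy = True

    print("alliance members:", [r.name for r in alliance.members])
    alliance.penalize(2.0)  # e.g. no progress toward capture in this step
    print("remaining life value:", alliance.life_value)
```

Because past cases restrict who is invited and the decision matrix resolves the award in one pass, the manager exchanges far fewer messages than a plain Contract Net broadcast would require, which is the communication-load reduction the abstract refers to.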