Citation: SHEN Jing, GU Guo-chang, LIU Hai-bo. Mobile Robot Path Planning Based on Hierarchical Reinforcement Learning in Unknown Dynamic Environment[J]. ROBOT, 2006, 28(5): 544-547, 552.

Mobile Robot Path Planning Based on Hierarchical Reinforcement Learning in Unknown Dynamic Environment

  • Abstract: A path-planning algorithm for mobile robots based on hierarchical reinforcement learning is presented. The reinforcement learning component gives the algorithm the ability to learn without an environment model, while the hierarchical decomposition is used to update local strategies. As a result, the algorithm does not depend on static information about the global environment or on motion information about dynamic obstacles. Simulation results show the feasibility of the algorithm: although it offers no obvious advantage in planning speed, its ability to learn in unknown dynamic environments is unmatched by existing methods.
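The abstract describes the approach only at a high level and gives no pseudocode. As an illustration of the two properties it relies on, model-free (Q-learning style) updates and relearning of only the locally affected sub-policy, the following Python sketch shows a two-level decomposition over a simple grid world. All names here (SubPolicy, train_subpolicy, the fixed subgoal list, the reward values) are hypothetical assumptions for illustration and are not taken from the paper.

# Minimal two-level Q-learning sketch for grid path planning (illustrative only,
# not the paper's implementation).
import random
from collections import defaultdict

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

class SubPolicy:
    """Low-level Q-learning controller that drives the robot to one subgoal."""
    def __init__(self, subgoal, alpha=0.5, gamma=0.95, eps=0.2):
        self.subgoal = subgoal
        self.q = defaultdict(float)          # (state, action) -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, s, a, r, s2):
        best = max(self.q[(s2, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

    def reset(self):
        self.q.clear()                       # relearn only this local policy

def step(state, action, size, obstacles):
    """Environment used only to simulate transitions; the learner never sees it."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < size and 0 <= nxt[1] < size) or nxt in obstacles:
        return state, -1.0                   # bumped: stay in place, small penalty
    return nxt, -0.1                         # step cost encourages short paths

def train_subpolicy(policy, start, size, obstacles, episodes=300, max_steps=100):
    for _ in range(episodes):
        s = start
        for _ in range(max_steps):
            a = policy.act(s)
            s2, r = step(s, a, size, obstacles)
            if s2 == policy.subgoal:
                r = 10.0                     # reaching the subgoal is rewarded
            policy.update(s, a, r, s2)
            s = s2
            if s == policy.subgoal:
                break

if __name__ == "__main__":
    size = 8
    obstacles = {(3, 3), (3, 4), (4, 3)}
    # High level: a fixed subgoal sequence stands in for the learned upper layer.
    subgoals = [(4, 0), (7, 4), (7, 7)]
    policies = [SubPolicy(g) for g in subgoals]

    start = (0, 0)
    for p in policies:
        train_subpolicy(p, start, size, obstacles)
        start = p.subgoal

    # A dynamic obstacle appears near the second subgoal: only that sub-policy
    # is reset and retrained; the rest of the hierarchy is kept as-is.
    obstacles.add((6, 4))
    policies[1].reset()
    train_subpolicy(policies[1], subgoals[0], size, obstacles)

The point of the sketch is the last three lines: when the environment changes, only the sub-policy whose region is affected is reset and retrained, which mirrors the local-strategy-update property the abstract attributes to the hierarchical decomposition.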

     

