Q-Learning Algorithm Based on Incremental RBF Network

Abstract: To improve the behavioral intelligence of robots, a Q-learning algorithm based on an incremental radial basis function network (IRBFN), called IRBFN-QL, is proposed. Its core idea is to learn and store the Q-value function through adaptive growth of the network structure and online learning of its parameters, so that a robot can learn behavioral strategies autonomously and incrementally in an unknown environment. First, the approximate linear independence (ALD) criterion is used to add network nodes online, so that the robot's memory capacity grows adaptively as the state space expands. Because newly added nodes change the internal connections of the network topology, the kernel recursive least squares (KRLS) algorithm is used to update the topology connections and parameters, allowing the robot to continually extend and optimize its behavioral strategy. In addition, to avoid overfitting, an L2 regularization term is incorporated into the KRLS algorithm, yielding the L2-constrained kernel recursive least squares (L2KRLS) algorithm. Experimental results show that the IRBFN-QL algorithm enables autonomous interaction between the robot and an unknown environment and gradually improves the navigation ability of a mobile robot in corridor environments.
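The sketch below, in Python, makes the pipeline described above concrete: an RBF network approximates the Q-value function, an ALD-style novelty test grows the set of nodes, and each action's weights are updated by recursive least squares with an L2 penalty. It is an illustrative reconstruction under stated assumptions, not the paper's implementation; the class name IRBFNQL, every hyperparameter value, and the per-action recursive form of the update are chosen here for readability, and the paper's L2KRLS update is only approximated by folding the regularization into the initial covariance and recomputing the kernel-matrix inverse when a node is added.

    import numpy as np

    class IRBFNQL:
        """Minimal sketch of Q-learning on an incrementally grown RBF network.

        One weight vector per discrete action; all actions share the RBF centers.
        Centers are added by an ALD-style novelty test, and weights are updated
        with a recursive least-squares rule whose initial covariance carries an
        L2 penalty. Names and constants are illustrative, not from the paper.
        """

        def __init__(self, n_actions, sigma=0.5, ald_threshold=0.1,
                     l2=1e-2, gamma=0.95):
            self.n_actions = n_actions
            self.sigma = sigma          # RBF width
            self.nu = ald_threshold     # ALD novelty threshold
            self.l2 = l2                # L2 regularization strength
            self.gamma = gamma          # discount factor
            self.centers = []           # RBF centers (network nodes)
            self.K_inv = None           # inverse kernel matrix over the centers
            self.W = None               # (n_actions, n_centers) output weights
            self.P = None               # per-action RLS covariance matrices

        def _kernel(self, x, y):
            return np.exp(-np.sum((x - y) ** 2) / (2.0 * self.sigma ** 2))

        def _features(self, x):
            return np.array([self._kernel(c, x) for c in self.centers])

        def q_values(self, x):
            if not self.centers:
                return np.zeros(self.n_actions)
            return self.W @ self._features(x)

        def _maybe_add_center(self, x):
            """ALD test: add x as a new node if its feature vector is not
            approximately a linear combination of the existing centers'."""
            x = np.asarray(x, dtype=float)
            if not self.centers:
                delta = self._kernel(x, x)
            else:
                k_vec = self._features(x)
                a = self.K_inv @ k_vec
                delta = self._kernel(x, x) - k_vec @ a
            if delta <= self.nu:
                return
            # Grow the dictionary and enlarge all per-center quantities.
            self.centers.append(x)
            m = len(self.centers)
            K = np.array([[self._kernel(ci, cj) for cj in self.centers]
                          for ci in self.centers])
            self.K_inv = np.linalg.inv(K + 1e-8 * np.eye(m))
            if self.W is None:
                self.W = np.zeros((self.n_actions, 1))
                self.P = np.stack([np.eye(1) / self.l2
                                   for _ in range(self.n_actions)])
            else:
                self.W = np.hstack([self.W, np.zeros((self.n_actions, 1))])
                P_new = np.zeros((self.n_actions, m, m))
                P_new[:, :m - 1, :m - 1] = self.P
                P_new[:, m - 1, m - 1] = 1.0 / self.l2
                self.P = P_new

        def update(self, s, a, r, s_next, done):
            """One Q-learning step: grow the network if needed, then update
            action a's weights by recursive least squares toward the target."""
            self._maybe_add_center(s)
            phi = self._features(s)
            target = r if done else r + self.gamma * np.max(self.q_values(s_next))
            # Initializing P as (l2 * I)^-1 is how the L2 penalty enters
            # this sketch of the regularized recursive update.
            P = self.P[a]
            g = P @ phi / (1.0 + phi @ P @ phi)
            error = target - self.W[a] @ phi
            self.W[a] = self.W[a] + g * error
            self.P[a] = P - np.outer(g, phi @ P)

In use, one would select actions epsilon-greedily from agent.q_values(s) and call agent.update(s, a, r, s_next, done) after each step. A full L2KRLS implementation would update the kernel quantities recursively rather than re-inverting the kernel matrix on each node addition; that shortcut is taken here only for brevity.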

     
