Visual Segmentation of Unknown Objects Based on Inference of Object Logic States

  • Abstract: To improve the visual perception capabilities of robots, a novel method is proposed for visually segmenting unknown objects based on inference of object logic states. At the semantic level, a space of object logic states is defined, and the logic states are inferred from the feedback of the robot's grasping actions. At the data level, an RGB-D (RGB-depth) camera captures a 3D colored point cloud of the scene. Under the assumption that objects rest on a supporting plane, all candidate object points are spatially clustered and segmented, yielding an initial set of unknown objects whose logic states are also initialized. When the logic state of an object changes, predefined rules combined with point-set operations are applied to re-segment the point clouds of the affected objects, and the resulting changes in the object point clouds in turn guide the update of the object logic state space. The method is tested on a 7-DOF (degree-of-freedom) mobile manipulator commanded to segment and grasp unknown objects in a simulated environment built from real blocks. Experimental results show that the proposed method effectively improves the visual perception capabilities of robots.
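The sketch below is a minimal illustration of the data-level pipeline described in the abstract, assuming Open3D as the point-cloud library (the paper does not name an implementation): fit the supporting plane, drop its points, spatially cluster the remainder into an initial set of unknown objects, and attach an initial logic state to each. All thresholds, state labels, and the simple grasp-feedback rule are illustrative assumptions, not the paper's actual rule set.

```python
import numpy as np
import open3d as o3d


def segment_unknown_objects(cloud: o3d.geometry.PointCloud) -> dict:
    # Fit the supporting plane with RANSAC (assumption: objects rest on one plane).
    _, plane_idx = cloud.segment_plane(distance_threshold=0.01,
                                       ransac_n=3,
                                       num_iterations=1000)
    # Points not on the plane are candidate object points.
    candidates = cloud.select_by_index(plane_idx, invert=True)

    # Spatial clustering: each cluster is treated as one unknown object.
    labels = np.asarray(candidates.cluster_dbscan(eps=0.02, min_points=50))
    n_objects = labels.max() + 1 if labels.size else 0

    objects = {}
    for obj_id in range(n_objects):
        idx = np.where(labels == obj_id)[0].tolist()
        objects[obj_id] = {
            "points": candidates.select_by_index(idx),
            "logic_state": "on_table",  # initial logic state (hypothetical label)
        }
    return objects


def on_grasp_feedback(objects: dict, obj_id: int, grasp_succeeded: bool) -> None:
    # Semantic-level rule sketch: grasp feedback drives a logic-state transition,
    # and a held object's point set is removed from the scene representation.
    if grasp_succeeded:
        objects[obj_id]["logic_state"] = "in_gripper"
        objects[obj_id]["points"] = o3d.geometry.PointCloud()  # no longer in scene
    else:
        objects[obj_id]["logic_state"] = "grasp_failed"
```

In the paper, re-segmentation after a logic-state change also involves point-set operations over the affected objects' clouds; the second function above only gestures at that step with a simplistic removal rule.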

     
