Abstract:
To improve the visual perceptual capabilities of robots, a novel method is proposed to visually segment unknown objects based on the inference of object logic states. At the semantic level, the space of object logic states is defined, and the logic states are deduced from the feedback of the robot's grasping actions on objects. At the data level, an RGB-D (RGB-depth) camera is employed to capture the 3D colored point cloud of the situated environment. Under the assumption that objects are always supported by a planar surface, all candidate object points are spatially clustered and segmented, yielding the initial unknown-object set, in which the logic state of each object is also initialized. When the logic states of objects change, predefined rules are activated and the corresponding point sets are computed to re-segment the point clouds of the changed objects. The set of changed objects is also used to update the space of object logic states. The proposed method is tested on our 7-DOF (degree-of-freedom) mobile manipulator, which is commanded to segment and grasp unknown objects in a simulated environment consisting of real blocks. Experimental results demonstrate that the proposed method effectively improves the visual perceptual capabilities of robots.
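The initial segmentation step described above (plane-support assumption followed by spatial clustering) can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes an Open3D point cloud loaded from a placeholder file "scene.pcd", and all thresholds and the "unknown" state label are illustrative choices.

```python
# Illustrative sketch (not the authors' implementation): initial segmentation of
# table-top objects from an RGB-D point cloud, assuming a single supporting plane.
import numpy as np
import open3d as o3d

scene = o3d.io.read_point_cloud("scene.pcd")  # colored point cloud from the RGB-D camera (placeholder path)

# 1. Fit the supporting plane with RANSAC and remove its points.
plane_model, plane_idx = scene.segment_plane(distance_threshold=0.01,
                                             ransac_n=3,
                                             num_iterations=1000)
objects = scene.select_by_index(plane_idx, invert=True)

# 2. Spatially cluster the remaining points; each cluster is one unknown-object candidate.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))

# 3. Build the initial unknown-object set; each object's logic state starts as "unknown".
initial_objects = [
    {"points": objects.select_by_index(np.flatnonzero(labels == k)), "state": "unknown"}
    for k in range(labels.max() + 1)
]
print(f"Initial unknown-object set contains {len(initial_objects)} clusters")
```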