YANG Yang, CAO Qixin, ZHU Xiaoxiao, CHEN Peihua. A 3D Modeling Method for Robot's Hand-Eye Coordinated Grasping[J]. ROBOT, 2013, 35(2): 151-155. DOI: 10.3724/SP.J.1218.2013.00151

A 3D Modeling Method for Robot's Hand-Eye Coordinated Grasping

Abstract: For robot hand-eye coordinated grasping, a 3D modeling method for common objects in household environments is proposed. RGB and depth images are captured simultaneously with an RGB-D sensor, and feature points and feature descriptors are extracted from the RGB image. Correspondences between adjacent frames are established by matching the feature descriptors, and the relative pose between adjacent frames is computed with a RANSAC (RANdom SAmple Consensus) based three-point algorithm. Based on loop closure, the result is refined by minimizing the re-projection error with the Levenberg-Marquardt algorithm. With this method, a dense 3D point cloud model of an object can be obtained simply by placing the object on a flat table and collecting ten to twenty frames of data around it. 3D models are built for twenty household objects suitable for a service robot to grasp. Experimental results show that the error is about 1 mm for models with diameters between 5 cm and 7 cm, which fully satisfies the requirements of pose determination in robot grasping.
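The paper itself provides no code; the following is a minimal sketch, not the authors' implementation, of the frame-to-frame relative-pose step described in the abstract. ORB features (an assumed choice; the abstract does not name the descriptor) are matched between adjacent RGB frames, the matched keypoints of the previous frame are back-projected to 3D with its depth image, and the relative pose is estimated with OpenCV's RANSAC-wrapped P3P (three-point) solver. The camera intrinsics and depth scale are placeholder values for a typical RGB-D sensor.

```python
import cv2
import numpy as np

# Assumed intrinsics and depth scale for a typical RGB-D sensor (not from the paper)
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5
DEPTH_SCALE = 1000.0  # depth stored in millimetres -> metres
K = np.array([[FX, 0, CX], [0, FY, CY], [0, 0, 1]], dtype=np.float64)

def relative_pose(rgb_prev, depth_prev, rgb_curr):
    """Estimate the pose of the current frame relative to the previous one."""
    gray_prev = cv2.cvtColor(rgb_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(rgb_curr, cv2.COLOR_BGR2GRAY)

    # Extract feature points and descriptors, then match adjacent frames
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(gray_prev, None)
    kp2, des2 = orb.detectAndCompute(gray_curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # Back-project matched keypoints of the previous frame to 3D using its depth
    obj_pts, img_pts = [], []
    for m in matches:
        u, v = kp1[m.queryIdx].pt
        z = depth_prev[int(v), int(u)] / DEPTH_SCALE
        if z <= 0:  # skip pixels without valid depth
            continue
        obj_pts.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
        img_pts.append(kp2[m.trainIdx].pt)

    if len(obj_pts) < 4:  # P3P needs at least a minimal sample
        return None

    # RANSAC-wrapped three-point (P3P) solver: robust to mismatched descriptors
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(obj_pts, dtype=np.float64),
        np.asarray(img_pts, dtype=np.float64),
        K, None, flags=cv2.SOLVEPNP_P3P)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec, inliers
```

In a full pipeline in the spirit of the paper, the pairwise poses would be chained over the ten to twenty frames, a loop-closure constraint added between the last and first frames, and all poses refined jointly by minimizing the re-projection error with the Levenberg-Marquardt algorithm before fusing the depth images into a dense point cloud model.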
