A 3D Modeling Method for Robot Hand-Eye Coordinated Grasping

  • Abstract: For robot hand-eye coordinated grasping, a 3D modeling method for common objects in the household environment is proposed. Exploiting the RGB-D sensor's ability to capture an RGB image and a depth image simultaneously, feature points and feature descriptors are extracted from the RGB image, and descriptor matching establishes correspondences between adjacent frames. A RANSAC (RANdom SAmple Consensus) based three-point algorithm computes the relative pose between adjacent frames, and, based on closing the loop over the acquisition path, the pose estimates are refined by minimizing the reprojection error with the Levenberg-Marquardt algorithm. With this method, a dense 3D point cloud model of an object can be obtained simply by placing the object on a flat tabletop and collecting 10 to 20 frames of data around it. 3D models were built for 20 common household objects suitable for grasping by a service robot. Experimental results show that the error is about 1 mm for models 5 cm to 7 cm in diameter, which satisfies the requirements of pose computation for robot grasping.
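The frame-to-frame pose step described in the abstract can be sketched compactly: because the RGB-D sensor gives a 3D point for every matched feature, the relative pose between adjacent frames is the rigid transform that aligns the matched 3D point sets, and three non-collinear correspondences suffice to determine it. Below is a minimal NumPy sketch of the RANSAC-based three-point alignment; the function names, the 5 mm inlier threshold, and the iteration count are illustrative assumptions rather than values from the paper, and the loop-closure Levenberg-Marquardt refinement is omitted.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) with Q ~= P @ R.T + t (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_three_point(P, Q, iters=300, thresh=0.005, seed=None):
    """Sample 3 correspondences per iteration, fit a rigid transform,
    and keep the hypothesis with the most inliers (error < thresh, metres).
    The sampled triplet always fits its own model exactly, so the final
    consensus set has at least 3 points and the refit below is well posed."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        inliers = np.linalg.norm(P @ R.T + t - Q, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    R, t = rigid_transform(P[best], Q[best])  # refit on the full consensus set
    return R, t, best
```

Chaining these pairwise transforms along the 10-to-20-frame acquisition path gives an initial pose for every frame, which a loop-closure bundle adjustment (Levenberg-Marquardt on the reprojection error, as in the paper) would then refine.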

     
