Fast Spatial Object Location Method for Service Robot Based on Co-saliency
XU Tao1,2, JIA Songmin1, ZHANG Guoliang1
1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China;
2. School of Mechanical and Electrical Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
Abstract: In the traditional pipeline, a service robot performs detection, recognition, and location sequentially, which limits execution efficiency. To address this problem, a fast spatial object location method based on co-saliency detection is proposed for service robots. First, N pairs of RGB and depth images containing the target object are captured with an RGB-D sensor, and the target object is treated as the co-salient object. By fully exploiting the saliency propagation mechanism within a single RGB image, a two-stage guided co-saliency detection model is constructed on the basis of inter-image saliency propagation and intra-image manifold ranking. The background and non-co-salient objects are excluded to obtain the pixel coordinate set of the co-salient object region. Then the spatial coordinates of the object centroid are determined through the correspondence between the RGB image and the depth image, achieving fast location of the spatial object. Finally, experiments are conducted on the iCoseg benchmark dataset and on a hand-eye-calibrated service robot manipulator grasping platform. The results show that the proposed method outperforms five state-of-the-art co-saliency detection algorithms, meets the real-time requirements of the grasping system, and offers better accuracy and robustness under complicated backgrounds, multi-target interference, and varying illumination.
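The final localization step described above, recovering the 3D centroid of the co-salient region from the RGB-depth correspondence, can be sketched with the standard pinhole back-projection model. This is a minimal illustration, not the paper's implementation; the camera intrinsics (`fx`, `fy`, `cx`, `cy`) and the function name are assumptions, and real RGB-D pipelines must additionally handle sensor-specific depth scaling and RGB-to-depth registration.

```python
import numpy as np

def centroid_to_3d(pixel_coords, depth_map, fx, fy, cx, cy):
    """Back-project the centroid of a co-salient pixel region to 3D camera coordinates.

    pixel_coords: (N, 2) integer array of (u, v) pixels in the co-salient region.
    depth_map:    (H, W) array of depth values in meters, aligned with the RGB image.
    fx, fy, cx, cy: pinhole intrinsics of the (registered) RGB-D camera.
    """
    us, vs = pixel_coords[:, 0], pixel_coords[:, 1]
    depths = depth_map[vs, us]          # depth sample for each region pixel
    valid = depths > 0                  # discard pixels with missing depth
    u = us[valid].mean()                # 2D centroid of the valid region
    v = vs[valid].mean()
    z = depths[valid].mean()            # average depth of the region
    # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

For a grasping platform, the resulting camera-frame point would then be transformed into the manipulator base frame using the hand-eye calibration matrix.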
[1] Tang K, Joulin A, Li L J, et al. Co-localization in real-world images[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2014: 1464-1471.
[2] Cho M, Kwak S, Schmid C, et al. Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2015: 1201-1210.
[3] Wu X R, Huang G M, Sun L N. Fast visual identification and location algorithm for industrial sorting robots based on deep learning[J]. Robot, 2016, 38(6): 711-719. (in Chinese)
[4] Yin J Q, Tian G H, Gao X, et al. Object searching scheme for service robot oriented to ward inspection[J]. Robot, 2011, 33(5): 570-578, 620. (in Chinese)
[5] Han Z, Liu H J, Huang W B, et al. Kinect-based object grasping by manipulator[J]. CAAI Transactions on Intelligent Systems, 2013, 8(2): 149-155. (in Chinese)
[6] Yang Y, Cao Q X, Zhu X X, et al. A 3D modeling method for robot's hand-eye coordinated grasping[J]. Robot, 2013, 35(2): 151-155. (in Chinese)
[7] Qin Y, Lu H C, Xu Y Q, et al. Saliency detection via cellular automata[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2015: 110-119.
[8] Jacobs D E, Goldman D B, Shechtman E. Cosaliency: Where people look when comparing images[C]//23rd Annual ACM Symposium on User Interface Software and Technology. New York, USA: ACM, 2010: 219-227.
[9] Li H L, Ngan K N. A co-saliency model of image pairs[J]. IEEE Transactions on Image Processing, 2011, 20(12): 3365-3375.
[10] Zhang Y B, Han J W, Guo L, et al. A new algorithm for detecting co-saliency in multiple images through sparse coding representation[J]. Journal of Northwestern Polytechnical University, 2013, 31(2): 206-209. (in Chinese)
[11] Fu H Z, Cao X C, Tu Z W. Cluster-based co-saliency detection[J]. IEEE Transactions on Image Processing, 2013, 22(10): 3766-3778.
[12] Cao X C, Tao Z Q, Zhang B, et al. Self-adaptively weighted co-saliency detection via rank constraint[J]. IEEE Transactions on Image Processing, 2014, 23(9): 4175-4186.
[13] Li Y J, Fu K R, Liu Z, et al. Efficient saliency-model-guided visual co-saliency detection[J]. IEEE Signal Processing Letters, 2015, 22(5): 588-592.
[14] Ge C J, Fu K R, Liu F H, et al. Co-saliency detection via inter and intra saliency propagation[J]. Signal Processing: Image Communication, 2016, 44: 69-83.
[15] Chang K Y, Liu T L, Lai S H. From co-saliency to co-segmentation: An efficient and fully unsupervised energy minimization model[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2011: 2129-2136.
[16] Achanta R, Shaji A, Smith K, et al. SLIC superpixels compared to state-of-the-art superpixel methods[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012, 34(11): 2274-2282.
[17] Xu B, Bu J J, Chen C, et al. EMR: A scalable graph-based ranking model for content-based image retrieval[J]. IEEE Transactions on Knowledge and Data Engineering, 2015, 27(1): 102-114.
[18] Batra D, Kowdle A, Parikh D, et al. iCoseg: Interactive co-segmentation with intelligent scribble guidance[C]//IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 2010: 3169-3176.
[19] Liu Z, Zou W B, Li L N, et al. Co-saliency detection based on hierarchical segmentation[J]. IEEE Signal Processing Letters, 2014, 21(1): 88-92.
[20] Ye L W, Liu Z, Li J H, et al. Co-saliency detection via co-salient object discovery and recovery[J]. IEEE Signal Processing Letters, 2015, 22(11): 2073-2077.