HUANG Min, BAO Susu, QIU Wenchao. Study and Simulation of Surgical Navigation Based on Binocular Vision under Visible Light[J]. ROBOT, 2014, 36(4): 461-468, 476. DOI: 10.13973/j.cnki.robot.2014.0461.
Two key techniques of a surgical navigation system are studied. To track a surgical instrument against a cluttered background, an instrument identification plate with a circular texture pattern is designed, and it is tracked by the continuously adaptive mean-shift (CamShift) algorithm using hue, saturation, and texture features. Texture is extracted with the uniform local binary pattern (LBP), a point-sample estimation method. A Hough circle transform is then introduced to re-initialize the search window whenever the target is lost, so that tracking is fully automatic. To visualize the probe in the three-dimensional model, the probe position captured by the cameras is mapped into the coordinate system of the three-dimensional lesion model reconstructed preoperatively, exploiting the fact that the probe position in the patient coordinate system and in the model marker coordinate system differ only by a scale factor. Simulation results show that the system is simple and stable, and meets the precision requirements of surgical navigation.
[1] Ewers R, Schicho K, Undt G, et al. Basic research and 12 years of clinical experience in computer-assisted navigation technology: A review[J]. International Journal of Oral and Maxillofacial Surgery, 2005, 34(1): 1-8.
[2] Peterhans M, vom Berg A, Dagon B, et al. A navigation system for open liver surgery: Design, workflow and first clinical applications[J]. International Journal of Medical Robotics and Computer Assisted Surgery, 2011, 7(1): 7-16.
[3] Pallath N T, Thomas T. Real time computer assisted surgical navigation[C]//International Conference on Data Science & Engineering. Piscataway, USA: IEEE, 2012: 128-133.
[4] Shen Y. High precision surgical navigation and its application[D]. Shanghai: Shanghai Jiao Tong University, 2012.
[5] Northern Digital Inc. Passive Polaris Spectra User Guide[EB/OL]. (2011-03-01) [2013-09-01]. http://www.ndigital.com.
[6] Claron Technology Inc. MicronTracker Developer's Manual[EB/OL]. (2012-09-21) [2013-09-01]. http://www.calarontech.com.
[7] Bai J. A study on registration and preoperative planning issues of image guided surgery system[D]. Beijing: Tsinghua University, 2007.
[8] Luo H L, Jia F C, Zheng Z Z, et al. An IGSTK-based surgical navigation system connected with medical robot[C]//IEEE Youth Conference on Information Computing and Telecommunications. Piscataway, USA: IEEE, 2010: 49-52.
[9] Shi J, Tomasi C. Good features to track[C]//IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway, USA: IEEE, 1994: 593-600.
[10] Bradski G, Kaehler A. Learning OpenCV: Computer vision with the OpenCV library[M]. Sebastopol, USA: O'Reilly Media, 2008.
[11] Yilmaz A, Javed O, Shah M. Object tracking: A survey[J]. ACM Computing Surveys, 2006, 38(4): 13.
[12] Bradski G. Computer vision face tracking for use in a perceptual user interface[J]. Intel Technology Journal, 1998, 2(2): 1-15.
[13] Allen J G, Xu R Y D, Jin J S. Object tracking using CamShift algorithm and multiple quantized feature spaces[C]//Proceedings of the Pan-Sydney Area Workshop on Visual Information Processing. 2004: 3-7.
[14] Ojala T, Valkealahti K, Oja E, et al. Texture discrimination with multidimensional distributions of signed gray-level differences[J]. Pattern Recognition, 2001, 34(3): 727-739.
[15] Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002, 24(7): 971-987.
[16] Ning J F, Zhang L, Zhang D, et al. Robust object tracking using joint color-texture histogram[J]. International Journal of Pattern Recognition and Artificial Intelligence, 2009, 23(7): 1245-1263.
[17] Sonka M, Hlavac V, Boyle R. Image processing, analysis, and machine vision[M]. Stanford, USA: Cengage Learning, 2008.