Open source video-based hand-eye calibration
Published in SPIE Medical Imaging, 2023
Recommended citation: Kemper, T.H., Allen, D.R., Rankin, A., Peters, T.M., and Chen, E.C.S. (2023). "Open source video-based hand-eye calibration," in SPIE Medical Imaging, 12466. https://doi.org/10.1117/12.2651160
Augmented reality is becoming prevalent in modern video-based surgical navigation systems. Augmented reality, in the form of image fusion between virtual objects (i.e. virtual representations of the anatomy derived from pre-operative imaging modalities) and real objects (i.e. anatomy imaged by a spatially tracked surgical camera), facilitates the visualization and perception of the surgical scene. However, this requires a spatial calibration between the external tracking system and the optical axis of the surgical camera, known as hand-eye calibration. Because the standard implementations of the most common hand-eye calibration techniques rely on static photos, the time required for data collection may limit the thoroughness and robustness needed to achieve an accurate calibration. To address these translational issues, we introduce an accurate and robust video-based hand-eye calibration technique with an open-source implementation. Based on point-to-line Procrustean registration, a short video of a tracked and pivot-calibrated ball-tip stylus is recorded where, in each frame of the tracked video, the 3D position of the ball-tip (point) and its projection onto the video (line) serve as a calibration data point. We further devise a data sampling mechanism designed to optimize the spatial configuration of the calibration fiducials, leading to consistently high-quality hand-eye calibrations. To demonstrate the efficacy of our work, a Monte Carlo simulation was performed to obtain the mean target projection error as a function of the number of calibration data points. The results, exemplified using a Logitech C920 Pro HD Webcam with an image resolution of 640 × 480, show that the mean projection error decreased as more data points were used per calibration, and the majority of mean projection errors fell below four pixels. An open-source implementation, in the form of a 3D Slicer module, is available on GitHub.
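The core geometric step, point-to-line registration, can be sketched as an alternating scheme: project each transformed 3D ball-tip position onto its corresponding camera ray, then solve an ordinary point-to-point Procrustes problem against those projections, and repeat. The sketch below (in Python with NumPy) illustrates the idea under these assumptions; function names are illustrative and it is not the paper's or the 3D Slicer module's actual implementation.

```python
import numpy as np

def point_to_point_registration(src, dst):
    """Least-squares rigid registration (SVD/Kabsch method): find R, t
    minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def point_to_line_registration(points, origins, directions, iters=2000):
    """Iterative point-to-line Procrustean registration (illustrative sketch).

    points:     (N, 3) 3D fiducial positions (e.g. tracked ball-tip).
    origins:    (N, 3) a point on each line (e.g. camera center per frame).
    directions: (N, 3) line directions (e.g. back-projected pixel rays).
    Alternates between projecting the transformed points onto their lines
    and solving a point-to-point Procrustes problem; the cost is
    non-increasing at every iteration.
    """
    directions = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = points @ R.T + t
        # Closest point on each line to the corresponding transformed point.
        s = np.einsum('ij,ij->i', moved - origins, directions)
        targets = origins + s[:, None] * directions
        R, t = point_to_point_registration(points, targets)
    return R, t
```

In the noiseless case a zero-residual rigid transform exists (every transformed point lies exactly on its ray), and the alternation drives the mean point-to-line distance toward zero; with real tracked video, the residual instead reflects tracking and segmentation noise.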