A co-calibration framework for the accuracy assessment of vision-based tracking systems

Published in SPIE Medical Imaging, 2022

Recommended citation: Allen, D., Peters, T. M., and Chen, E. C. S. (2022). "A co-calibration framework for the accuracy assessment of vision-based tracking systems." In SPIE Medical Imaging, 120342J. https://doi.org/10.1117/12.2606815

Advancements in Head-Mounted Display (HMD) technology have led to an increasing focus on the development of Augmented Reality (AR) applications in the image-guided surgery field. These applications are often enabled by third-party vision-based tracking techniques, allowing virtual models to be registered to their corresponding real-world objects. The accuracy of the underlying vision-based tracking technique is critical to the efficacy of these systems and must be thoroughly evaluated before integration into clinical practice. In this paper, we propose a framework for evaluating the technical accuracy of the HMD's intrinsic vision-based tracking techniques using an extrinsic tracking system as ground truth. Specifically, we assess the tracking accuracy of the Vuforia Augmented Reality Software Development Kit (SDK), a vision-based tracking technique commonly used in conjunction with the Microsoft HoloLens 2, against a commercial optical tracking system using a co-calibration apparatus. The framework follows a two-stage pipeline: first calibrating the cameras with respect to the optical tracker (hand-eye calibration), and then calibrating a Vuforia target to the optical tracker using a custom calibration apparatus. We then evaluate the absolute tracking accuracy of three Vuforia target types (image, cylinder, and cube) using a stand-alone Logitech webcam and the front-facing camera on the HoloLens 2. The hand-eye calibration projection errors were 1.4 ± 0.6 pixels for the Logitech camera and 2.3 ± 1.2 pixels for the HoloLens 2 camera. The cylinder target provided the most stable and accurate tracking, with mean errors of 12.5 ± 0.6 mm and 10.7 ± 0.0 mm for the Logitech and HoloLens 2 cameras, respectively. These results show that Vuforia has promising potential for integration into surgical navigation systems, but the type and size of target must be optimized for the particular surgical scenario to minimize tracking error. Future work will use our framework to perform a more robust analysis of optimal target shapes and sizes for vision-based navigation systems, both independently and when fused with the Simultaneous Localization and Mapping (SLAM) based tracking embedded in the Microsoft HoloLens 2.
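To make the transform chain concrete, below is a minimal sketch in Python of the two stages described above, assuming all poses are available as 4×4 homogeneous matrices. The helper and variable names (tracker_T_marker, ref_T_vuforia, load-style placeholders, etc.) are illustrative only and are not part of the published framework, the Vuforia SDK, or any optical tracker API; only the OpenCV hand-eye solver is a real library call.

```python
# Illustrative sketch of the two-stage co-calibration/evaluation pipeline.
# Assumptions: poses are 4x4 homogeneous matrices; an optically tracked marker
# is rigidly attached to the camera; a second tracked reference body is rigidly
# attached to the Vuforia target. Names like tracker_T_marker are hypothetical.
import numpy as np
import cv2


def to_Rt(T):
    """Split a 4x4 homogeneous transform into (rotation, translation)."""
    return T[:3, :3], T[:3, 3]


def hand_eye_calibration(tracker_T_marker_list, cam_T_pattern_list):
    """Stage 1: estimate the fixed marker-to-camera transform by solving the
    AX = XB hand-eye problem with OpenCV, from paired optical-tracker poses of
    the camera-mounted marker and camera poses of a calibration pattern."""
    R_g2b, t_g2b = zip(*[to_Rt(T) for T in tracker_T_marker_list])
    R_t2c, t_t2c = zip(*[to_Rt(T) for T in cam_T_pattern_list])
    R, t = cv2.calibrateHandEye(list(R_g2b), list(t_g2b),
                                list(R_t2c), list(t_t2c),
                                method=cv2.CALIB_HAND_EYE_TSAI)
    marker_T_cam = np.eye(4)
    marker_T_cam[:3, :3], marker_T_cam[:3, 3] = R, t.ravel()
    return marker_T_cam


def absolute_tracking_error(cam_T_vuforia_measured,
                            tracker_T_marker, tracker_T_ref,
                            marker_T_cam, ref_T_vuforia):
    """Stage 2 evaluation: chain the optical-tracker poses and the two fixed
    calibrations into a ground-truth camera-to-target pose, then compare it
    against the pose reported by the vision-based tracker (translation only)."""
    cam_T_vuforia_gt = (np.linalg.inv(marker_T_cam)
                        @ np.linalg.inv(tracker_T_marker)
                        @ tracker_T_ref
                        @ ref_T_vuforia)
    return np.linalg.norm(cam_T_vuforia_measured[:3, 3]
                          - cam_T_vuforia_gt[:3, 3])
```

The key design point is that every quantity on the ground-truth side comes either from the optical tracker or from a one-time calibration, so any residual translation reported by absolute_tracking_error can be attributed to the vision-based tracker under test.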

Download paper here | BibTeX