Software Platform for Image-Guided Interventions


Image guidance is the basis for a wide range of minimally invasive interventions, from percutaneous ablations to robotic surgeries. It enables accurate targeting and monitoring of a lesion through minimal incisions by mapping image information onto the operating field. In its ideal form, image guidance is a closed-loop process: the physician plans an approach to the lesion using images (planning), maneuvers surgical tools to the lesion by hand or with a robotic device (control), delivers a therapeutic effect (delivery), monitors the effect using imaging and/or sensors (feedback), and updates the plan accordingly. Each cycle of this closed-loop process must be short enough to keep up with rapid changes in the operating field, such as target displacement due to organ motion or temperature variations during thermal ablations. Closing the loop is challenging, especially when the process involves hardware and software components from different vendors: those components must communicate in real time during the procedure, yet there is no unified real-time communication (URTC) standard suitable for closed-loop image-guided therapy (IGT). Because of this lack of a standard, vendors can only provide proprietary interfaces, which are not interoperable.
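The plan-control-deliver-feedback cycle described above can be sketched as a simple feedback loop. The following minimal Python sketch is purely illustrative: the 1-D "tool position", gain, and tolerance are invented placeholders, not part of any IGT system.

```python
def run_closed_loop(target, max_cycles=3, tolerance=0.5):
    """Drive a simulated 1-D tool toward a target, re-planning from feedback.

    Illustrative only: a real system would replace each step with imaging,
    actuation, and sensing hardware. All numbers here are hypothetical.
    """
    tool = 10.0  # simulated tool position (e.g., mm along the insertion axis)
    error = target - tool
    for _ in range(max_cycles):
        plan = target - tool     # planning: desired displacement, from images
        tool += 0.8 * plan       # control/delivery: move part-way (imperfect actuation)
        error = target - tool    # feedback: re-image and measure the residual error
        if abs(error) < tolerance:
            break                # update: plan satisfied, stop the cycle
    return tool, error
```

The key point is that each iteration re-measures the target rather than trusting the original plan, which is what makes the loop robust to organ motion and other intra-procedural changes.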

To address this engineering challenge, we have developed an open-source network communication interface specifically designed for image-guided interventions, named OpenIGTLink, with support from the NIH (R01EB020667, R01CA111288, P41EB015898, U54EB005149). It aims to provide plug-and-play integration of multiple components in the operating room for unified real-time communication, so that imagers, sensors, surgical robots, and computers from different vendors work cooperatively, and to ensure seamless data flow among those components for closed-loop interventions. OpenIGTLink has been used in a number of research software packages, including 3D Slicer, IGSTK, BioImage Suite, PLUS, MITK, MeVisLab, NifTK, CustusX, and IBIS, and has supported the successful clinical translation of new imaging and robotics technologies.
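At the wire level, an OpenIGTLink message is a fixed-size binary header followed by a message-type-specific body. The sketch below packs a version-1-style 58-byte header (version, type, device name, timestamp, body size, CRC, in network byte order) using only the Python standard library; it is a simplified illustration, not a replacement for the official C++ library, and it leaves the 64-bit body CRC as 0 rather than computing it.

```python
import struct

# Big-endian fields: version (uint16), type (char[12]), device name (char[20]),
# timestamp (uint64), body size (uint64), CRC (uint64) -> 58 bytes total.
IGTL_HEADER_FORMAT = ">H12s20sQQQ"

def pack_igtl_header(msg_type, device_name, body, version=1):
    """Pack an OpenIGTLink-style message header for the given body.

    Simplified sketch: timestamp is set to 0 ("unavailable") and the CRC
    field is left as 0; a real implementation must compute the 64-bit CRC
    of the body for the receiver to accept the message.
    """
    return struct.pack(
        IGTL_HEADER_FORMAT,
        version,
        msg_type.encode("ascii"),     # e.g. b"TRANSFORM", null-padded to 12 bytes
        device_name.encode("ascii"),  # null-padded to 20 bytes
        0,                            # timestamp placeholder
        len(body),                    # body size in bytes
        0,                            # CRC placeholder
    )

header = pack_igtl_header("STATUS", "Device0", b"\x00" * 30)
```

Because every message starts with this fixed header, a receiver can read exactly 58 bytes, learn the body size and type, and then dispatch the body to the right parser, which is what makes heterogeneous devices interoperable on one socket.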

More recently, we have been extending the idea of plug-and-play integration of components by reaching out to the broader research community, including robotics, to facilitate component-based development of image-guided robot-assisted intervention (IGRI) systems. Specifically, we prototyped bridging software named ROS-OpenIGTLink Bridge, which interfaces ROS and 3D Slicer using OpenIGTLink; we envisioned that ROS-OpenIGTLink Bridge would help developers build systems from existing components for robotics available in ROS and for medical image computing available in 3D Slicer. As a proof of concept, an IGRI system using 3D Slicer, ROS, and Lego Mindstorms EV3 was prototyped at Winter Project Week 2016 (January 4-8, 2016, Cambridge, MA), a hackathon-style event hosted by the NIH National Alliance for Medical Image Computing (U54 EB005149). The system mimics a surgical robot that actuates its end-effector to follow a trajectory defined on a medical image (e.g., CT, MRI). The system consisted of an active 3-degree-of-freedom (DoF) parallel-link manipulator, a control "brick", and computers that ran the ROS master server and the navigation software, 3D Slicer. We performed a mock procedure using an MR image of the brain and a 2D phantom created from the image. This mock procedure demonstrated that the Lego-based system can reproduce the architecture of a research- or commercial-grade robotic system and mimic a realistic clinical workflow, making it an ideal tool for rapid prototyping and education.
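The core computation in following an image-defined trajectory is mapping each point from image coordinates into the robot's coordinate frame through a 4x4 homogeneous registration transform. A minimal Python sketch, where the registration matrix and trajectory points are invented values for illustration:

```python
def apply_transform(matrix, point):
    """Apply a 4x4 homogeneous transform (row-major nested lists) to a 3-D point."""
    x, y, z = point
    v = (x, y, z, 1.0)  # homogeneous coordinates
    return tuple(sum(matrix[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical image-to-robot registration: a 90-degree rotation about z
# plus a translation of (100, 50, 0) mm. A real registration would come
# from fiducial- or tracker-based calibration.
image_to_robot = [
    [0.0, -1.0, 0.0, 100.0],
    [1.0,  0.0, 0.0,  50.0],
    [0.0,  0.0, 1.0,   0.0],
    [0.0,  0.0, 0.0,   1.0],
]

# Trajectory points picked on the image (mm), mapped into robot coordinates.
trajectory_image = [(10.0, 0.0, 5.0), (20.0, 0.0, 5.0)]
trajectory_robot = [apply_transform(image_to_robot, p) for p in trajectory_image]
```

In a bridge-based setup, the navigation software would send such transforms and target points over OpenIGTLink, and the ROS side would convert them into joint commands for the manipulator.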