At the annual meeting of the American Association of Neurological Surgeons in San Diego, CA, I was honored to have the opportunity to speak in front of some of the most prolific spine surgeons from around the nation and the world.
I gave a talk titled “Intraoperative Mixed Reality Holographic Visualization during Spinal Fixation: Early Insights” to what I assumed would be, at best, a mildly enthusiastic audience. When I received no questions during the Q&A session, my suspicions seemed confirmed.
The reason for this expectation? Surgeons don’t really like change, and they certainly don’t like voodoo. My talk had the makings of both.
In medicine, all of the imaging data we acquire comes in two dimensions, from X-rays and ultrasounds to CT and MRI scans. The human body, of course, is not a two-dimensional system. As surgeons, we are tasked with compiling all of that 2D data to build a 3D representation of the patient’s anatomy in our heads. Each of us uses this mental representation to guide our surgical plan.
But why are we relying on “guesstimates” of the 3D relationships of the structures we are operating on? Why not directly visualize the relevant anatomy in real 3D space, and have that information available during the operation itself? This could dramatically shift the precision and ergonomics of surgery. As it turns out, the technology to do this has become a mixed reality (pun intended!).
Augmented reality (AR) technology in the medical field has exploded in the past two to three years, since the release of consumer AR headsets such as the Microsoft HoloLens. A significant number of biotech startups are creating software packages for these headsets that automatically build 3D models from entire 2D image sequences, a process known as volume rendering. While wearing the headset, surgeons can then use these volume renderings pre-operatively for surgical planning. However, there is a huge leap from using something before surgery to using it during surgery.
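For readers curious what the first step of volume rendering involves under the hood, here is a minimal, hypothetical sketch: stacking 2D image slices into a 3D voxel grid and thresholding for bone-like intensities. The function names, slice spacing, and Hounsfield threshold are all illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np

def build_volume(slices, slice_thickness_mm=1.0, pixel_spacing_mm=0.5):
    """Stack 2D image slices (each a 2D array) into a 3D volume.

    Spacing values are illustrative; real scans carry them in metadata.
    """
    volume = np.stack(slices, axis=0)  # shape: (num_slices, rows, cols)
    spacing = (slice_thickness_mm, pixel_spacing_mm, pixel_spacing_mm)
    return volume, spacing

def segment_bone(volume, threshold_hu=300):
    """Binary mask of voxels above a bone-like intensity threshold."""
    return volume >= threshold_hu

# Toy example: four 8x8 "CT" slices with a bright 2x2 block in the middle.
slices = [np.zeros((8, 8)) for _ in range(4)]
for s in slices:
    s[3:5, 3:5] = 400  # bone-intensity region
volume, spacing = build_volume(slices)
mask = segment_bone(volume)
print(volume.shape, int(mask.sum()))  # (4, 8, 8) and 16 bone voxels
```

The resulting binary mask is what a renderer would turn into the translucent 3D hologram the surgeon sees; everything downstream (surface extraction, shading, display) is the headset software's job.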
This is why we set out to test the feasibility of using this technology in the operating room itself for future surgical guidance. We built our own pipeline to create, deploy, and register 3D models in real time to each patient’s individual anatomy. The learning curve was steep, but over several months we brought our accuracy measurements down to the 1-3 mm range, which is close to the accepted range of existing 2D navigation technologies.
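Registering a model to a patient and measuring accuracy in millimeters can be sketched with a classic point-based approach: match paired landmark points, solve for the best rigid fit, and report the residual error. This is a generic textbook method (the Kabsch algorithm), offered only as an illustration of how an accuracy number like “1-3 mm” gets computed; the landmark coordinates below are made up, and this is not our actual pipeline.

```python
import numpy as np

def rigid_register(model_pts, patient_pts):
    """Best-fit rotation R and translation t mapping model_pts onto patient_pts."""
    mc, pc = model_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (model_pts - mc).T @ (patient_pts - pc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = pc - R @ mc
    return R, t

def registration_error_mm(model_pts, patient_pts, R, t):
    """Root-mean-square distance between transformed model and patient landmarks."""
    mapped = model_pts @ R.T + t
    return float(np.sqrt(np.mean(np.sum((mapped - patient_pts) ** 2, axis=1))))

# Toy landmarks (mm): the "patient" points are the model rotated 10 degrees,
# shifted, and jittered by ~1 mm to mimic measurement noise.
rng = np.random.default_rng(0)
model = rng.uniform(0, 100, size=(6, 3))
theta = np.deg2rad(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
patient = model @ Rz.T + np.array([5.0, -3.0, 12.0]) + rng.normal(0, 1, (6, 3))

R, t = rigid_register(model, patient)
err = registration_error_mm(model, patient, R, t)
print(f"registration error: {err:.2f} mm")
```

With ~1 mm of simulated noise, the residual error lands in the low single-digit millimeter range, which is exactly the kind of number a navigation system quotes for its accuracy.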
After leaving the stage without a single question, I thought the future of this technology might be bleaker than I had hoped. Once the formalities of the presentation setting fell away, however, the story was quite the opposite. Several surgeons approached me to say they had enjoyed the presentation and thought the technology had significant potential. They wanted to know how we built the pipeline, how long the models take to make, and how difficult it would be to implement.
Make no mistake: this technology is not fully ready. There are still many kinks to work out, but as we and others continue to develop it, hopefully the idea of voodoo and change will morph into clarity and a new gold standard of cutting-edge precision surgery.
Vivek Buch, MD is a senior neurosurgical resident at the University of Pennsylvania. He is on the state board of the Pennsylvania Neurosurgical Society. He has a background in coding, computational neuroscience, and neural engineering, and is performing innovative work on the development of novel surgical technologies. He tweets at @VivekBuchMD.