The American Association of Neurological Surgeons (AANS) Annual Scientific Meeting is an important event for most neurosurgeons. Each year thousands of attending neurosurgeons, residents, fellows, advanced care practitioners, researchers, administrators, medical students, and industry leaders gather from around the world to learn about the latest advances in the field.
I have regularly attended, participated in, and presented at AANS meetings since the 1980s, and I have always found it interesting to see the concentration of emerging technologies discussed and on display. Neurosurgery is an inherently technical field, and we rely heavily on advances in technology to improve the care of patients with brain tumors, cerebrovascular disorders, spine and spinal cord problems, epilepsy, and other ailments.
Advancing digital technology for the benefit of patients has been an enduring personal interest. I believe every neurosurgeon should become educated about the relevant advances and learn how to use evolving techniques to their advantage — not only in the operating room, but in planning, education, and even patient consultations.
After all, intraoperative imaging, precise navigation, integrated software, surgical planning tools, robotics, state-of-the-art surgical microscopes with 4K and 3D visualization, heads-up displays, and virtual reality (VR) and augmented reality (AR) have real potential to make us better, safer, more informed, and more precise neurosurgeons.
One practical example of how advanced technologies can converge for the benefit of patients is in the combined, integrated use of 1) virtual reality, 2) microscope navigation integration, and 3) Heads-Up Display (HUD), all during the same case. In this increasingly common application the workflow begins by obtaining MRI, CT/CTA and sometimes catheter angiograms with very specific and rigorous parameters.
Image acquisition should be coordinated with our colleagues in Neuroradiology to obtain the optimal data upon which 3D VR “scenarios” can be created. Using software from a number of different industry partners, critical structures are segmented, selected, and painted before surgery, and these are then shared with the surgical navigation platforms.
In step 2, frameless surgical navigation is painstakingly confirmed and rigorously rechecked right after induction of anesthesia. The operating microscope (or an endoscope or exoscope) is then linked to the navigation system. This recent advance shifts the “navigated” point from a handheld probe to the focal point of the operating microscope. Microscope integration means that the surgeon is navigating with the eyes rather than the hands. This may initially seem like a small advance, but it opens a new world of precision and improved spatial awareness when combined with the next step.
In step 3, portions of the 3D VR scenario created in step 1 are injected into the oculars of the operating microscope (or the screen of the exoscope or endoscope), directly overlaying highlighted structures and functional information onto the surgical anatomy seen through the eyes of the surgeon. This means that critical and dangerous structures, such as major blood vessels and cranial nerves, can be identified before they are encountered surgically. Their location can be monitored in real time by the surgeon, even during difficult maneuvers like drilling the skull base, separating deep arachnoid planes, or navigating through deep white matter tracts. The primary advance is the ability to see these structures at the same moment the operator needs the information. This is very different from the traditional method of suspending surgery to look at a navigation screen and then restarting the operation.
The same augmented information stream is available to everyone in the operating room, so that nurses, scrub technicians, residents, and students all gain a greater appreciation of the approach and the detailed surgical anatomy during each case.
The field is rapidly evolving: surgeon-controlled robotics, image-guided robotics, and enhanced exoscope features are already available to improve on the workflows I just described. At this year’s AANS, I saw a significant convergence of advanced digital technologies applied to care in the operating room, expanding our capacity to make once-complex surgeries simple.
In December, the Department of Neurosurgery at Mount Sinai will host a symposium on artificial intelligence, advanced digital technologies, and device development in neurosurgery. Attendees will have the opportunity to engage with a unique range of domain scientists, physicians, engineers, and entrepreneurs, who together form the backbone of successful medical device development and clinical translation.
Dr. Joshua B. Bederson is a professor and chair of Neurosurgery for the Mount Sinai Health System. He specializes in brain tumor, skull base, and vascular surgeries. He can be followed on Twitter at @JoshBedersonMD, and the Department of Neurosurgery at @MountSinaiNeuro.