Not too long ago, I found myself in a boutique coffee shop with my 24-year-old cousin, who is a second-year medical student just finishing up her pre-clinical education and starting on her clinical rotations. After spending the first few minutes catching up on the latest family gossip, our conversation turned to medical school and clinical rotations.
“How are the hours?” I asked her. “Brutal?” I nearly aspirated my coffee when she responded with “No… it’s actually not too bad — easier than college!” I probed her for more details, only to find out that in her first two years of pre-clinical coursework, she had attended only a handful of classes in person (under 10, to be exact!). Instead, she stayed home most mornings after sleeping in, drinking coffee and watching the lectures online at double speed (in other words, in half the time!) in her pajamas. All lectures were required to be recorded and posted online; most students watched remotely from home, and only a few actually attended class in person. My shock grew as she told me about her open-book exams, where students are now tested on their ability to access information in a timely manner rather than recite it from memory, as we had to do. But what left me reeling the most was that her first clinical rotation, surgery, had only two overnight calls during the entire rotation! So much for the good old “Massachusetts General style” surgical rotations with “24 on… 24 off” schedules for three months, and the HST/Harvard Medical School curriculum of 8–10 hours of daily didactic lectures that I remember from my medical school days.
I left the coffee shop perplexed by what I had heard. How had the model of medical education evolved so dramatically, with such a shift away from classroom learning, in just the few years since I was in training? Had technology allowed us to make so much progress that the traditional memorization model of medical education was no longer necessary? Were peer-to-peer interactions now mostly conducted through chat rooms and online groups rather than face-to-face? And did this evolution reflect serious progress, or were the doctors of tomorrow going to be ill-prepared to care for complex, critically ill patients in high-stress situations with conflicting data because their skillset would be limited to “looking stuff up”?
I dissected our conversation piece by piece. First, the part about watching lectures from home at double speed. Her justification was that she could review the material twice (once “for an overview” and a second time “for consolidation of knowledge”) in the same amount of time an in-person lecture took. It actually made a lot of sense, and I wondered why we hadn’t done that ourselves. But then again, wasn’t one of the best parts of medical school bouncing ideas and complex cases off your fellow students? Asking questions of the professor in person? Hanging out after anatomy class to review the muscles of the leg together? Weren’t those the moments that forged the colleague and mentor connections we all have for life? And what will happen to these students, whose model of learning is interaction with their laptops instead of with each other? Did we have it right, or is this a more efficient way to learn?
With the advent of the internet, education and the way we learn and process knowledge have evolved dramatically over the past decade. Educational platforms such as Khan Academy (founded by my friend and MIT college classmate, Salman Khan) have revolutionized the way we begin learning and processing information from a very young age. With such technology, learning has definitely shifted outside the classroom in all disciplines. Colleges such as MIT now offer OpenCourseWare, making educational materials accessible on a public platform for anyone to use. These changes have taken down the walls of the hallowed ivory towers and leveled the educational playing field, making those towers accessible to anyone, from any socioeconomic background, who has access to the internet. Clearly, this is progress.
But can the same apply to medical education? Can we rely upon an app to run a Code Blue if a doctor doesn’t have the algorithm memorized? Is that a step forward or backward? Is a doctor still a doctor when their training teaches them how to look up information for the patient rather than carry it themselves? No matter how easily technology can warn us about the interaction between amiodarone and warfarin, I strongly believe there are still fundamental topics in physiology, pathology and pharmacology that every doctor must commit to memory. And in complex clinical situations with conflicting data, a fundamental understanding of disease processes is crucial to making decisions that are not always black and white.
But I do suppose I can appreciate that the general trend in our society has shifted away from the individual and toward our devices. For example, how many phone numbers, addresses or directions can you recite from memory? So it should come as no surprise that there is now a startup company working on an “Alexa”-style smart personal assistant for doctors, which can help with everything from drug-drug interactions to looking up the differential diagnosis for cough and fever.
There is no doubt that the way we learn and interact with new information has changed. The question I am still struggling with is whether it will make the next generation of physicians more efficient, better doctors, or whether it has fundamentally weakened our ability to interact with our peers and mentors and to synthesize and apply critical thinking in situations whose answers aren’t always available on Google or Alexa. In the last decade, has medical education progressed or regressed? What do you think?
Payal Kohli, MD is a cardiologist and a 2018–2019 Doximity Author.