
Is the Future of Human Physicians in Jeopardy?

Op-Med is a collection of original articles contributed by Doximity members.

“Could a robot do my job?” It’s a question that workers across all kinds of occupations have likely wondered at some point. Recent studies suggest that roughly 14% of workers report having already been replaced by automation, and that an estimated 47% of U.S. jobs are at risk of being automated between 2010 and 2030. As technology is incorporated into ever more facets of industry, younger generations are likely asking this question more often as they weigh the job security of a potential future career. One profession, however, stands out as particularly quick to turn up its nose at the idea of being replaced by a robot. Becoming a physician requires some of the most rigorous education and training of any career, and good physicians are described as intelligent, observant, and hardworking, with a wealth of knowledge and expertise that would, in theory, be extremely difficult to recreate. But is replication as impossible as many physicians would like to believe?

Enter Watson, a “robot” designed by IBM using machine learning, handed a vast information database, and made to play one of America’s classic trivia game shows, Jeopardy. Watching Watson compete in 2011 was one of the first times that I, like many others, had truly witnessed artificial intelligence (AI) working in real time, in a human-like fashion.

Machine learning was not exactly a new concept in 2011, but it had spent the preceding decades in relative obscurity as academics and scientists debated its usefulness and tried to shape it into a tool that could be applied easily and broadly. At its core, machine learning involves computer algorithms that analyze past scenarios, examples, or other “data” to identify, quantify, and “learn” the relationships within them. The AI then applies what it has “learned” to future or real-time situations. In the case of Watson, the AI was given several hundred episodes’ worth of Jeopardy clues and instructed to analyze word choice, phrasing, and category names to “learn” what was being asked and what parameters applied (e.g., a certain number of letters or a time period), all while searching its extensive database for an answer. Combine this with a button-pushing arm and audible answer dictation, and Watson became not only a functional Jeopardy contestant, but one that beat Jeopardy champions Ken Jennings and Brad Rutter.
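To make that “learn from past examples, then apply” loop concrete, here is a minimal, purely illustrative sketch in Python. It is not how Watson itself was built; it assumes the scikit-learn library and invents a handful of toy clues and category labels, but it shows the same basic pattern: quantify relationships in past data, then apply them to something new.

```python
# A toy illustration of the machine learning loop described above:
# fit a model on past examples, then apply it to an unseen one.
# Assumes scikit-learn; the clues and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical "past data": old clues paired with their category labels.
past_clues = [
    "This Founding Father appears on the $100 bill",
    "This element's symbol is Au",
    "This 1851 novel opens with 'Call me Ishmael'",
    "This gas makes up about 78% of Earth's atmosphere",
]
labels = ["history", "science", "literature", "science"]

# "Learning": the model quantifies relationships between word choice
# and category across the past examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_clues, labels)

# "Applying": the trained model classifies a clue it has never seen.
print(model.predict(["This playwright wrote Hamlet"]))  # e.g., ['literature']
```

A real question-answering system like Watson layered far more machinery on top of this idea, but the fit-then-predict skeleton is the same.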

The proof of concept provided by the Jeopardy challenge stimulated additional interest in — and resources into — turning Watson into something greater. If Watson could “learn” how to search and apply vast information to answer intricate Jeopardy clues better than the game’s brightest minds, could it do the same to answer questions about patient care and diagnoses? 

Further development pivoted Watson from analyzing past Jeopardy games to analyzing past patient charts, with the goal of creating a new health care product capable of clinical decision-making. Although a fully functional “Dr. Watson” was never achieved, it was not technology that most limited the effort. Comparatively mundane issues, such as IBM’s allocation of resources and its inability to purchase access to sufficient patient data, led to the project being disbanded and sold off. Because patients are more complex than Jeopardy clues, “Dr. Watson” would have been a considerably more complex system than the Watson that won Jeopardy. Still, if “Dr. Watson” was seen as a feasible goal more than a decade ago, technological advances since then have only made machine learning easier to apply and processing and computing power greater.

For instance, in 2022, we witnessed the meteoric unveiling of ChatGPT, a system capable not only of passing U.S. medical licensing exams, but also of expertly explaining its logic in full conversation with a user. Further, attempts are already in full swing to make such intelligent chatbots proficient in expressing empathy while engaging in conversations with patients. As such, it may not be long before an artificial physician can realistically mimic the words and actions of its sentient counterparts in emotionally appropriate ways, and at least surpass the level of patience and empathy expressed by some of the more awkward or burnt-out clinicians among us.

Therefore, when (not if) AI-generated diagnoses and treatment plans are achieved, fine-tuned, and deployed, where would that leave the far less affordable, productive, and consistent human physician? It is a question most physicians never contemplate, out of the naive belief that AI could never adequately imitate their knowledge base or decision-making. Perhaps this is the same naivety of past physicians who once doubted that the da Vinci robot could rival their surgical precision, or that the EHR could replace their meticulous record keeping. Meanwhile, the development of the AI physician has been, is, and will continue to be underway, with or without their help or approval.

Of greater concern is that if physicians refuse to accept the incorporation of AI into their profession, they may find themselves left out of an eventual revolution in medicine. Untold funds and years could be saved if physicians helped create AI that “thinks” the way they do. Additionally, even at the risk of their own job security, physicians can help ensure that the judgment and considerations they bring to patient care are effectively built into the code, which may protect patient safety and equity when these systems are eventually tested and deployed. Simply put, it is inevitable that either physicians will become programmers or programmers will become physicians, and I would surely prefer the former.

How do you feel about the use of AI in medicine? Share your concerns and hopes in the comments!

Matthew J. Duggan is a current MD/MBA student at Thomas Jefferson University Sidney Kimmel Medical College. He is a Temple University graduate and intends to focus his future career in emergency medicine and hospital administration. 

Image by GoodStudio / Shutterstock

All opinions published on Op-Med are the author’s and do not reflect the official position of Doximity or its editors. Op-Med is a safe space for free expression and diverse perspectives. For more information, or to submit your own opinion, please see our submission guidelines or email opmed@doximity.com.
