
AI in Cancer Care


A mentor of mine, an oncologist who directs a busy cancer center, once described a familiar scene: Patients often walk in already armed with information from Google, then leave the visit still planning to search again that night. Even after hearing about scans, treatment options, and side effects, their understanding feels incomplete. That observation has stayed with me. In oncology, every word matters, yet time is our most limited resource. And with staffing stretched thin and medical complexity growing, the need for tools that can reinforce patient understanding has never been greater. Could artificial intelligence help fill these gaps, or would it create new ones?

The Promise

In the right hands, AI can be a force multiplier. In the U.K., one platform boosted cancer detection rates in primary care by nearly 10%. Imaging tools now rival, and sometimes even surpass, radiologists in accuracy. In pathology, AI can scan slides in seconds, helping guide more personalized treatment decisions.

Even patient education is beginning to benefit. Chatbots have shown promise in answering common cancer questions with reasonable accuracy. Some are even being tested to support patients in underserved communities, where language barriers or limited access to specialists can leave dangerous gaps in understanding. The idea is simple: Let AI handle the repetitive, time-consuming explanations so clinicians can focus on the human side of care.

The Peril

But oncology isn’t just about delivering information; it’s about delivering the right information, tailored to each patient’s circumstances. AI is powerful, but it’s still only as good as the data it’s fed. If those datasets underrepresent minority patients, the resulting recommendations risk reinforcing existing disparities.

Regulation is another weak spot. The FDA has cleared hundreds of AI-enabled devices, most in radiology, but there’s still no dedicated pathway for tools that interact directly with patients. Without consistent oversight, quality and safety can vary widely.

And when something goes wrong, who is responsible? The developer? The physician? Most doctors say AI companies should carry the liability, yet many also admit they wouldn’t feel confident spotting a biased or flawed tool in practice. That tension, between trust and uncertainty, remains unresolved.

Where We Go From Here

AI has the potential to reduce burnout by cutting down on documentation and freeing up time for more meaningful patient interactions. But in oncology, where decisions can alter the course of a life, speed cannot trump safety.

Three things are essential if AI is to deliver on its promise:

1) Better data. Large, diverse, and continuously updated datasets to ensure AI reflects the full spectrum of patients we serve.

2) Clear regulation. A robust framework to guarantee AI tools meet enforceable safety and quality standards before reaching patients.

3) Trained clinicians. Doctors must be equipped not only to use AI, but to recognize and correct its mistakes in real time.

If we get this right, AI won’t replace oncologists. Instead, it will strengthen what matters most: helping patients leave a visit with more clarity, understanding, and confidence in the road ahead. And when you’re facing cancer, that clarity is not a luxury; it’s a lifeline.

How should physicians navigate the promise and peril of AI in oncology? Share in the comments.

Birpartap Thind is a medical student with a Master of Healthcare Administration, pursuing a career in anesthesiology with a strong interest in workflow optimization, quality improvement, and interdisciplinary collaboration.

Illustration by Diana Connolly

