A multimodal artificial intelligence (AI) approach surpassed radiologist performance in detecting prostate cancer, according to new research presented at the 2025 annual meeting of the American Urological Association.
As compared with radiologist-read MRI scans, the multimodal AI framework achieved a significantly higher area under the receiver operating characteristic curve (AUROC) and specificity, with similar sensitivity.
“This AI system basically combines MRI and ultrasound in an intelligent kind of way to detect prostate cancer,” said study author Hassan Jahanandish, PhD, Postdoctoral Fellow at Stanford School of Medicine, who presented the findings. “This goes beyond the traditional way of diagnosing prostate cancer that typically looks at MRI alone by radiologists for diagnosis.”
This approach was compared to prostate cancer diagnoses made by radiologists with expertise in this area. “It was more effective at pinpointing specific suspicious areas within the prostate for biopsy,” he explained, “and ultimately improved outcomes for patients.”
MRI-guided fusion biopsies have become the gold standard for prostate cancer diagnosis and have significantly improved the detection of clinically significant disease. Suspicious lesions identified on a pre-biopsy MRI scan are mapped to transrectal ultrasound (TRUS) images during the biopsy, and expert radiologists using pre-biopsy MRI to identify lesions for targeting have been able to increase overall diagnostic sensitivity. Not surprisingly, the adoption of pre-biopsy MRI has increased dramatically during the past two decades, but interpretation of prostate MR images has been reported to vary considerably across radiologists. In addition, the ultrasound-guided biopsy and the pre-biopsy MRI scan are performed at separate times, which involves changes in patient positioning that can result in significant variations in the imaging field.
AI approaches for detecting prostate cancer in MR images have been widely studied, but integrating MRI with TRUS in a multimodal AI system remains unexplored, Dr. Jahanandish explained.
“Our ultimate goal is to improve accuracy and consistency in prostate cancer detection and eventually help radiologists and urologists who are not specialists in this area to better diagnose prostate cancer,” he said. “This will ultimately lead to fewer missed cancers and better outcomes for prostate cancer patients.”
Dr. Jahanandish and colleagues developed a multimodal AI framework that simultaneously uses MRI and TRUS image sequences to detect prostate cancer lesions, and in this study, evaluated its performance against radiologists reading MRI in standard clinical care.
The AI framework automatically aligns MRI with TRUS volumes, followed by a multimodal deep neural network for cancer detection. It includes a separate encoder for each of the T2, ADC, DWI, and TRUS image sequences, plus a shared decoder that integrates the learned representations from all sequences to identify clinically significant prostate cancer (grade group ≥2).
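The encoder-per-sequence design described above can be sketched in miniature. This is an illustrative toy, not the authors' code: the feature dimensions, the toy "encoders," and the averaging-plus-logistic fusion rule in `shared_decoder` are all assumptions made for clarity, standing in for the convolutional networks a real system would use.

```python
import math

def make_encoder(name):
    """Return a toy 'encoder' mapping an image (a list of floats) to a
    fixed-size feature vector. A real system would use a CNN here."""
    def encode(image):
        mean = sum(image) / len(image)
        return [mean, max(image), min(image)]  # 3-dim toy feature
    return encode

def shared_decoder(features):
    """Fuse per-sequence features into one lesion score in (0, 1).
    Toy rule: average every feature value, squash with a logistic."""
    flat = [v for feat in features for v in feat]
    z = sum(flat) / len(flat)
    return 1.0 / (1.0 + math.exp(-z))

# One encoder per imaging sequence, as in the described framework.
encoders = {seq: make_encoder(seq) for seq in ("T2", "ADC", "DWI", "TRUS")}

def detect(volumes):
    """volumes: dict mapping sequence name -> aligned image data."""
    features = [encoders[seq](volumes[seq]) for seq in encoders]
    return shared_decoder(features)
```

The point of the structure is that each modality gets its own representation before fusion, so the decoder can weigh MRI- and ultrasound-derived evidence jointly rather than relying on MRI alone.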
The framework was trained on 1,388 patients with biopsy-confirmed pathology labels, then evaluated on an independent group of 111 patients who had undergone radical prostatectomy. In the same test cohort, the AI model using MRI and TRUS was compared against practicing radiologists reading MRI during routine clinical care at the authors' academic center.
Their results showed that the multimodal AI framework was superior to, or at least on par with, the radiologists' readings. It achieved an AUROC, sensitivity, and specificity of 90%, 79%, and 88%, respectively, compared with 79%, 79%, and 78% for radiologists reading MRI. Overall, the AI model achieved a significantly higher AUROC and specificity, and similar sensitivity.
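For readers unfamiliar with these metrics, here is a minimal sketch of how AUROC, sensitivity, and specificity are computed from per-patient scores and ground-truth labels (1 = clinically significant cancer). The toy labels, scores, and the 0.5 decision threshold are assumptions for illustration only, not the study's data.

```python
def auroc(labels, scores):
    """Rank-based AUROC: the probability that a randomly chosen
    positive case scores higher than a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold=0.5):
    """Sensitivity = detected cancers / all cancers;
    specificity = correctly cleared benign cases / all benign cases."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    tn = sum(p == 0 and l == 0 for p, l in zip(preds, labels))
    fn = sum(p == 0 and l == 1 for p, l in zip(preds, labels))
    fp = sum(p == 1 and l == 0 for p, l in zip(preds, labels))
    return tp / (tp + fn), tn / (tn + fp)
```

AUROC summarizes performance across all possible thresholds, which is why the study can report a higher AUROC for the AI model even while sensitivity at the operating point matches the radiologists'.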
“The next step for this project will be to validate it in larger multicenter studies, integrate it into clinical workflows, and make it usable across different scanners and hospitals,” said Dr. Jahanandish. “We want to make sure that it's useful for every one of those scenarios and hopefully can lead to better outcomes for prostate cancer patients.”
The study was supported by the Stanford Radiology and Urology Departments, an NIH/NCI grant (R37CA260346 to M.R.), and the generous philanthropic support of patients (G.S.).
IP21-24 MULTIMODAL MRI-TRUS AI MODEL EXCEEDS RADIOLOGIST PERFORMANCE IN PROSTATE CANCER DETECTION. Presented April 27, 2025. 2025 Annual Meeting of the American Urological Association
Illustration by Yi Min Chun