
The State of AI in Radiology


I attended the RSNA 2020 Virtual Meeting this year. After traveling to Chicago for this meeting a number of times over the years, attending virtually was a markedly different experience. Lectures, many of them prerecorded, were broadcast all over the world, and feedback from attendees came in real time via chat functions, with direct replies from presenters. As someone with a background and interest in artificial intelligence, I found RSNA 2020 a good opportunity to see how thinking about AI has evolved over the years.

I noticed that a lot of the presentations involved close collaboration between physicians, as clinical experts, and computer scientists and engineers. In addition, datasets have become progressively larger, often as a result of multi-institutional collaborations. One particularly interesting talk, by Dr. Borstelmann, focused on generative adversarial networks (GANs), which can learn from existing images to generate synthetic data, such as synthetic chest radiographs. Among many applications, such a technique could increase the diversity of data used to train machine learning algorithms. Another interesting topic was the legal consequences of AI algorithms. Dr. Harvey, both an MD and a JD, spoke about the legal intricacies of this new technology. One fundamental question is: who will get sued if AI makes a mistake? Regardless of how that question is resolved, as with computer-aided detection (CAD) for mammography, AI is unlikely to absolve physicians of legal liability.
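For readers curious what the GAN idea looks like in code, below is a minimal sketch in PyTorch of adversarial training: a generator learns to produce images that a discriminator cannot distinguish from real ones. The tiny fully connected networks, the 64x64 grayscale input size, and the function names are illustrative assumptions on my part, not the architecture discussed in the talk.

```python
import torch
import torch.nn as nn

# Illustrative only: a toy generator/discriminator pair for 64x64 grayscale
# images (e.g., heavily downsampled radiographs), flattened to vectors.
LATENT_DIM = 100

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),          # pixel values in [-1, 1]
)

discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability "real"
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial update; real_images is a (batch, 64*64) tensor."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: separate real radiographs from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise)
    d_loss = (bce(discriminator(real_images), real_labels)
              + bce(discriminator(fake_images.detach()), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: produce images the discriminator accepts as real.
    g_loss = bce(discriminator(fake_images), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

After enough of these alternating updates, samples drawn from the generator can be mixed into a training set to increase its diversity, which is the use case highlighted in the talk.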

There has recently been a proliferation of publicly available datasets. A number of them are hosted by the government, including The Cancer Imaging Archive (TCIA) and the National Biomedical Imaging Archive (NBIA). Others are hosted by universities, such as the shared datasets available at Stanford's Center for Artificial Intelligence in Medicine & Imaging, which range from pediatric X-rays to knee MRIs. Still others include the Medical Information Mart for Intensive Care (MIMIC) and the Alzheimer's Disease Neuroimaging Initiative. The popular radiology website Radiopaedia maintains a long list of available datasets. RSNA 2020 had dedicated lectures on how institutions can contribute data to publicly available datasets, including how to navigate legal and regulatory concerns. The wider availability of large datasets should improve the quality of algorithms as researchers all over the world work on these problems.

In particular, the COVID-19 pandemic has spurred a collaborative initiative to enable machine learning innovation on this topic: the Medical Imaging and Data Resource Center (MIDRC). This initiative involves over 20 institutions in the U.S., along with organizations such as the RSNA, ACR, and AAPM, with 60,000 COVID-19 studies in the first year. The goal is to enable research that would be limited if confined to a single institution, with the ultimate aim of translating these tools into direct patient benefit. There are future plans to expand MIDRC to other disease processes.

Commercial machine learning applications have been steadily emerging. I learned that, as of 2019, over 100 commercial machine learning algorithms had received regulatory approval, a large number of them in chest imaging and neuroimaging. For instance, numerous algorithms have been applied to chest X-rays, which, as 2D images, fit easily into standard machine learning frameworks. Other applications, such as abdominal imaging, have seen comparatively little activity because of greater anatomic variation. Commercial applications have also favored certain modalities, particularly X-rays and CT scans, over others such as ultrasound, where operator dependence and heterogeneous image acquisition make development more difficult. Although challenging, these applications offer new opportunities.
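To illustrate why 2D radiographs slot so readily into standard frameworks, here is a hedged sketch of fine-tuning an ImageNet-pretrained ResNet on a hypothetical two-class chest X-ray task in PyTorch. The label scheme, preprocessing, and hyperparameters are assumptions for illustration, not any vendor's pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Illustrative sketch: adapt a pretrained 2D image classifier to a
# hypothetical two-class chest radiograph task (e.g., finding vs. no finding).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # replace the 1000-class head

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over a DataLoader yielding (image batch, label batch)."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is that almost nothing radiology-specific is required here; the same recipe works for any 2D image, which is part of why chest X-ray products appeared early. Volumetric CT or MRI data, and operator-dependent ultrasound, demand considerably more adaptation.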

There has been increasing awareness of the challenges involved in developing AI algorithms. Techniques such as deep learning thrive on large and diverse datasets, which can be hard to assemble given the legal, regulatory, and political hurdles associated with sharing medical data. Models trained on limited or non-representative data often fail to generalize to new clinical scenarios, such as new patient populations, institutions, imaging technology, or imaging parameters. Another major challenge is the time it takes to annotate a large clinical dataset, which may contain tens of thousands of cases or more. Valuable information can be extracted from radiology reports, but these labels are often noisy. Related to the problem of annotation is the heterogeneity in labels when different readers are involved; there is, for instance, significant variation in chest radiograph interpretation. It would be ideal to have multiple annotations reconciled through a consensus process, but this is even more time-consuming. Finally, algorithms often take shortcuts to reach their conclusions: a pneumothorax detector may learn to flag chest tubes, which are placed to treat a pneumothorax, rather than the finding itself.
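As a small illustration of the label-heterogeneity problem, the sketch below uses scikit-learn to measure chance-corrected agreement between hypothetical readers and to form a simple majority-vote consensus label. The reader annotations are made up purely to show the workflow.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical example: three readers label 6 chest radiographs
# (1 = finding present, 0 = absent). Values are invented for illustration.
reader_a = np.array([1, 0, 1, 1, 0, 1])
reader_b = np.array([1, 0, 0, 1, 0, 1])
reader_c = np.array([0, 0, 1, 1, 1, 1])

# Pairwise agreement beyond chance (Cohen's kappa).
print("kappa A vs B:", cohen_kappa_score(reader_a, reader_b))
print("kappa A vs C:", cohen_kappa_score(reader_a, reader_c))

# Simple consensus label: majority vote across the three readers.
votes = np.stack([reader_a, reader_b, reader_c])
consensus = (votes.sum(axis=0) >= 2).astype(int)
print("consensus labels:", consensus)
```

Low kappa values flag cases where readers genuinely disagree, which is exactly where single-reader labels mislead an algorithm and where consensus (at additional cost in reader time) pays off.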

One topic that received increased attention this year was the ethics of AI. One major problem is that if certain populations are not represented in the training sets used to develop an algorithm, it will probably not perform as well on those populations. This particularly affects people from disadvantaged communities, who are often underrepresented in research studies. Another problem relates to the nature of AI itself: algorithms are trained to learn patterns, so if the data provided are biased in any way, the algorithms will simply reinforce that bias. Encouragingly, Prof. Barzilay and Prof. Lehman, from MIT and MGH respectively, presented their research on deep learning for mammography, which was equally accurate for Caucasian, Black, and Asian women, a significant improvement over previous models.
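One practical way to surface this kind of bias is to report performance separately for each subgroup in a held-out validation set rather than as a single aggregate number. The sketch below computes per-group AUC with scikit-learn; the groups, labels, and prediction scores are entirely hypothetical and chosen only to show the workflow, not taken from the presented study.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical fairness audit on a held-out validation set.
# "score" is the model's predicted probability of disease.
val = pd.DataFrame({
    "label": [1, 0, 1, 0, 1, 0, 1, 0],
    "score": [0.9, 0.2, 0.7, 0.4, 0.8, 0.6, 0.3, 0.5],
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Discrimination (AUC) reported per subgroup instead of overall.
for group, rows in val.groupby("group"):
    auc = roc_auc_score(rows["label"], rows["score"])
    print(f"Group {group}: AUC = {auc:.2f}")
```

In this toy example the model discriminates well for Group A and barely better than chance for Group B, a gap an overall AUC would hide. Reporting such subgroup metrics is a minimal first step before deploying an algorithm across diverse patient populations.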

Overall, RSNA 2020 was a fascinating showcase of both the promise of AI and its challenges. I look forward to seeing how this field evolves at future RSNA meetings, hopefully once again attending in person.


