Artificial intelligence in radiology: Will we need as many radiologists in the future?
Recent advances in artificial intelligence have led to speculation that AI might one day replace human radiologists. Researchers have developed deep learning neural networks that can identify pathologies such as bone fractures and potentially cancerous lesions in radiological images, in some cases more reliably than an average radiologist. For the most part, though, the best systems are currently on par with human performance and are used only in research settings.
That said, deep learning is rapidly advancing, and it is a marked improvement over previous approaches to medical image analysis. This probably does portend a future in which AI plays an important role in radiology. Radiological practice would certainly benefit from systems that can read and interpret multiple images quickly: the number of images has grown much faster over the last decade than the number of radiologists, and a single patient's disease or injury can generate hundreds of images. Imaging and radiology are expensive, and any solution that reduces human labor, lowers costs, and improves diagnostic accuracy would benefit patients and physicians alike.
What does this mean for radiologists? Some medical students have reportedly decided not to specialize in radiology because they fear the job will cease to exist. We’re confident, however, that the great majority of radiologists will continue to have jobs in the decades to come — jobs that will be altered and enhanced by AI. One of us (Keith) is a radiologist and artificial intelligence researcher, and the other (Thomas) has researched the impact of AI on jobs for several years. We see several reasons why radiologists won’t be disappearing from the labor force, which we describe below. We also believe that several of these factors will inhibit the large-scale automation of other jobs supposedly threatened by AI.
First, radiologists do more than read and interpret images. Like most current AI systems, radiology AI performs single tasks (so-called narrow AI). The deep learning models we mentioned are trained for specific image recognition tasks, such as nodule detection on chest CT or hemorrhage detection on brain MRI. But thousands of such narrow detection tasks would be necessary to identify all potential findings in medical images, and only a few of these can be done by AI today. Furthermore, image interpretation is only one set of tasks that radiologists perform. They also consult with other physicians on diagnosis and treatment, treat diseases (for example, delivering local ablative therapies), perform image-guided medical interventions (interventional radiology), define the technical parameters of imaging examinations tailored to the patient's condition, relate findings from images to other medical records and test results, and discuss procedures and results with patients, among many other activities. Even in the unlikely event that AI took over image reading and interpretation, most radiologists could redirect their focus to these other essential activities.
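To make the narrowness concrete for technically minded readers, here is a minimal sketch, in PyTorch, of what a single-task detector looks like. The toy architecture, input size, and task names are our illustrative assumptions, not any vendor's actual product:

```python
# Minimal sketch of a single-task ("narrow AI") image classifier.
# Everything here is illustrative: a real nodule detector would use a far
# larger architecture and curated clinical data.
import torch
import torch.nn as nn

class SingleFindingClassifier(nn.Module):
    """Answers exactly one question about one image, e.g. 'nodule present?'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # one logit: probability of this one finding

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))

# One model per narrow task; thousands of findings would require thousands
# of separately trained models (or heads), each with its own labeled data.
nodule_model = SingleFindingClassifier()      # chest CT: nodule present?
hemorrhage_model = SingleFindingClassifier()  # brain MRI: hemorrhage present?

scan = torch.randn(1, 1, 256, 256)  # stand-in for a preprocessed image slice
print(f"P(nodule) = {nodule_model(scan).item():.2f}")  # untrained, so ~chance
```

Each such model answers exactly one yes/no question; nothing in it generalizes to the thousands of other findings a radiologist is expected to notice.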
Second, clinical processes for employing AI-based image analysis are a long way from being ready for daily use. Dreyer's investigations with the Data Science Institute at the American College of Radiology (ACR) found that different imaging technology vendors and deep learning algorithms focus on different aspects of the use cases they address. Even among FDA-approved deep learning nodule detectors, the foci differed: the probability of a lesion, the probability of cancer, a nodule's features, or its location. These distinct foci make it very difficult to embed deep learning systems into current clinical practice. The ACR is therefore beginning to define the inputs and outputs for vendors of deep learning software. The FDA requires, and the ACR provides methodologies for, vendors to verify the effectiveness and value of their algorithms before and after they are taken to market. At the same time, the ACR is working toward a comprehensive collection of use cases, organized by body part, modality, and disease type, for which the clinical process, image requirements, and explanation of outputs are all well defined and consistent with current and future clinical practice. Creating such a comprehensive collection will, of course, take many years, further expanding the role of radiologists in an AI world.
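As an illustration of what such input/output standardization might look like, here is a hypothetical sketch in Python. The schema and field names are our assumptions for exposition, not the ACR's actual specification:

```python
# Hypothetical sketch of a standardized algorithm output, in the spirit of the
# ACR effort described above. The field names are our assumptions, not the
# ACR's actual specification.
from dataclasses import dataclass, field

@dataclass
class NoduleFinding:
    """One detector's answer for one candidate lesion, normalized so that
    outputs from different vendors can be compared and routed the same way."""
    lesion_probability: float             # P(a lesion is present at this site)
    malignancy_probability: float | None  # P(cancer), if the algorithm reports it
    location: tuple[int, int, int]        # voxel coordinates within the study
    features: dict[str, float] = field(default_factory=dict)  # e.g. diameter_mm

# Two vendors that natively emphasize different foci can still be mapped
# into the one shared structure:
vendor_a = NoduleFinding(lesion_probability=0.91, malignancy_probability=None,
                         location=(120, 88, 34), features={"diameter_mm": 7.5})
vendor_b = NoduleFinding(lesion_probability=0.80, malignancy_probability=0.12,
                         location=(119, 90, 33))
print(vendor_a, vendor_b, sep="\n")
```

A shared structure of this kind is what would let a hospital swap one vendor's detector for another's without rebuilding the surrounding clinical workflow.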
Third, deep learning algorithms for image recognition must be trained on "labeled data." In radiology, this means images from patients who have received a definitive diagnosis of cancer, a broken bone, or another pathology. Where deep learning image recognition has achieved high levels of success in other domains, it has been trained on millions of labeled images, such as cat photos on the internet. But there is no aggregated repository of radiology images, labeled or otherwise. The images are owned by vendors, hospitals and physicians, imaging facilities, and patients, and collecting and labeling them to accumulate a critical mass for AI training will be challenging and time-consuming.
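For readers unfamiliar with supervised learning, the following minimal sketch (synthetic data, toy model, all illustrative) shows why labels are the bottleneck: every training step consumes image-label pairs, and in radiology each label stands in for a definitive diagnosis that a clinician had to establish and record:

```python
# Minimal sketch of supervised training on labeled data. The data and model
# are synthetic stand-ins: in practice each image would come from a patient
# study and each label from a confirmed diagnosis (fracture / no fracture).
import torch
import torch.nn as nn

images = torch.randn(64, 1, 64, 64)             # stand-ins for patient images
labels = torch.randint(0, 2, (64, 1)).float()   # stand-ins for diagnoses

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 64, 32), nn.ReLU(),
    nn.Linear(32, 1),  # one logit per image
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # no labels, no loss, no learning
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Every line of that loop depends on the `labels` tensor; without a large, accurately labeled image repository, there is simply nothing for the algorithm to learn from.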
Finally, just as autonomous vehicles will clearly require changes in automobile regulation and insurance, automated image analysis will require changes in medical regulation and health insurance before it can take off. Who is responsible, for example, if a machine misdiagnoses a cancer case: the physician, the hospital, the imaging technology vendor, or the data scientist who created the algorithm? And will health care payers reimburse an AI diagnosis as a single set of eyes, or only as a second set working alongside a human radiologist? All these issues need to be worked out, and progress on them is unlikely to happen as fast as deep learning research does in the lab. AI radiology machines may need to become substantially better than human radiologists, not just as good, to drive the regulatory and reimbursement changes required.