The cost and time required to obtain a magnetic resonance imaging (MRI) scan may be significantly reduced in the future, thanks to the application of artificial intelligence (AI) strategies. MRI scanning is a noninvasive way to see soft-tissue structures deep inside the body, frequently used to detect tumors in the brain or abdomen, or to assess knee and other joint injuries.
New York University (NYU) School of Medicine launched the Center for Advanced Imaging Innovation and Research (CAI2R) in 2014 to develop rapid, comprehensive imaging, with a focus on MRI. It also develops techniques and tools to improve other scanning methods, including computed tomography (CT), a computerized x-ray imaging procedure that obtains cross-sectional images of the body. CAI2R receives support from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of NIH.
CAI2R is directed by Daniel Sodickson, M.D., Ph.D., a professor of Radiology and of Physiology and Neuroscience at NYU School of Medicine. CAI2R’s interdisciplinary team includes radiologists with various specialties, together with imaging physicists, biophysicists, biomedical engineers, electrical engineers, mechanical engineers, mathematicians, and computer scientists.
In August, the NYU center established a collaboration with the Facebook Artificial Intelligence Research (FAIR) group, launched by Facebook in 2013 to work with researchers in academia to advance the state of the art in the rapidly emerging AI field. The imaging project, called fastMRI, will use AI to make MRI scans up to 10 times faster. FAIR will provide access to AI models, metrics, and techniques; CAI2R offers extensive medical imaging expertise and an enormous dataset. The resulting tools and data will be openly accessible to the research community.
“The fastMRI project builds on work conducted by this innovative NIBIB-funded center to perform image reconstruction with deep learning,” said Guoying Liu, Ph.D., director of the NIBIB Program in Magnetic Resonance Imaging and Spectroscopy. “It is exciting that the technology industry is interested in real-world medical imaging problems like this. Facebook brings to bear new thinking and complementary expertise, and advanced computational tools that will accelerate and amplify the impact of the center’s work.”
Sodickson explains that, whereas AI has been explored widely for the automated interpretation of pre-existing images, only more recently has it been used to generate images, and most of this generative work has focused on the creation of plausible fictions, images that look real but are not, such as farm scenes in which the original horses have been converted into convincing-looking zebras. The fastMRI project seeks to use similar techniques for medical imaging in order to create rigorously faithful renderings, but with less data and time than was ever required before.
“In medical imaging, we generally acquire every data point that we think is remotely necessary, because it is so critical that we don’t miss some feature that contributes to an accurate diagnosis, or that helps to guide the therapy of a patient,” Sodickson said. “But we know that we are probably overdoing it.”
He explains that medical images, like almost all images, are compressible using the kinds of computer algorithms that are commonly available on any cell phone. Once all the data have been gathered, the resulting image can be analyzed, and unimportant data can be discarded. This raises the question of whether some of that time-consuming data acquisition can be skipped in the first place. Indeed, research in the past decade suggests that some degree of pre-compression is possible. When imagers gather just a fraction of the data points in the right way, a full image can be reconstructed without loss of key information. But the process is not perfect.
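The idea of skipping data points, and the artifacts that naive reconstruction produces, can be illustrated with a small numpy sketch. This is a toy example, not the fastMRI method: MRI scanners sample an image's spatial-frequency ("k-space") data, so dropping rows of k-space stands in for a shorter scan, and a plain inverse Fourier transform stands in for a naive reconstruction.

```python
import numpy as np

# A simple synthetic "anatomy": a bright square on a dark background.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Full acquisition: the 2-D Fourier transform of the image (k-space).
kspace = np.fft.fft2(image)

# Under-sampling: keep only every 4th row of k-space,
# a stand-in for acquiring the scan in a quarter of the time.
mask = np.zeros_like(kspace)
mask[::4, :] = 1.0
undersampled = kspace * mask

# Naive reconstruction: inverse FFT of the zero-filled data.
recon = np.abs(np.fft.ifft2(undersampled))

# Periodic under-sampling folds shifted copies of the image on top of
# each other (aliasing) -- the "characteristic artifacts" that smarter
# reconstruction methods must avoid. The error is large:
error = np.abs(4.0 * recon - image).max()  # factor 4 compensates for dropped rows
print(f"max reconstruction error: {error:.3f}")
```

With the full k-space, the inverse FFT recovers the image exactly; with every fourth row, shifted replicas of the square overlap and the naive reconstruction is badly wrong, which is why careless under-sampling "smooths over" or corrupts features.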
“Unless we are very careful, we can smooth over certain features or get characteristic artifacts,” Sodickson said. “That’s where AI comes to the rescue. AI can learn complicated functions that succinctly represent the typical relationships between various features in an image.” Just as two points uniquely characterize a line, he explained, these relationships can then be ascertained with a comparatively small number of data points. “We then don’t need to waste a lot of time gathering data to characterize common background features; we can focus on just what we need to capture patient-specific information,” he said.
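Sodickson's line analogy can be made concrete with a short numpy sketch. This is only an analogy for the learned models he describes: once we know the family a signal belongs to (here, literally a line), two well-chosen samples pin it down exactly, whereas sample-by-sample acquisition would need every point.

```python
import numpy as np

# An unknown "signal" that happens to follow a simple model: f(x) = 2.5x + 1.0
def signal(x):
    return 2.5 * x + 1.0

# Exhaustive, point-by-point acquisition: 100 samples.
x_dense = np.arange(100)
dense = signal(x_dense)

# Knowing the model family ("it's a line"), two samples suffice.
x_sampled = np.array([10.0, 60.0])
y_sampled = signal(x_sampled)
slope, intercept = np.polyfit(x_sampled, y_sampled, deg=1)

# Reconstruct all 100 values from just 2 measurements.
reconstructed = slope * x_dense + intercept
print(np.allclose(reconstructed, dense))  # True
```

An AI reconstruction model plays the role of the "line" here, but with far richer learned relationships among image features, so that common background structure can be filled in from few measurements and acquisition time is spent on patient-specific information.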
NYU School of Medicine will provide the images from 10,000 clinical cases that will be used by the fastMRI project, amounting to about 3 million MR images of the knee, brain, and liver. Whereas typical data-driven AI approaches require large data sets to train the neural networks robustly, the fastMRI project will investigate different approaches to reconstruction of under-sampled data, some of which require smaller data sets.
“We have decided that we want to try lots of different techniques,” Sodickson said. “…the ones that are completely data-driven and require hundreds of thousands of images, and the ones that are more physics-driven which may only require 10 images. But with this whole data set, we can try a variety of strategies and compare them competitively against each other.” He added that a data set this size will enable researchers around the country and the world to experiment with a variety of approaches, including the most data-intensive ones.
As for the specter that AI will displace humans in medical imaging, Sodickson suggests that this fear is abating as people see the benefit of human-machine collaboration. “Everybody knows that there are tasks that radiologists absolutely hate to do—repetitive tasks that can likely be handled by machines; and there are other, more integrative tasks in which machines can assist but humans are still going to play a major role for a long time to come,” he said.
CAI2R is supported by a grant from NIBIB (EB017183).