Interpretability and Visualization of Deep Learning Models in Magnetic Resonance Imaging and Radiology
Deep learning approaches are achieving state-of-the-art results in Magnetic Resonance Imaging (MRI) and Radiology (Computed Tomography, X-ray) on downstream tasks such as image classification and image segmentation. However, they still suffer from a lack of human interpretability, which is critical for understanding how the methods operate, enabling clinical translation and gaining clinician trust. It is therefore crucial to provide a clear and interpretable rationale for model decisions.
To understand model predictions, several algorithms can be applied to a fully trained deep learning model to produce heatmaps highlighting the portions of the image that contributed to its predictions. These algorithms include, but are not limited to, Guided Backpropagation, Class Activation Mapping (CAM), Grad-CAM, Grad-CAM++ and Score-CAM.
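As an illustration, the following is a minimal Grad-CAM sketch rather than a definitive implementation. It assumes PyTorch and torchvision are available, uses a pretrained ResNet-18 as a stand-in for any trained classifier, takes `layer4` as the target convolutional block, and feeds a random tensor as a placeholder for a real scan.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a Grad-CAM heatmap (H x W, values in [0, 1]) for a single image."""
    activations, gradients = [], []
    # Capture the target layer's feature maps and the gradients flowing back into them.
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    logits = model(image.unsqueeze(0))               # shape: 1 x num_classes
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()      # explain the top prediction by default
    model.zero_grad()
    logits[0, class_idx].backward()
    fwd.remove()
    bwd.remove()

    acts, grads = activations[0], gradients[0]       # shape: 1 x C x h x w
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients per channel
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze()
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    return cam.detach()

if __name__ == "__main__":
    model = models.resnet18(weights="IMAGENET1K_V1")
    scan = torch.rand(3, 224, 224)                   # placeholder for a real MRI/CT/X-ray slice
    heatmap = grad_cam(model, scan, model.layer4)
    print(heatmap.shape)                             # torch.Size([224, 224])
```

The same hook-based pattern extends to Grad-CAM++ (which weights the gradients differently) and to Score-CAM (which derives channel weights from forward passes rather than gradients).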
The recent COVID-19 pandemic made clear that rapid clinical translation of Artificial Intelligence (AI) systems can be critically important for assisting treatment-related decisions and improving patient outcomes. There has recently been particular interest in applying the above algorithms to interpret chest X-rays of COVID-19 patients and to assist clinicians in making the correct diagnosis.
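For visual inspection, a heatmap produced this way can be overlaid on the corresponding chest X-ray. The sketch below is illustrative only and makes several assumptions: the `grad_cam` helper and `model` from the previous snippet, matplotlib and Pillow installed, and a hypothetical local file `chest_xray.png` standing in for an image from a public COVID-19 chest X-ray dataset.

```python
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel; replicate to 3 channels
    transforms.ToTensor(),
])

xray = Image.open("chest_xray.png")                # hypothetical path to a dataset image
tensor = preprocess(xray)
heatmap = grad_cam(model, tensor, model.layer4)    # model and grad_cam from the sketch above

plt.imshow(tensor.permute(1, 2, 0).numpy())        # the underlying X-ray
plt.imshow(heatmap.numpy(), cmap="jet", alpha=0.4) # semi-transparent Grad-CAM overlay
plt.axis("off")
plt.title("Grad-CAM overlay on a chest X-ray")
plt.savefig("gradcam_overlay.png", bbox_inches="tight")
```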
We will work with open-source, publicly available MRI, CT and X-ray datasets and will initially investigate the interpretability methods described above.
To summarize, this project is expected to identify the best-performing approaches for providing explainable and interpretable AI output and to discuss their advantages and disadvantages for the MRI and Radiology datasets considered in this study. Further, we expect to provide recommendations for appropriately incorporating these techniques to improve deep learning models and for evaluating their performance.