
Anam Hashmi

Student

Project Title

Interpretability and Visualization of Deep Learning Models in Magnetic Resonance Imaging and Radiology

Project Description

Deep learning approaches achieve state-of-the-art results in Magnetic Resonance Imaging (MRI) and Radiology (Computed Tomography, X-ray) on downstream tasks such as image classification and image segmentation. However, they still lack the human interpretability that is critical for understanding how the methods operate, enabling clinical translation and gaining clinician trust. It is therefore crucial to provide a clear and interpretable rationale for model decisions.

In order to understand model predictions, there are algorithms that can be applied to a fully trained deep learning model to produce heatmaps highlighting the portions of the image that contributed to the model's predictions. These algorithms include (but are not limited to) Guided Backpropagation, Class Activation Mapping (CAM), Grad-CAM, Grad-CAM++ and Score-CAM.
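As an illustration of how such heatmap methods are typically applied, the following is a minimal Grad-CAM sketch in PyTorch. It is only an example of the general technique; the `grad_cam` helper, the ResNet-18 backbone and the random input are placeholders for this sketch and not the project's settled implementation, which may instead rely on an off-the-shelf library.

```python
# Minimal Grad-CAM sketch (illustrative only).
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_layer, class_idx=None):
    """Return a [0, 1]-normalised Grad-CAM heatmap for `class_idx` at `target_layer`."""
    store = {}

    def hook(_module, _inputs, output):
        output.retain_grad()          # keep the gradient of this intermediate activation
        store["act"] = output

    handle = target_layer.register_forward_hook(hook)
    logits = model(image)             # image: (1, 3, H, W)
    handle.remove()

    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = store["act"]                                   # (1, C, h, w)
    weights = acts.grad.mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()

# Example: ImageNet-pretrained ResNet-18 with its last convolutional block as the target.
# Pointing at an earlier block (e.g. model.layer4[-1] -> model.layer2[-1]) yields
# higher-resolution but noisier maps, which relates to item 5 in the list below.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
heatmap = grad_cam(model, torch.randn(1, 3, 224, 224), model.layer4[-1])
```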

The recent COVID-19 pandemic made it clear that rapid clinical translation of Artificial Intelligence (AI) systems can be critically important for assisting treatment-related decisions and improving patient outcomes. Recently, there has been particular interest in applying the above algorithms to interpret COVID-19 chest X-rays and assist clinicians in making the correct diagnosis.

We will work with publicly available MRI, CT and X-ray datasets and will initially investigate:

  1. How different Convolutional Neural Network (CNN) architectures contribute to the generation of reliable heatmaps. Here, we will focus on both custom CNNs and established pre-trained, transfer-learning-based architectures such as VGG, ResNet and Inception.
  2. Weight initialization in CNNs has also been shown (and this is in line with our experience) to be important to the performance of interpretability algorithms, so we will investigate different weight initialization protocols. In particular, we will attempt to answer why initialization with ImageNet weights, which are unrelated to the medical imaging domain, leads to surprisingly strong performance of some interpretability algorithms on medical imaging datasets (the sketch after this list contrasts the two initializations).
  3. Most works in the literature focus on interpretability methods for classification models, although a limited number also examine the interpretability of segmentation networks. We will examine both downstream tasks, and for this purpose we will work with the 2D T1-weighted CE-MRI dataset compiled by Cheng et al., as this dataset contains both classification and segmentation labels.
  4. This study will therefore also consider weakly supervised image segmentation of the Cheng et al. dataset using class-specific heatmaps, possibly Conditional Random Fields (CRF), and thresholding techniques (see the thresholding sketch after this list).
  5. Many interpretability algorithms are applied to the last convolutional layer of CNNs and therefore the resulting heatmaps are coarse. We will investigate how to leverage the intermediate layers for the generation of finer heatmaps.
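For items 1 and 2, the sketch below shows how the same backbone can be instantiated either with ImageNet weights or from random initialization. The ResNet-18 choice, the two-class head and the `build_backbone` helper are assumptions made for illustration, not settled design decisions of the project.

```python
# Hedged sketch: identical architecture, ImageNet vs. random initialization
# (assumes a recent torchvision with the `weights=` API).
import torch.nn as nn
from torchvision import models

def build_backbone(num_classes: int = 2, imagenet_init: bool = True) -> nn.Module:
    weights = models.ResNet18_Weights.DEFAULT if imagenet_init else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new task-specific head
    return model

pretrained = build_backbone(imagenet_init=True)   # ImageNet initialization
scratch = build_backbone(imagenet_init=False)     # random (Kaiming) initialization
```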
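For item 4, a minimal sketch of the thresholding step: converting a class-specific heatmap into a binary mask and scoring it against a reference segmentation with the Dice coefficient. The threshold value, the helper names and the random stand-in data are assumptions for this example, and the optional CRF refinement is omitted.

```python
# Illustrative thresholding of a normalised heatmap into a weakly supervised mask.
import numpy as np

def heatmap_to_mask(heatmap: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarise a heatmap normalised to [0, 1] (e.g. the Grad-CAM output above)."""
    return (heatmap >= threshold).astype(np.uint8)

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between a predicted mask and a ground-truth mask."""
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Example with random stand-in data in place of a CE-MRI slice and its tumour mask.
rng = np.random.default_rng(0)
heatmap = rng.random((224, 224))
gt_mask = (rng.random((224, 224)) > 0.7).astype(np.uint8)
print(dice(heatmap_to_mask(heatmap, 0.6), gt_mask))
```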

To summarize, this project is expected to identify the best-performing approaches for providing explainable and interpretable AI output and to discuss their advantages and disadvantages for the MRI and Radiology datasets considered in this study. Further, we expect to provide recommendations for appropriately incorporating these techniques to improve deep learning models and to evaluate their performance.