Oluwabukola Adegboro

Project Title

Weakly-Supervised Brain Tumour Segmentation in MRI Data Using Explainable Classification Models.

Project Description

Brain cancer is the 10th leading cause of death among adults. Timely diagnosis not only plays a pivotal role in preserving lives but also substantially reduces the financial burden of treatment. Brain tumours are among the most lethal cancers and are typically diagnosed non-invasively with Magnetic Resonance Imaging (MRI). Currently, biopsy is the clinician's first choice for classifying cancerous tissue prior to surgery; when MRI scans are available, an experienced radiologist performs the classification instead. Nevertheless, manual classification and segmentation of brain tumours in images are time-consuming, expensive and, in certain cases, subjective and prone to error [3].

Deep learning, on the other hand, has become an invaluable tool in healthcare, with applications ranging from disease diagnosis to treatment optimization, and it is driving innovation and improved patient care. Nevertheless, the opaque reasoning of such models, known as the black-box problem, makes them difficult to accept and adopt in medical imaging, a field that requires a high degree of transparency and trust. A dedicated branch of Artificial Intelligence (AI) therefore focuses on creating explainable and interpretable models, or on applying post-hoc explainability methods to classifiers, in order to build trust in deep learning models [1].

In recent years, deep learning has shown promise in supporting the diagnosis of brain cancer and assisting clinical decision making in its treatment [5]. However, several problems remain open. The main goal of this project is to develop transparent and comprehensible models for the weakly-supervised segmentation of brain tumours in MRI data. Furthermore, the project seeks to offer recommendations to healthcare professionals on integrating this technology into medical practice.

To address this research problem, we identify three key objectives:
(i) To create deep learning-based models to classify brain tumours in MRI;

(ii) To enhance these deep learning models with explainable modules capable of generating decision heatmaps, thereby applying eXplainable AI (XAI) to the classification pipeline. The first hypothesis is that a classification pipeline extended with a Class Activation Mapping (CAM) algorithm will focus on the tumour area and will therefore localise tumours without the need for time-consuming segmentation labels (an illustrative sketch of one CAM variant is given after this list). To explore this hypothesis, several classical and recent CAM algorithms [4] will be benchmarked on an open-source 2D CE-MRI dataset [2] containing three types of brain tumour: meningioma, glioma, and pituitary tumour. The second hypothesis is that placing a Global Average Pooling layer just before the last convolutional layer of a modified segmentation network [1] will contribute to the explainability of a model trained with classification labels only;

(iii) To propose a weakly-supervised brain tumour segmentation method by applying thresholding techniques to the decision heatmaps obtained (a short thresholding sketch follows the CAM example below).
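
As an illustration of objective (ii), the sketch below shows how one widely used CAM variant, Grad-CAM, could turn a trained classifier into a heatmap generator. It is a minimal PyTorch sketch under stated assumptions rather than the project's actual pipeline: model, image, target_class, and conv_layer are placeholders for a trained CNN classifier, a preprocessed MRI slice shaped (1, C, H, W), the tumour class of interest, and the network's last convolutional layer.

import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, conv_layer):
    """Return an (H, W) heatmap of the regions driving the chosen class score."""
    activations, gradients = [], []

    # Hooks capture the feature maps of the chosen convolutional layer and the
    # gradients of the class score with respect to those feature maps.
    fwd = conv_layer.register_forward_hook(
        lambda module, inputs, output: activations.append(output))
    bwd = conv_layer.register_full_backward_hook(
        lambda module, grad_in, grad_out: gradients.append(grad_out[0]))

    model.eval()
    logits = model(image)                # image: (1, C, H, W) float tensor
    model.zero_grad()
    logits[0, target_class].backward()   # backpropagate only the target class score
    fwd.remove()
    bwd.remove()

    acts, grads = activations[0], gradients[0]        # both shaped (1, K, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalise to [0, 1]
    return cam[0, 0].detach()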
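
Objective (iii) can then be sketched as follows: the normalised heatmap is binarised with a threshold to obtain a weakly-supervised tumour mask, and the Dice score illustrates how such a mask might be compared against a reference segmentation during evaluation. The fixed 0.5 threshold is an assumption; in practice it would be tuned or replaced by an adaptive method such as Otsu's.

import numpy as np

def heatmap_to_mask(heatmap: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarise a heatmap normalised to [0, 1] into a tumour mask."""
    return (heatmap >= threshold).astype(np.uint8)

def dice_score(pred: np.ndarray, reference: np.ndarray) -> float:
    """Overlap between the derived mask and a reference mask (evaluation only)."""
    intersection = np.logical_and(pred, reference).sum()
    return 2.0 * intersection / (pred.sum() + reference.sum() + 1e-8)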