
Bulat Maksudov

Student

Project Title

Explainable AI for Medical Images

Project Description

Despite the success of deep learning in natural language processing and computer vision, the lack of human interpretability hinders its use in high-stakes decision making. The aim of the proposed research is to tackle the problem of generating deep explanations that rely on multimodal data, where the interpretation of specific features in one modality can support the explanation of features in another modality. The idea is to explore techniques for semantic linking of causal and relational structures extracted from deep representations, in order to identify how they correlate multimodal representations. One possible way of doing this is to leverage functional graphs representing the neural activity within the deep network, and to use probabilistic graphical models, statistical relational learning and/or link prediction to predict and validate semantic connections across modalities. The ultimate goal is not only to explain the outcome of deep learning models, but also to link concepts from one modality to another to generate better explanations. One application scenario that can be investigated to demonstrate the approach is multimodal diagnostics (e.g. triage).

This project aims to explore several research challenges regarding the use of AI for medical imaging and the challenge of transparency and explainability: how can our model provide actionable and clinically significant output? What are the differences between the decision-making process of radiologists and that of medical imaging models? How can we incorporate additional data and knowledge to affect the trust and interpretability of the model's output?
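To make the proposed pipeline more concrete, the following is a minimal sketch of the functional-graph idea, not an implementation from the project itself: it simulates paired activations from a hypothetical imaging branch and text branch (in practice these would come from forward hooks on the two encoders), builds a per-modality functional graph from activation correlations, and scores candidate cross-modal links that a relational learning or link-prediction model would then validate. All sizes, thresholds and the random data are illustrative assumptions.

```python
# Sketch: functional graphs from paired multimodal activations plus candidate
# cross-modal links. Activations are simulated; real ones would come from
# hooks on an image encoder and a report/text encoder run over the same cases.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

n_cases = 200       # paired image/report examples (illustrative)
n_img_units = 32    # hypothetical units tracked in the imaging branch
n_txt_units = 24    # hypothetical units tracked in the text branch

# Stand-in activation matrices (cases x units).
img_act = rng.normal(size=(n_cases, n_img_units))
txt_act = rng.normal(size=(n_cases, n_txt_units))
# Inject a shared latent factor so some cross-modal correlations exist.
shared = rng.normal(size=(n_cases, 1))
img_act[:, :4] += shared
txt_act[:, :3] += shared

def functional_graph(act, prefix, threshold=0.3):
    """Connect units whose activations co-vary strongly across cases."""
    corr = np.corrcoef(act, rowvar=False)
    g = nx.Graph()
    n = act.shape[1]
    g.add_nodes_from(f"{prefix}{i}" for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if abs(corr[i, j]) >= threshold:
                g.add_edge(f"{prefix}{i}", f"{prefix}{j}", weight=abs(corr[i, j]))
    return g

# Per-modality functional graphs, merged into one graph over both branches.
g = nx.union(functional_graph(img_act, "img_"), functional_graph(txt_act, "txt_"))

# Candidate cross-modal links scored by activation correlation; strong links
# are the hypotheses a link-prediction / relational model would then validate.
cross_corr = np.corrcoef(img_act.T, txt_act.T)[:n_img_units, n_img_units:]
for i in range(n_img_units):
    for j in range(n_txt_units):
        if abs(cross_corr[i, j]) >= 0.3:
            g.add_edge(f"img_{i}", f"txt_{j}",
                       weight=abs(cross_corr[i, j]), cross_modal=True)

cross_edges = [(u, v, d["weight"])
               for u, v, d in g.edges(data=True) if d.get("cross_modal")]
print(f"{len(cross_edges)} candidate cross-modal links; strongest:",
      sorted(cross_edges, key=lambda e: -e[2])[:3])
```

In a real study the correlation threshold and the link-scoring rule would be replaced by the probabilistic graphical model or statistical relational learner mentioned above; the sketch only illustrates where cross-modal hypotheses would enter the pipeline.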