
Courtney Ford

Student

Project Title

Explaining With Cases: Computational & Psychological Explorations in Explainable AI (XAI).

Project Description

Artificial Intelligence (AI) systems are playing an increasing role in decision-making tasks across a variety of industry sectors and government bodies. As such, interactions between human users and AI systems are becoming much more commonplace, and there is a pressing need to understand how people can come to understand these systems and trust their abilities on diverse and critical tasks. These developments raise two fundamental problems: (i) how can we explain the black-box decision processes of these AI systems, and (ii) what type of explanation strategy will work best for people interacting with these systems?

Recently, the field of eXplainable AI (XAI) has emerged as a major research effort, underpinned by its own DARPA program (Gunning, 2017 DARPA Report), to find answers to these questions. For example, Kenny and Keane (2019, IJCAI-19) have proposed a Twin Systems approach to explain the decisions, classifications and predictions of deep-learning systems by mapping the feature weights of a black-box AI into a much more interpretable case-based reasoning (CBR) system to find explanatory cases. This type of post-hoc explanation-by-example has a long history in the CBR literature but is marked by a paucity of user studies; that is, it is not at all clear whether people find these case-based explanations useful.
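To make the Twin Systems idea concrete, the sketch below (in Python with NumPy) shows how feature weights extracted from a black-box model might drive a weighted nearest-neighbour retrieval of explanatory cases from the training data. This is a minimal illustration under stated assumptions: the gradient-times-input weighting, the `model_grad_fn` callable and the function names are hypothetical stand-ins, not the actual Twin Systems implementation.

```python
import numpy as np

def feature_weights(model_grad_fn, x):
    # Approximate the black-box's per-feature contributions for input x.
    # `model_grad_fn` is a hypothetical callable returning d(output)/d(input).
    return model_grad_fn(x) * x  # gradient * input: one common saliency heuristic

def twin_retrieve(x, weights, case_base, k=3):
    # The interpretable "twin": weighted k-nearest-neighbour retrieval over the
    # training cases, with feature distances scaled by the black-box's weights.
    w = np.abs(weights)
    dists = np.sqrt((((case_base - x) ** 2) * w).sum(axis=1))
    return np.argsort(dists)[:k]  # indices of the k most similar explanatory cases

# Toy usage with random tabular data, purely for illustration.
rng = np.random.default_rng(0)
cases = rng.normal(size=(100, 5))
query = rng.normal(size=5)
grad_fn = lambda x: np.ones_like(x)  # stand-in for a real gradient function
nearest = twin_retrieve(query, feature_weights(grad_fn, query), cases)
```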

The proposed research will explore both computationally and psychologically the most effective ways in which cases can be used to explain black-box AI systems. Computationally, new algorithmic methods for finding different types of cases will be developed (e.g., to find counterfactual, semi-factual and factual cases) and explored in the context of the Twin Systems approach involving the three main data types used in deep learning systems (i.e., images, text and tabular data). Psychologically, user studies will be performed to evaluate the explanatory validity of case-based explanations and to identify the optimal forms these might take to aid human users.
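For illustration, the sketch below shows one simple way factual, counterfactual and semi-factual cases could be retrieved from a tabular case base using nearest-neighbour heuristics. The specific choices here (Euclidean distance, and a semi-factual picked as the same-outcome case nearest to the counterfactual) are assumptions made for the example, not the algorithmic methods the project will develop.

```python
import numpy as np

def case_explanations(x, pred_class, case_base, case_labels):
    # Nearest-neighbour heuristics for the three explanation types.
    dists = np.linalg.norm(case_base - x, axis=1)
    same = np.where(case_labels == pred_class)[0]
    diff = np.where(case_labels != pred_class)[0]
    factual = same[np.argmin(dists[same])]         # closest case with the same outcome
    counterfactual = diff[np.argmin(dists[diff])]  # closest case with a different outcome
    # Semi-factual (rough heuristic): the same-outcome case lying nearest to the
    # counterfactual, i.e. "even with features this different, the decision holds".
    semi_dists = np.linalg.norm(case_base[same] - case_base[counterfactual], axis=1)
    semi_factual = same[np.argmin(semi_dists)]
    return factual, counterfactual, semi_factual
```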

The outcomes of the work will be (i) a generic computational framework that can be applied to any decision-making AI system, and (ii) definitive knowledge about which cases should be deployed, and how, to accurately explain the decision processes of such AI systems. Together, these outcomes will provide a generic framework and solution to the XAI problem in the context of post-hoc explanation-by-example, helping users garner a better understanding of AI systems and making them more satisfied with, and trusting of, their decision-making processes.