If we have two different explanations from the same machine learning algorithm, or from two different machine learning algorithms, which explanation is better?
Can we quantitatively compare the usefulness of explanations by linking them to concrete tasks, so that one explanation can be judged better than another because it helps solve the task better? For example, can a given explanation help a human or a machine label a given set of examples more accurately or more quickly? How do we objectively compare explanations for given tasks, and what are good ways to compute and compare their usefulness?
This project focuses on building supervised machine learning models for sequence and/or time series classification, and on developing methods for generating explanations and assessing their usefulness in different applications, for example in the sports science or smart agriculture domains. We will start from deep learning methods and post-hoc techniques that aim to explain black-box models, such as CAM (Class Activation Maps), and compare these to state-of-the-art linear models (which are intrinsically easier to explain) and their associated explanations. Applied to time series classification, such techniques aim to highlight the parts of the signal that the classifier relies on when reaching a decision, the so-called discriminative parts of the signal.
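For a network ending in global average pooling, a CAM for a univariate time series is simply a weighted sum of the last convolutional layer's feature maps, using the output-layer weights of the class of interest. The following is a minimal sketch of that computation; the shapes, the random activations, and the variable names (`feature_maps`, `class_weights`) are illustrative assumptions, not tied to any particular library or dataset.

```python
import numpy as np

# Hypothetical shapes for illustration: a 1D CNN's last conv layer
# yields K feature maps of length T for one input series; the
# classifier after global average pooling has one weight per
# feature map and class.
rng = np.random.default_rng(0)
K, T, n_classes = 8, 100, 3

feature_maps = rng.standard_normal((K, T))           # A_k(t)
class_weights = rng.standard_normal((n_classes, K))  # w_k^c

def cam(feature_maps, class_weights, target_class):
    """Class Activation Map for a 1D signal: the feature maps
    combined with the output-layer weights of target_class."""
    w = class_weights[target_class]   # shape (K,)
    return w @ feature_maps           # shape (T,)

heatmap = cam(feature_maps, class_weights, target_class=1)
# Time steps with the largest values are the "discriminative
# parts" of the series for that class.
top_steps = np.argsort(heatmap)[-5:]
```

In practice the feature maps and weights would be read out of a trained network rather than sampled at random; the per-timestep heatmap can then be overlaid on the raw signal for inspection.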
Can we use these highlights/explanations to achieve higher classification accuracy with a second-stage classifier or with a human in the loop? Can we improve the robustness of labelling, and do all of this faster, thus allowing us to quantify and compare the usefulness of different explanations?
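One simple way to operationalise this comparison is to score an explanation by how well a second-stage classifier performs when it sees only the time steps the explanation highlights. The sketch below is purely illustrative: the synthetic data, the nearest-centroid classifier, and the top-q masking rule are all assumptions standing in for whatever models and datasets the project actually uses.

```python
import numpy as np

# Synthetic two-class data: class 1 differs from class 0 only on
# time steps 10..14, so a "good" explanation should point there.
rng = np.random.default_rng(0)
n, T = 60, 40
X = rng.standard_normal((n, T))
y = rng.integers(0, 2, n)
X[y == 1, 10:15] += 2.0

def nearest_centroid_acc(X, y):
    """Training accuracy of a nearest-centroid classifier."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) <
            np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def usefulness(explanation, q=0.2):
    """Accuracy when only the top-q highlighted steps are kept."""
    k = max(1, int(q * T))
    keep = np.argsort(explanation)[-k:]
    mask = np.zeros(T)
    mask[keep] = 1.0
    return nearest_centroid_acc(X * mask, y)

good = np.zeros(T)
good[10:15] = 1.0        # highlights the truly discriminative region
bad = rng.random(T)      # arbitrary highlights
# A more useful explanation should yield higher masked accuracy,
# so usefulness(good) is expected to exceed usefulness(bad) here.
```

The same scoring idea transfers to human labelling experiments: replace the second-stage classifier with annotators shown the highlighted regions, and measure accuracy and time per label.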