
Sagar Saxena

Student

Project Title

Post-hoc methods of explainability and interpretability of convolutional and recurrent neural networks

Project Description

In recent years, the exponential growth in computational power has led to the development and deployment of many machine learning models for tasks in Computer Vision, Natural Language Processing, and other fields. While AI techniques are transforming a myriad of sectors of human life for the better, one fundamental problem, if left unaddressed, can be detrimental with far-reaching consequences: the "black-box" nature of many sophisticated, state-of-the-art (SOTA) AI models, which breeds skepticism, distrust, and reluctance to accept the predictions they generate. It is human nature to reason about and validate decisions rather than accept output generated by a black box at face value. Predictions coupled with explanations are easier to understand and accept, and better support decision-making. Thus, to win users' trust and improve the acceptance of such complex models, it is essential to make them more transparent and interpretable to the end user.

This research addresses a sine qua non of AI, namely explainability and interpretability, to make AI trustworthy and reliable across practical domains such as medicine, and to act as a catalyst for further progress and development in the field. It aims to explain the functioning of models learned with Convolutional Neural Networks (CNNs), with Recurrent Neural Networks (RNNs), and with hybrid CNN/RNN architectures. Various approaches to explainable AI have been explored in the past, including visual representations, symbolic reasoning, causal inference, rule-based systems, and fuzzy-inference systems, to name a few. This research focuses on post-hoc explainability methods, in which the model architecture is left unperturbed while the model's predictions are explained using propositional rules.
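To make the post-hoc setting concrete, the sketch below fits an interpretable surrogate (a shallow decision tree) to the predictions of an already-trained network and reads its branches off as propositional rules; the original model is never modified. The dataset, network, and tree depth are illustrative assumptions for the sketch, not the method developed in this project.

```python
# A minimal sketch of post-hoc rule extraction via a surrogate model.
# Assumptions: a small MLP stands in for the trained "black-box" network,
# and the breast-cancer dataset stands in for the task data.
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# Stand-in for the opaque model whose predictions need explaining.
black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0).fit(X, y)

# Fit an interpretable surrogate to the black box's *predictions*,
# leaving the original model untouched (the post-hoc property).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Each root-to-leaf path of the tree is a propositional rule, e.g.
# "IF feature_22 <= 105.2 AND feature_27 <= 0.14 THEN class 1".
print(export_text(surrogate))
```

A surrogate of this kind trades some fidelity to the black box for rules a human can inspect; how faithfully the extracted rules track the network's decisions is itself a central question for post-hoc methods.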