Explainable ML in Corporate Credit Ratings
A credit rating evaluates the creditworthiness of an entity that seeks to borrow money with regard to a particular financial obligation (Investopedia, 2020). For corporates and governments, ratings are normally supplied by a credit rating agency such as Fitch, Standard & Poor's, or Moody's, and the rating determines the cost of borrowing for the entity issuing a financial instrument. However, rating agencies face a significant conflict of interest in evaluating creditworthiness, since it is the rated entity that pays for the rating. The consequences of this conflict became evident in the financial crisis of 2007-2008, when the highest ratings were assigned to financial products of significantly poorer quality (Stirier, 2008). Credit ratings are also expensive to produce because of the amount of labour involved, and an inaccurate or out-of-date rating can leave a company, especially a small to medium-sized enterprise (SME), facing a higher cost of borrowing.

Thus, in my PhD I would like to design a method, based on explainable machine learning (explainable ML), that accurately evaluates an entity's credit rating. A well-performing model would address the rating agencies' conflict of interest by providing an unbiased evaluation of an entity's creditworthiness. The academic literature has explored both statistical modelling and machine learning methods, finding that deep learning models generally yield the best performance (Dastile et al., 2020). However, deep learning lacks the transparency required of credit rating models under the Basel II accord (BIS, 2006). As a result, researchers have been hesitant to apply machine learning models that do not satisfy the legal transparency requirements in credit research: in their 2020 literature survey, Dastile et al. found that only 8% of the investigated studies used transparency techniques.
In my research, I will focus on creating an explainable machine learning approach that yields performance equivalent or close to that of deep learning models while, in line with the Basel II accord, providing sufficient insight into how the model arrives at a decision. Some success has been achieved with explainable ML in credit scoring to date (Fahner, 2018; Bussman et al., 2020), but significant progress is yet to be made on the performance-explainability trade-off in corporate credit scoring.
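To illustrate the kind of transparency such a model must offer, the following is a minimal, purely hypothetical sketch of an intrinsically interpretable model: a linear scorecard whose log-odds decompose additively into per-feature contributions, so every rating decision can be traced back to individual inputs. All feature names, weights, and values below are invented for illustration and are not drawn from any real rating model or dataset.

```python
import numpy as np

# Hypothetical scorecard: features and fitted log-odds weights
# (illustrative values only, not estimated from real data).
features = ["leverage", "interest_coverage", "profit_margin"]
weights = np.array([-1.2, 0.8, 0.5])
bias = 0.1

def predict_proba(x):
    """Probability that the firm is, say, investment grade."""
    return 1.0 / (1.0 + np.exp(-(bias + weights @ x)))

def explain(x):
    """Per-feature contribution to the log-odds of the prediction.

    Because the model is linear in its inputs, the contributions
    sum (with the bias) exactly to the predicted log-odds, giving
    a complete account of how the decision was reached.
    """
    return dict(zip(features, weights * x))

firm = np.array([0.6, 1.5, 0.3])
print(predict_proba(firm))
print(explain(firm))
```

A deep learning model offers no such exact decomposition, which is the core of the trade-off discussed above: post-hoc explanation methods approximate this kind of attribution for black-box models, at the cost of fidelity guarantees that a simple scorecard provides for free.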