
Jean Francois Itangayenda

Student

Project Title

Bias in Financial Lending Models

Project Description

This research examines machine learning models used in financial lending and two forms of bias they can carry: perceived bias and received bias. By perceived bias, we mean bias as seen and handled by model developers: for example, a model builder, intending to increase fairness, may exclude certain variables, features, and categories in order to obtain a desired output. We want to understand how this act of removing items from a model can itself bias or influence its outputs. By received bias, we mean bias introduced into the model's own pipeline (training and testing) through data corruption, noise, and similar channels. We want to understand, from a socio-technical perspective, how this received bias, whether it originates in model training, in the datasets, or in the models themselves, affects model outcomes.

The project is two-sided. On the social side, it examines the impacts of bias in machine learning algorithms that predict and recommend loan refusal or acceptance for clients of financial institutions, the definition(s) of fairness in financial lending, and the socio-technical forces that may unknowingly affect data collection (for example, by corrupting it) and training. On the technical side, it examines the demands of developing and running these models: their socio-technical costs and benefits, how they are deployed, how their data is collected, who or what controls the machines that run them (financial institutions, governments, etc.), and how all of these factors ultimately shape the models' outputs.
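To make the two bias channels concrete, the following is a minimal, hypothetical sketch (not the project's actual methodology or data) using synthetic lending data and scikit-learn. It illustrates perceived bias by dropping a protected attribute from a model while a correlated proxy feature remains, and received bias by flipping a fraction of one group's training labels. All feature names, group proportions, and noise rates are illustrative assumptions.

```python
# Hypothetical sketch of "perceived" vs. "received" bias in a lending model.
# Synthetic data only; all parameters below are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., a demographic group), coded 0 or 1.
group = rng.integers(0, 2, size=n)

# A seemingly neutral feature that is in fact correlated with group
# membership (a proxy, e.g., neighborhood), plus an income-like feature
# whose mean differs across groups by construction.
proxy = group + rng.normal(0, 0.5, size=n)
income = rng.normal(50 + 10 * (1 - group), 15, size=n)

# Repayment outcome depends on income (plus noise), not on group directly.
repay = (income + rng.normal(0, 10, size=n) > 50).astype(int)

X_full = np.column_stack([group, proxy, income])
X_blind = np.column_stack([proxy, income])  # "perceived bias" fix: drop group

for name, X in [("with group", X_full), ("group removed", X_blind)]:
    clf = LogisticRegression(max_iter=1000).fit(X, repay)
    approve = clf.predict(X)
    rates = [approve[group == g].mean() for g in (0, 1)]
    print(f"{name}: approval rate group0={rates[0]:.2f}, group1={rates[1]:.2f}")
# Removing the protected column rarely removes the disparity: the proxy and
# the group-correlated income still carry group information, so "fairness
# through unawareness" fails.

# "Received bias": corrupt the training labels for one group (e.g., through
# historical mis-recording) and observe how the learned model's outputs shift.
repay_noisy = repay.copy()
flip = (group == 1) & (rng.random(n) < 0.20)  # 20% label flips, hypothetical
repay_noisy[flip] = 1 - repay_noisy[flip]

clf = LogisticRegression(max_iter=1000).fit(X_blind, repay_noisy)
approve = clf.predict(X_blind)
rates = [approve[group == g].mean() for g in (0, 1)]
print(f"noisy labels: approval rate group0={rates[0]:.2f}, group1={rates[1]:.2f}")
```

Comparing the printed approval rates across the three fits shows, under these assumptions, that excluding the protected attribute does not equalize outcomes and that label corruption in one group measurably shifts the model's decisions for that group.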