Human-In-The-Loop Rule Based Feedback Towards Interactive Deep Learning
As technology evolves, it has become increasingly common to encounter faulty systems and data sets that have a deep impact on people's lives. The most prominent cases are in facial recognition and state surveillance, where innocent people have been arrested; gender bias in recruitment and credit, where women are screened out of hiring processes and denied credit because of biased ("sexist") data sets; and patients dying in hospital triage because the wrong data sets were used for classification models; and the list goes on. First and foremost: do people know what Artificial Intelligence (AI) is? Do they know when they are engaging with it? The answer is far from certain, so how can they be fully aware when they are being harmed by AI? What is currently seen in practice are algorithmic audits, which are key to tackling the source of the problem, but there is still a long way to go before these procedures are fully established and widely implemented. It still depends on companies and governments seeking out qualified people to conduct the audits, and what happens when there is no interest? What this project proposes is to listen to individual experiences, one by one, building a network and searching for patterns. We will take an ethnographic approach to data sets, as we do with people's narratives, and that will be our starting point. The project initially consists of mapping the impact AI/ML systems have had, or are having, on people's lives, and of understanding precisely how people are, or were, affected. To that end, interviews will be conducted and, where possible, recorded, so that we can produce not only oral but also visual records of people telling their stories. In the next phase of the project, we will apply topic modelling with Latent Dirichlet Allocation (LDA), which appears to be a powerful tool for recognizing such patterns in discourse and is also innovative in ethnographic research.
When we draw from people's experiences, we are able to build a more relatable narrative, which in turn can help us build systems that take those experiences into consideration. There is a need to build bridges between computer scientists, data scientists, and social scientists in order to create AI systems, machine learning models, and data sets that can be used in a non-harmful way.