Dr Marguerite Barry

AI technologies are becoming deeply integrated into everyday life, from communication and health to finance, education and beyond. Like all technologies, AI creates new possibilities for human action and behaviour and so must be examined from an ethical perspective. Much attention is rightly placed on the data that machine learning systems are trained on and the data they use to generate an output, decision or recommendation. This raises important questions: Is the training data biased, and is it representative of the people about whom the system will make decisions? Are algorithms using private data, and have data owners given consent? Is the decision or recommendation explainable, is the process transparent, and who is responsible if something goes wrong? Beyond the data lie further questions about whether AI is an appropriate ‘solution’ in certain social contexts at all.

Recent years have seen a growing focus on the ethics of AI in research, education and policy across Europe and beyond. This ‘turn’ to ethics is visible in industry consultancy reports, in the codes of ethics of professional computing associations, in statements of ‘principles’ by transnational technology corporations, and in national and international policy guidelines for AI. Many of these establish agreed principles such as fairness, accountability, transparency, explainability and trustworthiness in the use of data-driven machine learning.

The EU High-Level Expert Group on AI, in its Ethics Guidelines for Trustworthy AI, identified seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability. The group’s detailed consultation process – and its inclusion of human oversight – is an important step towards producing a joint set of norms and expectations around how AI should be implemented. Such formal, principles-based initiatives are highly influential in shaping societal expectations around AI and offer some reassurance of ethical development. But generalised guidelines can also divert attention from the context and purpose of a given use of AI, and it is unclear how they will affect everyday practice in specific domains.

Meanwhile, surveys suggest that the general public believes it is both possible and necessary to design ethical AI, but they also show that people have difficulty with abstract ethical principles such as fairness and transparency. Public attitudes are more likely to be shaped by high-profile news stories about AI or by fictional representations in film and TV, and only to a lesser degree by everyday experiences with machine learning (many of which are invisible). Everyday uses of AI – from chatbots to social media – involve social and regulatory norms and practices as well as ethical questions, and relate more closely to expectations and traditions of political accountability and transparency. Like all technologies, AI is a socio-technical system and so is governed by social, economic and political relationships and values.

However, the professional AI development context often offers minimal space for ethical reflection or oversight, creating a significant gap between public expectations, high-level principles and the ‘performance’ of ethics in practice. Many people working in AI research and development find themselves in a bind: under increasing pressure from society to engage in ‘ethical’ AI practice, yet expected to adhere to a range of organisational and professional incentives in the race to innovate and compete globally – incentives that may conflict with ethical principles. This gap will remain until we understand and accept the limitations of AI in complex social contexts and recognise that non-technological policies and human workers are required for more ‘ethical’ AI. Organisational and professional policies must be supplemented by robust transnational policies and regulations that make corporations, research institutes and organisations responsible for the human labour involved in deploying AI, and accountable for its social impacts. Only then can ethical principles for AI be enacted in practice, ensuring that all of society is included in decisions about how we integrate AI into our lives.

A full paper discussing the performativity of ethics and AI is available open access in the journal Big Data &amp; Society at journals.sagepub.com/doi/full/10.1177/2053951720915939