ABSTRACT
Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions (such as taxation, justice, and child protection) are now commonplace. How might designers support such human values? In this talk, I will draw on two recent papers to explore the issue. In the first part, I will discuss interviews with 27 public sector machine learning practitioners across 5 OECD countries concerning the challenges of understanding and imbuing public values into their work, and contrast these against current work in computer science to make algorithms fairer and more accountable. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning: absences likely to undermine practical initiatives unless addressed. In the second part, we will turn to the recent European data regulations, and consider how attempts to make algorithms more 'explainable' might not only be difficult in some cases, but might not be the remedy most people are looking for.
Two papers being drawn upon
Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. (w/ Lilian Edwards)
Mixed-methods scholar
Michael Veale is a machine learning researcher and mixed-methods scholar researching the fairness, transparency and resilience of algorithmic systems as part of a doctorate at University College London's Department of Science, Technology, Engineering & Public Policy (STEaPP) and the Department of Computer Science. His research spans qualitative and quantitative approaches, including computer science–legal work on the so-called 'right to an explanation' in the GDPR, qualitative research with public sector modellers developing algorithmic decision-making systems, and quantitative analysis of algorithmic bias in online content moderation systems. He has presented work internationally to governments, major conferences and workshops, and business and NGO communities; advises the Red Cross Red Crescent Climate Centre on machine learning systems in humanitarian crises; and was a drafting author for the Royal Society and British Academy's report *Data Management and Use: Governance in the 21st Century*. He previously worked in the European Commission on IoT and health governance, worked at Bonsucro as a data scientist, and holds degrees from LSE (BSc) and Maastricht University (MSc).
Michael Veale
Machine learning researcher
Organizer: Division of Social Science and Division of Public Policy