We are seeing a rise in national and international interest in the use of predictive analytics, machine learning and automated systems. These algorithms can shape a range of outcomes across public sectors, from supporting decisions on immigration policy to determining when to make a children’s services intervention.
The available evidence suggests that there has so far been a mixed response from government agencies in the UK to the use of algorithmic and automated systems in public services. Some departments and agencies have implemented these programmes, some are piloting them, and others have cancelled the use of these systems altogether.
We have partnered with the Data Justice Lab to undertake new research that examines the latter of these scenarios. The case-study approach will analyse a selection of national and international ‘cancelled or paused systems’ in the areas of predictive policing, child welfare and fraud detection to investigate:
- What is the range of automated or predictive systems that have been proposed, piloted and cancelled?
- Why have different government agencies cancelled plans for, or the use of, these automated systems?
- What rationales and decision-making processes are leading to cancellation?
- What kind of individual and social processes lead to cancellation?
- Can any comparative factors be identified across countries?
The full report will be published later this year and will present the key themes that have emerged from the analysis, along with lessons learned that can be applied elsewhere.
This project is a partnership between the Carnegie UK Trust and the Data Justice Lab. The project team includes:
Data Justice Lab – Joanna Redden, Jessica Brand, Ina Sander and Harry Warne.
Carnegie UK Trust – Anna Grant and Douglas White.