A Guardian investigation has found that nearly half of local councils in England, Scotland and Wales are using or have used algorithms to make decisions about benefit claims, who gets social housing and other issues, despite concerns about their reliability.
Councils are under financial pressure. Algorithms are often expected to be more accurate, or less susceptible to human frailties (such as illness or bias), improving council decision-making at better value to the taxpayer.
However, there is clear risk. The algorithms serve a variety of purposes but, because they are deployed by local councils, they affect members of the public. According to the investigation, one of the most common uses is risk-based verification to identify potential housing claim fraud. Even where an algorithm only assists a decision, there is a risk that a human simply rubber-stamps its output.
Interestingly, some councils have dropped the use of algorithms (for now, at least), citing low accuracy. Whether the algorithms’ accuracy is better or worse than a council’s previous approach is unclear. And while a company behind one algorithmic service said low accuracy was ‘because people often entered information wrongly’, it is not clear who those people are (the company’s employees, council staff, or members of the public?).
There are concerns about a lack of transparency in how local councils use algorithms in decision-making. That it took a Guardian investigation, via Freedom of Information requests, to establish the facts may only add to the calls already made in other contexts (e.g. criminal justice) for a register of when and how algorithms are used.
The Guardian’s freedom of information investigation established that 100 out of 229 councils have used or are using automated decision-making programmes, many without consulting the public at all on their use.