Councils to pilot UK standard for algorithm transparency

30 November 2021

by Sarah Wray

The UK government has launched a new standard for transparency about how algorithmic tools are used to support decisions.

Algorithms and artificial intelligence (AI) have the potential to help public sector organisations deliver services faster, more proactively and at lower cost. However, the results can have profound legal or economic impacts on individuals’ lives, and risks include inaccuracy and bias.

The algorithmic transparency standard – developed by the Cabinet Office’s Central Digital and Data Office (CDDO) with input from the Centre for Data Ethics and Innovation (CDEI) – will be piloted with government departments and local authorities across the UK and further developed based on feedback. Details of pilot organisations will be revealed soon.

The publication of the standard follows a review into bias in algorithmic decision-making published last year in which the CDEI recommended that “the UK government should place a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals.”

Last summer, the UK government was forced to backtrack on using a controversial algorithm to calculate A-level results after allegations that the system was biased against students from poorer backgrounds. Prime Minister Boris Johnson later called it a “mutant algorithm”.

Earlier this year a report by privacy campaign group Big Brother Watch on “the hidden algorithms shaping Britain’s welfare state” claimed dozens of UK councils were using privately developed software to “mass profile” benefit claimants. It said that algorithms designed to predict fraud and rent arrears “treat the poor with suspicion and prejudice”.

Transparency

The new government standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the datasets that have been used to train the model and the level of human oversight.
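As an illustration only, the sketch below shows how such a two-tier record might be structured in code. The field names and the example entries are assumptions for the purpose of illustration, not the published CDDO template.

```python
# Hypothetical sketch of a two-tier algorithmic transparency record.
# Field names are illustrative assumptions, not the published CDDO standard.

from dataclasses import dataclass, field, asdict
import json


@dataclass
class TierOne:
    """Short, plain-English summary of the tool and why it is used."""
    tool_name: str
    description: str   # what the tool does
    purpose: str        # how and why it supports decision-making


@dataclass
class TierTwo:
    """More detailed information for specialist readers."""
    how_it_works: str                              # model type and decision logic
    training_datasets: list[str] = field(default_factory=list)
    human_oversight: str = ""                      # level of human review of outputs


@dataclass
class TransparencyRecord:
    organisation: str
    tier_one: TierOne
    tier_two: TierTwo


if __name__ == "__main__":
    # Entirely fictional example record for a hypothetical council tool.
    record = TransparencyRecord(
        organisation="Example Council",
        tier_one=TierOne(
            tool_name="Benefit claim triage model",
            description="Scores incoming claims to prioritise manual review.",
            purpose="Used to route claims to caseworkers more quickly.",
        ),
        tier_two=TierTwo(
            how_it_works="Gradient-boosted classifier over claim metadata.",
            training_datasets=["Historical claim outcomes (anonymised)"],
            human_oversight="All flagged claims are reviewed by a caseworker.",
        ),
    )
    print(json.dumps(asdict(record), indent=2))
```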

It follows the frameworks put in place by cities such as Amsterdam, Helsinki and New York, which the government referenced in its news release. France and the Netherlands are also developing public sector algorithmic transparency measures.

Lord Agnew, Minister of State at the Cabinet Office, said: “Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery.

“However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.”

Imogen Parker, Associate Director for Policy at the Ada Lovelace Institute, said: “Meaningful transparency in the use of algorithmic tools in the public sector is an essential part of a trustworthy digital public sector.

“We look forward to seeing trials, tests and iterations, followed by government departments and public sector bodies publishing completed standards to support modelling and development of good practice.”

Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022.

On whether use of the standard will eventually be mandatory, a government spokesperson told Cities Today that no further details are available at this stage and that feedback from the consultation on the future of the UK’s data protection regime is currently being analysed.

Image: Maciek905 | Dreamstime.com
