A new report from the UK’s Centre for Data Ethics and Innovation (CDEI) proposes measures to address the risks of bias in algorithmic decision-making.
As part of a review announced in 2018, the UK government’s advisory body on the responsible use of artificial intelligence (AI) and data-driven technology analysed the use of algorithms in four sectors: financial services, local government, policing and recruitment.
The CDEI report recommends that the government develop national guidance to support local authorities in procuring or developing algorithmic decision-making tools legally and ethically. This should include specific advice on how to identify and mitigate biases.
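One common way to make "identify biases" concrete is to compare outcome rates across demographic groups, sometimes called demographic parity or disparate impact. The following is a minimal sketch of such a check; the group labels, example decisions, and function names are illustrative assumptions, not taken from the CDEI report.

```python
# Sketch of a simple bias check: compare approval rates across groups.
# All names and data here are hypothetical, for illustration only.

def selection_rates(decisions, groups):
    """Approval rate per group; decisions are 0/1, groups are labels."""
    counts = {}
    for d, g in zip(decisions, groups):
        total, approved = counts.get(g, (0, 0))
        counts[g] = (total + 1, approved + d)
    return {g: approved / total for g, (total, approved) in counts.items()}

def disparate_impact_ratio(decisions, groups):
    """Ratio of lowest to highest group approval rate (1.0 = parity)."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical benefit-claim decisions for two groups:
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(decisions, groups))        # {'A': 0.75, 'B': 0.25}
print(disparate_impact_ratio(decisions, groups)) # ~0.33, well below parity
```

A large gap between groups does not by itself prove unlawful discrimination, but it is the kind of signal national guidance could ask authorities to monitor and explain.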
“There is also a need for wider sharing of best practice between local authorities,” the report states.
Local authorities are increasingly using data to inform decision-making and target services more effectively, but mapping usage in detail remains challenging for researchers, the report says. An investigation by The Guardian, published in October, found that at least 100 councils in England, Wales and Scotland have used or are using computer algorithms to help make decisions about matters such as benefit claims and social housing. The investigation raised concerns about the reliability of some systems and found that many councils do not consult citizens on their use.
The CDEI review notes that data infrastructure and data quality remain significant barriers to developing and deploying data-driven tools in local government, and that investment in these is needed before more advanced systems are built.
Further, the CDEI calls for a mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals. This would include the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm, how it is used in the overall decision-making process, and steps taken to ensure fair treatment of individuals.
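In practice, such a transparency obligation could be met by publishing a structured record per system. The sketch below maps the four items the CDEI lists onto a simple data structure; the class name, field names and example values are assumptions for illustration, not a format specified by the report.

```python
# Hypothetical transparency record covering the four items the CDEI lists.
from dataclasses import dataclass, asdict
import json

@dataclass
class AlgorithmTransparencyRecord:
    decision_rationale: str  # how the decision to use the algorithm was made
    algorithm_type: str      # e.g. rules-based, statistical, machine learning
    role_in_process: str     # how it fits into the overall decision-making process
    fairness_measures: str   # steps taken to ensure fair treatment of individuals

# Example values are invented for illustration:
record = AlgorithmTransparencyRecord(
    decision_rationale="Approved by the council's data ethics board in 2020",
    algorithm_type="Logistic regression risk score",
    role_in_process="Advisory only; a caseworker makes the final decision",
    fairness_measures="Annual bias audit across protected characteristics",
)
print(json.dumps(asdict(record), indent=2))
```

Publishing records like this proactively, rather than on request, is what distinguishes the CDEI's proposal from existing freedom-of-information routes.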
In the UK this summer, the government was forced to backtrack on using a controversial algorithm to calculate A-level results after accusations that the system was biased against students from poorer backgrounds. Demonstrations saw students chanting “F**k the algorithm” outside the Department for Education, and Prime Minister Boris Johnson blamed a “mutant” algorithm for the situation.
Adrian Weller, Board Member for the CDEI, said: “It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators and industry need to work together with interdisciplinary experts, stakeholders and the public to ensure that algorithms are used to promote fairness, not undermine it.
“Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”
The proposed CDEI measures reflect a trend of individual cities around the world introducing measures to manage the risks of emerging technologies such as advanced algorithms. For instance, London, UK, is developing an Emerging Technologies Charter; Amsterdam and Helsinki have launched AI registers; and New York created its first Algorithms Management and Policy Officer (AMPO) post. The city of London in Ontario, Canada, is implementing an internally developed AI tool to predict and prevent homelessness, with ‘explainable AI’ a key aspect of the system to ensure transparency about its decisions.