There’s plenty of hype about how COVID-19 is accelerating the roll-out of artificial intelligence (AI), drones, robotics, surveillance systems and more, but alongside this a quieter, more fundamental shift is taking place. A growing number of cities are working to better define the rules of engagement to ensure technology deployed in their communities is fair, open and explainable.
These considerations are not new – the rise and fall of the Sidewalk Labs initiative in Toronto, for instance, was a microcosm for debates about the use of data, privacy and the role of big technology companies in the public realm. The controversial project to create a smart neighbourhood in a disused area of Toronto’s Quayside district was shut down in May, with Sidewalk Labs CEO, Dan Doctoroff, citing “unprecedented economic uncertainty”.
The pandemic and related crises are putting increased emphasis on these issues. Milou Jansen, Coordinator of the Cities Coalition for Digital Rights core team, which comprises Amsterdam, Barcelona and New York, notes that considerations about the use of technology go beyond privacy and surveillance.
She said that in light of the pandemic: “There is a momentum for the digital agenda in cities – not any digital agenda, but one that is inclusive, addressing the digital divide; ethical, placing digital rights at the forefront; and green, making the linkages between the digital and the ecological transition.
“The downside of this is that, if there is not active leadership by public institutions to bring the public debate in this direction, the discussion around digitalisation can easily fall into technocratic considerations that lead to business-as-usual in the tech field in cities.”
Living through and learning from history
Digital technology and data have been important tools to help individuals cope during the pandemic and in governments’ and cities’ response efforts. This has included online dashboards to make sense of complex information, apps and chatbots to provide updates and signpost support, and the use of AI to understand issues such as how people are moving around the city and whether they are wearing masks and social distancing.
Technology offers great benefits to cities and citizens and will remain critical to helping tackle other challenges such as economic recovery and climate change. However, the use of algorithms, contact-tracing apps and video surveillance during the coronavirus crisis has also brought into focus important debates about not only the balance between public health benefits and privacy but also how these systems actually work. Further, COVID-19 has laid bare the digital divide and inequalities in who benefits from – and can be disadvantaged by – technology.
In the UK this summer, the government was forced to backtrack on awarding A-level results calculated by a controversial algorithm after accusations that the system was biased against students from poorer backgrounds. Demonstrations saw students chanting “F**k the algorithm” outside the Department for Education.
At the city level, San Diego’s Mayor, Kevin Faulconer, recently ordered sensors and cameras on the city’s 3,200 smart streetlights to be deactivated until an ordinance is in place governing the programme. When it was announced in 2017, the initiative was touted as “the world’s largest IoT platform”, set to deliver cost savings and data-driven benefits for mobility and public safety. However, it drew mounting criticism over privacy and surveillance, and, more recently, additional controversy over San Diego police accessing video footage.
Last week, San Diego City Council voted unanimously in favour of an ordinance governing the use of surveillance technologies in the city – going beyond just the streetlight programme – although further steps remain before the proposals become law. The Council also backed a second ordinance to establish a Privacy Advisory Board made up of volunteer citizen members. San Diego would be the second city after Oakland to have such a board.
Facing the future
There is now a growing trend of cities taking steps to get ahead of technology-related ethics issues.
More than ten US cities have banned facial recognition technology, for instance – most recently Portland, which went a step further and banned its use not only by city departments but also by private companies. The ban aims to address growing concerns about facial recognition technology’s privacy and surveillance implications, as well as errors and potential racial and gender bias.
Kevin Martin, Smart City PDX Program Manager, City of Portland, says that facial recognition is just the first step in a broader initiative. These systems were chosen as ‘low-hanging fruit’ because the technology wasn’t used by the city itself or widely in public places. “We saw it as an opportunity to have the city approach technology in a more proactive way with community at the table, and do it in a way that we could set the precedent for other types of technology going forward,” he said.
Some technologists may be worried that policy could get in the way of innovation, but these initiatives could ultimately help cities deploy technology faster as they grapple with fast-moving situations and complex technologies, from contact-tracing systems to advanced machine learning and AI.
“All of these have privacy and surveillance implications,” Martin said. “That’s why we’re really pushing to get that structure in place – so we don’t start on a slippery slope of technologies being used in a crisis situation that then open the door to greater surveillance of our community going forward.”
As Portland turns its attention to systems which are already installed – starting with those that use AI and machine learning – Martin stressed that the city is not “anti-technology” and wants to work with the vendor community to make sure their technology “is being deployed and developed responsibly, and that there is no harm that is coming into the communities.”
In the UK, London is also working to make technology systems more transparent to residents – including how it uses data and how third-party systems deployed in the city work.
Theo Blackwell, London’s Chief Digital Officer, says the coronavirus crisis has brought many of these issues to the fore but the work is an extension of the city’s ongoing smart city approach, which has been to tell a story of digital transformation rather than tech solutions or platforms.
Blackwell commented: “We need to start with what problem we need to solve and what the users need, and approach it from there. Although there will always be an element of solutionism with smart technology, smart cities became too solutionist and focused on the technology.”
In early March, before COVID-19 took hold around the world, London held a Citizens’ Summit to deliberate the issues around the sharing of health data. Over four days, 100 “representative Londoners” shared their views.
“Our study really showed that most Londoners are in the place where [if you treat them] like adults, tell people what you will use the data for, and tell them what the safeguards, remedies and benefits are, they are willing to proceed with that,” Blackwell said.
During the pandemic, London councils have deployed several data-driven services and repurposed existing systems to respond. These have included hot meal delivery services for vulnerable people, requiring councils to match those in need with support – whether from the council, volunteers or the private sector – within weeks.
London is also working with the Turing Institute to use data from cameras, traffic intersection monitors, air quality sensors, point of sale counts and public transit activity metrics to assess ‘busyness’ in the capital and inform targeted interventions and policies.
Blackwell said: “We’ve been a force for more transparency around the use of government data,” noting that one of the most visited pages on the London DataStore is on COVID-19 cases and deaths. It brings together various datasets to create maps and visualisations, and present “one source of the truth,” including helping the city and its residents understand how COVID-19 is impacting particular communities such as people in insecure employment or minority and ethnic groups.
Demystifying ‘black box’ technology
Turning attention to external systems as well, Blackwell is now leading the initiative to develop an Emerging Technologies Charter with input from citizens and private sector companies. The charter will outline a set of criteria that digital innovations should meet if they are deployed in the capital.
Blackwell says this could include a requirement – or at least a strong recommendation – for vendors whose technology is implemented in the public realm in London to publish information about their systems to a central online hub where citizens can access it. It advances other work in London to evaluate emerging technologies in specific instances, such as Transport for London’s guidance on the trialling of connected autonomous vehicles; work on the collection of anonymous Wi-Fi data on the Tube; and the London Policing Ethics Panel report on the use of live facial recognition technology.
It also builds on a related approach from the cities of Helsinki and Amsterdam, which have worked together to develop and each launch an Artificial Intelligence Register. The registers are thought to be the first of their kind in the world and incorporate an overview of the AI systems as well as detail on the datasets they use, how data is processed, how inclusion is ensured, risks, and whether the tools have human oversight.
London will encourage companies to add information about their systems to a central hub themselves. While they could use Data Protection Impact Assessments, which are a legal requirement for many data-gathering technologies – and the city could simply pull these in if businesses don’t – Blackwell hopes vendors will do more to voluntarily provide material that is as clear as possible. His team will work with suppliers to make sure the information they provide is in plain English and demonstrates transparency.
He said: “The worry shared by most people is [about] ‘black box technology’ – that they don’t quite know how it works or what it does. We need to make it more understandable.”
On whether he thinks vendors will open up in this way, Blackwell said: “Well, we’ll see, won’t we?”, but he stressed that, like Portland, London wants to work in collaboration with the industry.
“We want to help you [companies] do innovation in London,” he said. “And here are some of the things we’d like you to do in order to have a good conversation.”
London hopes to publish best practice examples this year or early next.
Blayne Haggart, Associate Professor of Political Science, Brock University, Canada, believes initiatives such as those highlighted mark a change in cities’ approach. “[They are] important because they’re injecting something into the debate other than just straight-up efficiency as a criteria for adopting these [technologies]. And they’re trying to think about the second or third-order effects of using them,” he said.
He urges cities to develop data and intellectual property frameworks before entering into partnerships with private entities.
“City governments should build up their expertise in understanding these issues so they don’t have to outsource their thinking to the tech giants, who might be doing good and interesting work but have their own interests which reflect a particular business model,” Haggart commented.
Cities globally are at a critical point – they are on the frontlines of complex challenges and under pressure to do much more with less. They won’t be able to achieve what they need to without trusted private partners, but companies expecting to ‘move fast and break things’ may increasingly need to think again.