Photo: Sasinparaksa | Dreamstime.com

How cities are embracing generative AI

27 September 2023

by Sarah Wray

San Jose and Seattle were early movers on generative AI, which includes tools such as ChatGPT, Google Bard and Stable Diffusion. The cities’ technology leaders share the journey so far.

“Nobody buys a bicycle, gives it to their child and says: ‘Go figure it out’,” observes Khaled Tawfik, Chief Information Officer for the City of San Jose. “But we did that with smartphones.”

When it comes to the internet, society is still addressing unintended consequences such as cybersecurity threats, online abuse and the rise of misinformation.

Tawfik sees an opportunity – and a responsibility – to learn lessons from this and act sooner with emerging technologies such as artificial intelligence (AI).

Image: Khaled Tawfik, San Jose

Alongside cities such as Boston and Seattle, San Jose was a frontrunner in producing guidelines to enable staff to experiment safely with generative AI, covering areas such as confidentiality, attribution and accuracy.

This builds on existing processes established in San Jose to evaluate the equity, privacy and security of technologies, and to be transparent about the use of AI through a register.

With the generative AI guidelines released in July, Tawfik says the city wanted to be open and to garner feedback from employees, residents and others.

“We recognise that we have no clue how this technology is going to drive innovation and change our lives,” he says. “Education is a huge part of that.

“There are a lot of good things that will come from this new technology, but there are also a lot of potential risks that we need to be aware of and think about how we can collectively mitigate.”

Novel concerns

Generative AI also spurred new policy efforts in Seattle.

“At the beginning of the year, we saw a huge amount of attention and a rapid adoption rate for ChatGPT and related technologies,” says Jim Loter, Seattle’s interim Chief Technology Officer. “Seattle IT knew we needed to be in a position to evaluate and respond to requests from our city partners who were interested in using the technology to conduct city business.

“We have practices in place already to evaluate new and non-standard technologies along various axes, such as privacy, security, compatibility, data stewardship, usability, accessibility, and so on. Generative AI technology seemed to introduce enough new and novel concerns that we felt it merited consideration of a specific policy.”

Seattle issued a provisional generative AI policy in April, which represented the city's "best thinking" at the time. An advisory team then spent four months researching the potential utility and risks of the technology. Their recommendations are now being incorporated into a final policy, which will be issued before the end of the year.

The draft rules direct staff to obtain permission from the IT department before accessing or acquiring a generative AI product.

Image: Jim Loter, Seattle

“Anecdotally, people are saying that [the guidelines have] caused them to stop and be thoughtful about the impacts of using computer-generated content in city publications,” says Loter. “We haven’t told people to not use generative AI; just to be responsible in doing so and to remember that city employees are doing work as representatives of the government and that we have an obligation to be accountable to our constituents for the work we produce.

“Most people I have heard from within the city feel that the guidelines are reasonable and reflect our city’s principles of transparency and accountability.”

San Jose’s guidelines require staff to record their use of generative AI via an internal form to help the city understand how people are using the tools. Some key areas of interest have emerged so far: writing memos to the council, developing requests for proposals (RFPs) for procurement, and HR use cases related to job classification and postings. Monthly working groups have been set up in these areas to share experience and mitigate risks.

The guidelines were issued as a first draft, and updates are expected soon.

“We have to recognise that it is not mature and we are not mature in understanding what’s happening. My expectation is that we will have a few generations before this is stable enough to issue a policy,” says Tawfik.

Magic wearing off?

While cities are often risk-averse, with technologies such as generative AI they have no choice but to embrace uncertainty.

“I think what challenges me is not what we know today; it’s what are we not seeing,” says Tawfik. “And it takes a lot of humility to recognise that we don’t know. We need to listen more than we talk; we need to observe more than we dictate.

“My job is to provoke and push everybody to think about areas we need to consider.”

He acknowledges that while it will sometimes be possible to anticipate issues, at other times the city will have to "break it and learn" to understand how tools work in different contexts and what the risks really are.

Loter also has some specific concerns.

“There are a lot of open questions about liability for harm caused by incorrect information, mis-translation, intellectual property violations, or other issues created by AI-produced content,” he says.

“We also have concerns about the opacity of both the underlying language model and the foundation models, as well as the fact that the technology is consolidated in the hands of the biggest tech companies and is not truly ‘open’.”

Hands-on experimentation can help cities to demystify emerging technologies and get past the marketing hype.

“I think that the actual use-value of these technologies is an open question,” says Loter. “I think there’s a growing awareness that these technologies are not as ‘magical’ as some of the more carefully crafted demonstrations we all saw earlier this year may have led some to believe.”

Start now and stick together

Despite the challenges, both leaders think cities have no choice but to face this fast-moving area head-on.

“I would definitely recommend that if a city hasn’t engaged with the technology and considered the potential risks and impacts for their environment, that they start that work before the use of generative AI technologies proliferates in their workplaces,” says Loter. “When generative AI functionality is introduced into standard office applications, the genie will be out of the bottle for many organisations.”

He adds: “Treat it like anything else: test claims, critically examine fears, evaluate it honestly, make sure that basic guardrails are in place, whether those are technical, skilling, or policy controls.”

Tawfik also believes it’s important for cities to work together to trade knowledge and experience, as well as to influence national policy and make sure vendors hear the local government perspective.

“It’s critical for all of us to find a platform where we can share and learn from each other,” he says.

“We’re trying to come up with our own consortium of cities of the willing to drive a single voice.”
