Photo: Thekaikoro | Dreamstime.com

Europe proposes risk-based regulation for AI

26 April 2021

by Sarah Wray

The European Commission has laid out its plans for the first legal framework on artificial intelligence (AI).

The proposals take a tiered, risk-based approach. Systems deemed to pose an unacceptable risk to the safety, livelihoods and rights of people would be banned. These include tools that allow ‘social scoring’ by governments.

Under the Artificial Intelligence Act, the use of real-time biometric identification systems such as facial recognition would also be prohibited in public spaces, except in narrowly defined circumstances such as the search for a missing child or an imminent terrorist threat.

Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, said: “On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted.

“By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way. Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Risk assessment

Systems deemed ‘high risk’ would be subject to strict requirements before they can be put on the market, including risk assessment and mitigation steps; high quality datasets; explainable results; and appropriate human oversight.

AI systems falling under this category include those used in critical infrastructure such as transport, as well as in law enforcement, border control, the administration of justice and democratic processes, and essential private and public services. This category would also cover AI-based CV screening and exam-scoring algorithms.

Limited risk AI systems such as chatbots would be subject to transparency obligations ensuring that users are aware that they are interacting with a machine.

The proposals allow the free use of minimal risk applications such as AI-enabled video games or spam filters.

“The vast majority of AI systems fall into this category,” the Commission said.

The Commission plans to create a European Artificial Intelligence Board to facilitate the implementation of the rules and drive development of standards for AI.

Additionally, the proposals foresee voluntary codes of conduct for lower-risk AI, as well as regulatory sandboxes.

The proposals will be debated by the European Parliament and member states and are unlikely to be finalised before 2023.

Do the rules go far enough?

While many cities will welcome stricter controls over AI technologies, some are calling for the rules on the highest risk systems to go further.

In an article by city collaboration network Eurocities, Laia Bonet, Deputy Mayor of Barcelona and Chair of Eurocities’ Knowledge Society Forum, backed the ban on applications such as social credit scoring.

But she commented: “We are however concerned for the fact that the proposed regulation keeps an open door to large-scale surveillance in its provisions regarding real-time biometric recognition systems.”

“We call for an outright, European-wide ban to biometric recognition systems,” Bonet said.

The new regulation would require providers of high-risk systems to carry out a conformity assessment, register the system in an EU database and sign a declaration of conformity before it enters the market.

Some AI providers could be allowed to self-assess their compliance with such standards.

“Self-assessment is not up to the trustworthy model the Commission wants to promote,” said Bonet. “In our view, conformity with AI standards cannot be left in the hands of private companies providing AI technology.”

She called on the EU to invest in independent mechanisms to guarantee that all AI providers meet standards.

City action

Some cities are already taking steps to ensure AI is deployed ethically and transparently.

For example, Helsinki and Amsterdam have each developed and published an Artificial Intelligence Register. The registers, announced in September 2020 and thought to be the first of their kind in the world, provide an overview of the AI systems used in city services, including how data is processed, how risks are mitigated, and whether the tools have human oversight.

Amsterdam also launched the Civic AI Lab which focuses on “human-centred” AI in education, welfare, the environment, mobility and health.

