Inside the EU’s tentative deal on world-first AI regulation

Key Takeaways:

– The European Union has reached a tentative agreement on the AI Act, bringing it closer to enactment.
– Lawmakers missed the original Wednesday deadline but reached a deal late on Friday, avoiding a potential delay until next year.
– The Act is billed as the world’s first comprehensive legislation for artificial intelligence.
– The development of AI has caused divisions within the EU’s regulatory plans.
– France, Germany, and Italy opposed binding rules and proposed following codes of conduct instead.
– Another sticking point was the restrictions on biometric surveillance, with EU legislators wanting an outright ban and governments calling for a national security exemption.
– The Act will follow a risk-based approach, categorizing AI systems into minimal risk, high-risk, unacceptable risk, and specific transparency risk.
– Non-compliance with the Act will result in hefty fines.
– The Act introduces specific rules for general purpose AI models, including additional binding obligations for powerful models.
– The establishment of a new European AI Office within the European Commission is also part of the Act.
– The Act is seen as a launchpad for EU startups and researchers to lead the global AI race.
– Further negotiation and lobbying are expected, but a full agreement before next year’s European parliamentary elections is likely.
– The law is unlikely to take effect for at least 18 months.

The Next Web:

Following marathon discussions last week, the European Union has secured a tentative agreement on the terms of the AI Act, bringing the landmark regulation closer to enactment.

Despite missing the original Wednesday deadline, lawmakers managed to thrash out a deal late on Friday, just in time for the weekend. If they had not, the law would have been delayed until next year, potentially until after the EU-wide elections in June.

It has not been easy trying to hit a moving target.

Billed as the world’s first comprehensive legislation for artificial intelligence, the Act was first proposed in 2021. In the years since then, the rapid development of AI has caused various divisions in the bloc’s regulatory plans.

The latest rift emerged after the explosive launch of ChatGPT last year. The OpenAI chatbot sparked panic and excitement about the power of foundation models, which are sometimes referred to as “general purpose” AI systems. EU nations were split over the best way to oversee them.

France, Germany, and Italy opposed plans for binding rules, which they feared would impede innovation and their domestic businesses. The trio proposed to instead follow codes of conduct.

Another sticking point was the restrictions on biometric surveillance. EU legislators had wanted an outright ban, while governments had called for a national security exemption.

Risk-based approach to AI systems

At the 11th hour, lawmakers clinched a provisional deal on the Act’s principles, centred around what they call a risk-based approach. This follows a tiered category structure:

  • Minimal risk — such as AI-enabled recommender systems or spam filters. These get a free pass, with no obligations.
  • High-risk — systems used in areas such as critical infrastructure, medical devices, access to educational institutions, recruitment, and law enforcement. These will need to comply with requirements including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, human oversight, and a high level of robustness and cybersecurity.
  • Unacceptable risk — the Act will ban systems considered a clear threat to people’s fundamental rights. This includes “AI systems or applications that manipulate human behaviour to circumvent users’ free will,” such as social scoring by governments or companies, or systems for “categorising people in real time.” However, there is a “narrow exception” for remote biometric identification for law enforcement purposes.
  • Specific transparency risk — users need to be aware that they are interacting with AI, and deepfakes or AI-generated content must be labelled as such.

As per usual for EU tech regulations, hefty fines will be doled out to those failing to comply. These will range from €35mn (or 7% of global annual turnover, whichever is higher) for violations involving banned applications, down to €7.5mn (or 1.5%) for supplying incorrect information.

In addition, the Act introduces specific rules for general purpose AI models. For very powerful models that could pose systemic risks, there will be “additional binding obligations” that will be “operationalised through codes of practices developed by industry, the scientific community, civil society, and other stakeholders together with the Commission.” 

New European AI Office

While enforcement will be up to individual member states, the Act also provides for the establishment of a new European AI Office within the European Commission. Meanwhile, the bloc’s industry chief Thierry Breton stated that the Act was not only a rulebook, but a “launchpad for EU startups and researchers to lead the global AI race.”

The provisional deal will now require further negotiation — and there’s still time for more lobbying. But there’s also now a strong chance of a full agreement before next year’s European parliamentary elections.

Still, the law is unlikely to take effect for at least 18 months. By that point, the world of AI could be a very different place. 



AI Eclipse TLDR:

After extensive discussions, the European Union (EU) has reached a tentative agreement on the AI Act, bringing the landmark regulation closer to enactment. The Act, which was first proposed in 2021, aims to be the world’s first comprehensive legislation for artificial intelligence. However, the rapid development of AI has caused divisions within the EU’s regulatory plans, and the recent launch of OpenAI’s ChatGPT exacerbated these divisions. France, Germany, and Italy opposed binding rules and proposed following codes of conduct instead. Another point of contention was the restrictions on biometric surveillance, with EU legislators pushing for an outright ban and governments calling for a national security exemption.

Lawmakers eventually reached a provisional deal centered on a risk-based approach. The Act categorizes AI systems into four tiers: minimal risk, high risk, unacceptable risk, and specific transparency risk. Minimal risk systems, such as AI-enabled recommender systems or spam filters, will have no obligations. High-risk systems, such as those used in critical infrastructure and medical devices, will need to comply with specific requirements, including risk-mitigation systems, high-quality data sets, human oversight, and robust cybersecurity. Unacceptable risk systems, which threaten people’s fundamental rights, will be banned, except for a narrow exception for remote biometric identification for law enforcement purposes. Finally, the transparency tier requires that users be made aware when they are interacting with AI, and that AI-generated content be labeled as such.

Failure to comply with the Act will result in hefty fines, ranging from €7.5 million to €35 million, depending on the violation. Additionally, the Act establishes a new European AI Office within the European Commission and provides specific rules for general purpose AI models. Although enforcement will be up to individual member states, the Act aims to be a launchpad for EU startups and researchers to lead the global AI race, according to Thierry Breton, the EU’s industry chief.

While the agreement still requires further negotiation, there is a strong chance of a full agreement before next year’s European parliamentary elections. However, the law is unlikely to take effect for at least 18 months, during which time the AI landscape could change significantly.