Europe has reached a deal on the world's first comprehensive set of AI regulations.
EU negotiators clinched a deal Friday on the world's first comprehensive rules for artificial intelligence, paving the way for legal oversight of the technology behind popular generative AI services like ChatGPT, which promises to transform everyday life but has also raised fears of dangers to humanity.
Negotiators from the European Parliament and the bloc's 27 member countries overcame deep divisions on contentious points, including generative AI and police use of facial recognition surveillance, to reach a tentative political agreement on the Artificial Intelligence Act.
“Agreed!” European Commissioner Thierry Breton announced shortly before midnight. “The EU is the first continent to establish definitive regulations for the utilization of AI.”
The deal came after marathon closed-door talks this week, with an initial 22-hour session before a second round kicked off Friday morning.
Officials were under pressure to secure a political victory for the flagship legislation, but they were expected to leave the door open to further talks to work out the fine print, likely bringing more behind-the-scenes lobbying and negotiation.
The EU took an early lead in drawing up guardrails for AI when it unveiled the first draft of its rulebook in 2021. The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for other countries.
The European Parliament will still need to vote on the act early next year, but with the deal done that is a formality, Brando Benifei, an Italian lawmaker co-leading the Parliament's negotiating team, told The Associated Press late Friday.
Asked whether the deal included everything he had wanted, he replied by text message that it was very satisfactory, acknowledging that some compromises had been necessary but saying he was pleased with the overall result.
The law won't take full effect until 2025 at the earliest, and it carries stiff financial penalties for violations: up to 35 million euros ($38 million) or 7% of a company's global revenue.
Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce humanlike text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection, and even human life itself.
The United States, United Kingdom, China and international coalitions like the Group of Seven major democracies have put forward their own proposals to regulate AI, though they still trail Europe.
‘A powerful example’
The EU's strong and comprehensive rules could set a powerful example for many governments considering regulation, said Anu Bradford, a Columbia Law School professor who is an expert on EU and digital regulation. Other countries may not copy every provision, she said, but they are likely to emulate many aspects of it.
AI companies subject to the EU's rules will also likely extend some of those obligations to markets outside the continent, she said, because doing so is more efficient than building separate models for different markets.
Others worry the deal was rushed through.
Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobbying group, said the political deal marks the beginning of important and necessary technical work on crucial details of the AI Act that are still missing.
The AI Act was originally designed to mitigate the dangers of specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general-purpose AI services like ChatGPT and Google's Bard chatbot.
Foundation models looked set to be one of the biggest sticking points for Europe's negotiators. But despite strong opposition led by France, which called instead for self-regulation to help homegrown European AI companies compete with big American rivals, including OpenAI's backer Microsoft, a tentative compromise was reached early in the talks.
Also known as large language models, these systems are trained on vast troves of written works and images scraped from the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules.
Under the deal, the most advanced foundation models that pose the biggest “systemic risks” will face extra scrutiny, including requirements to disclose more information, such as how much computing power was used to train them.
Heightened threats
Experts have warned that these powerful foundation models, built by a handful of big tech companies, could be misused to supercharge online disinformation and manipulation, enable cyberattacks or aid the creation of bioweapons.
Advocacy groups also caution that the lack of transparency about the data used to train the models poses risks to daily life, because these models serve as basic building blocks for developers creating AI-powered services.
The thorniest topic proved to be AI-powered facial recognition surveillance systems, on which negotiators found a compromise after intensive bargaining.
European lawmakers wanted a full ban on the public use of facial recognition and other remote biometric identification systems because of privacy concerns, while member-state governments sought exemptions so law enforcement could use the technologies to tackle serious crimes like child sexual exploitation or terrorist attacks.
Civil society groups were skeptical of the compromise.
Daniel Leufer, a senior policy analyst at Access Now, said that whatever victories were won in the final negotiations, significant flaws will remain in the final text, including carve-outs for law enforcement, weak safeguards for AI systems used in migration and border control, and insufficient restrictions on the most dangerous AI systems.