European Union officials worked late last week to reach agreement on landmark rules governing the use of artificial intelligence across the bloc's 27 member countries.
The regulation, known as the Artificial Intelligence Act, is the latest in a series of measures Europe has crafted to govern technology, and it could have worldwide influence.
Here is a closer look at the AI rules.
What does the AI Act entail, and how does it work?
The AI Act takes a “risk-based approach” to products and services that use artificial intelligence, focusing on regulating the uses of AI rather than the technology itself. Its purpose is to safeguard democracy, uphold the rule of law, and preserve fundamental rights such as freedom of speech, while also encouraging investment and innovation.
The riskier an AI application, the stricter the rules. Minimal-risk applications, such as content recommendation systems or spam filters, would face only light obligations, such as disclosing that they use AI.
High-risk systems, such as medical devices, face stricter requirements, including using reliable data and providing clear information to users.
Certain uses of AI are banned outright because they are deemed to pose unacceptable risks, such as social scoring systems that govern how people behave, some forms of predictive policing, and emotion recognition systems in schools and workplaces.
Law enforcement is barred from scanning people's faces in public with facial recognition technology, except in cases involving serious crimes such as kidnapping or terrorism.
European lawmakers are expected to give the AI Act final approval in early 2024, and it will not take effect until two years after that. Companies found in violation could face fines of up to 35 million euros ($38 million) or 7% of their global revenue.
How does the AI Act affect the rest of the world?
The AI Act will apply directly to the EU's nearly 450 million residents, but analysts predict its influence will reach much further because of Brussels' leading role in crafting rules that serve as a global standard.
The EU has played this role before with earlier technology rules, most notably mandating a common charging port, a requirement that led Apple to abandon its proprietary Lightning cable.
While many nations are still grappling with whether and how to regulate AI, the EU's comprehensive rules are positioned to serve as a blueprint.
Anu Bradford, a Columbia Law School professor who specializes in EU law and digital regulation, called the AI Act a groundbreaking, comprehensive regulation that could shape global efforts to rein in AI.
She stated that the EU has a special opportunity to take charge and demonstrate to the rest of the world that AI can be regulated and its progress can be monitored through democratic means.
According to rights groups, there could be worldwide consequences for what the law fails to address.
Amnesty International criticized Brussels for not implementing a complete ban on live facial recognition, stating that this decision essentially allows for dystopian levels of digital surveillance in all 27 EU countries and sets a harmful example for the rest of the world.
The group called the partial ban a hugely missed opportunity to stop and prevent colossal damage to human rights, civil liberties, and the rule of law, which are already under threat across the EU.
Amnesty International also criticized legislators for not banning the sale of AI technologies that could be used to violate human rights, including those used for social scoring, which China employs to encourage compliance with state rules through surveillance.
What measures are other nations taking regarding AI governance?
The two leading countries in AI, the United States and China, have begun developing their own regulations.
U.S. President Joe Biden signed a sweeping executive order on artificial intelligence in October, and it is expected to be bolstered by legislation and international agreements.
Under the order, leading AI developers must share safety test results and other relevant information with the government. Agencies will create standards to ensure AI tools are safe before public release and issue guidance for labeling AI-generated content.
Biden’s directive expands upon the previous voluntary pledges made by tech giants such as Amazon, Google, Meta, and Microsoft to ensure the safety of their products prior to launch.
In the meantime, China has introduced “interim measures” for regulating generative AI, which pertain to text, images, audio, video, and other material created for individuals within China.
President Xi Jinping has proposed a Global AI Governance Initiative, calling for an open and fair environment for AI development.
What does the AI Act mean for ChatGPT?
The rapid rise of OpenAI's ChatGPT showed that the technology was making dramatic advances and prompted European policymakers to update their proposal.
The AI Act contains regulations for chatbots and other versatile AI systems, also known as general-purpose AI, which are capable of performing a variety of tasks such as generating poetry, producing videos, and writing computer code.
Officials took a two-tiered approach: most general-purpose systems face basic transparency requirements, such as disclosing details about their data governance and, in a nod to the EU's focus on environmental sustainability, how much energy was used to train the models on vast collections of written works and images from the internet.
They must also comply with EU copyright law and provide a summary of the content used to train their models.
Stricter rules apply to the most advanced AI systems, built with the greatest computing power. These are deemed to pose “systemic risks” that officials want to keep from spreading to the services other software developers build on top of them.
Source: voanews.com