Biden has signed a comprehensive executive order on the regulation of artificial intelligence.
President Joe Biden on Monday signed a sweeping executive order on artificial intelligence that addresses issues ranging from national security and consumer privacy to civil rights and commercial competition. The administration described the order as a major advance in the U.S. strategy for safe, secure and trustworthy AI.
The order directs U.S. government departments and agencies to develop policies governing an industry whose increasingly capable, and potentially dangerous, technology is advancing so quickly that some doubt it can be effectively regulated.
At a signing ceremony at the White House, Biden said that realizing the promise of AI while avoiding its risks requires governing the technology, and he described the order as the most significant step any government has taken to ensure that AI is safe, secure and trustworthy.
Security ‘red teaming’
One of the order’s central requirements is that companies developing the most advanced artificial intelligence systems subject them to rigorous testing to ensure they cannot be exploited by malicious actors. That testing, known as red teaming, will assess the risks AI systems could pose to critical infrastructure, as well as chemical, biological, radiological, nuclear and cybersecurity threats.
The National Institute of Standards and Technology will set the standards for those tests, and AI companies must report their results to the government before releasing their products to the public. The Departments of Homeland Security and Energy will take leading roles in assessing risks to critical infrastructure.
To counter the danger that AI could fuel the spread of fraudulent and deceptive content, such as computer-generated images and “deep fake” videos, the Commerce Department will develop guidance for standards that make computer-generated material easy to identify, a practice commonly known as “watermarking.”
The order also directs the White House chief of staff and the National Security Council to develop principles for the responsible and ethical use of AI by the U.S. defense and intelligence communities.
Privacy and civil rights
The order lays out a number of steps to strengthen privacy protections for Americans when AI systems collect data about them, including support for the development of privacy-preserving technologies such as cryptography and rules governing how federal agencies handle data containing citizens’ personally identifiable information.
The order also notes that the United States, which lags well behind Europe in this area, has no law spelling out Americans’ data privacy rights, and it urges Congress to pass bipartisan data privacy legislation to protect all Americans, especially children.
Acknowledging that the code AI systems use to analyze data and answer users’ queries can embed biases that harm marginalized and frequently discriminated-against groups, the order calls for guidelines and best practices governing the use of AI in areas such as the criminal justice system, health care and housing.
Among its many other provisions, the order includes measures to protect American workers whose jobs may be affected by the adoption of AI, to maintain U.S. leadership in the development of AI systems, and to ensure that the government sets and follows rules for its own use of the technology.
Open questions
Despite the executive order’s broad scope, experts say much remains unclear about how the Biden administration will regulate AI in practice.
Benjamin Boudreaux, a policy researcher at the RAND Corporation, told VOA that while the administration is clearly trying to grapple with the full range of challenges and risks AI presents, much work remains to be done.
The details will matter, Boudreaux said, including how executive branch agencies are resourced and funded to carry out the actions the order recommends, and which AI models the proposed norms and recommendations will actually apply to.
International leadership
Internationally, the order says the government will lead the effort to develop strong global frameworks for harnessing AI’s benefits, managing its risks and ensuring safety.
James A. Lewis, senior vice president and director of the strategic technologies program at the Center for Strategic and International Studies, said the executive order does a good job of laying out the United States’ position on key questions in the global development of AI.
The order addresses the important issues, Lewis said, and while it may not break new ground in some areas, it sets a benchmark for companies and other countries on where the U.S. is headed on AI.
That matters, Lewis said, because the U.S. is expected to play a leading role in shaping the global rules and standards for the technology.
Like it or not, and some countries clearly do not, the United States is the leader in AI, Lewis said, and being the place where the technology is made gives it an advantage in setting the rules, one the U.S. can capitalize on.
Fighting the last war
Some experts, however, question whether the Biden administration is prioritizing the risks AI actually poses to consumers and citizens.
Louis Rosenberg, a 30-year veteran of the AI field and CEO of Unanimous AI, said he worries that the government may be focused on yesterday’s problems.
Rosenberg said he is glad the administration is making a bold statement on what is a critically important issue, and that the order shows it is serious about protecting the public from AI’s potential harms.
His concern, he said, is that when it comes to protecting consumers, the administration is focused mainly on how AI amplifies existing threats, such as fake images and videos and deceptive misinformation, problems that are already widespread today.
When it comes to regulation, he said, the government has a history of underestimating how far technology will advance.
Rosenberg said he is more worried about how AI will affect people directly, pointing in particular to conversational AI systems that are built to engage users in dialogue.
“In the near future, we will no longer be entering requests into Google manually. Instead, we will be communicating with an interactive AI bot,” Rosenberg said. “AI systems will become highly efficient at convincing, influencing, and potentially even pressuring people in conversations on behalf of those controlling the AI. This is a new and unique threat that did not exist prior to AI.”
Source: voanews.com