
The United States advocates for worldwide safeguards against potential dangers presented by artificial intelligence.

On Wednesday, U.S. Vice President Kamala Harris said that leaders have a responsibility to protect people from the potential dangers of artificial intelligence. She is leading the Biden administration's push to establish a global approach to AI.

Experts welcome the effort, saying human oversight is vital to prevent the technology, whose uses range from military intelligence to medical diagnosis to creating art, from being weaponized or abused.

Harris said that as technology advances rapidly around the world, nations need a shared understanding to maintain order and stability. To that end, the United States will work with allies and partners to apply existing international rules and norms to AI while striving to establish new ones.

Harris announced the creation of the government's AI Safety Institute and released draft policy guidance on the government's use of AI, along with a declaration on its responsible use in military settings.

President Joe Biden, who has called AI "the most significant technology of our era," recently issued an executive order establishing new rules, including a requirement that major AI developers share their safety test results and other critical information with the U.S. government.

AI use is growing across many fields. For example, the Defense Intelligence Agency recently announced that its AI-powered military intelligence database will soon reach "initial operational capability."

At the other extreme, one programmer reportedly chose to "educate an AI algorithm using more than 1,000 human flatulence samples in order to generate lifelike fart noises."

Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as “one of the biggest threats” to society. He called for a “third-party referee.”

Earlier this year, Musk was one of more than 33,000 signatories of an open letter urging AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4.

Musk, who is considering developing his own generative AI program, said that for the first time in human history, humanity faces the prospect of an intelligence greater than its own. He said we may not be able to fully control such technology but can strive to guide it in a direction that benefits humanity, and he described it as one of the greatest risks we face and one that must be addressed urgently.

FILE - Tesla Chief Executive Officer Elon Musk gets in a Tesla car as he leaves a hotel in Beijing, China, May 31, 2023.

Earlier this year, industry leaders including OpenAI CEO Sam Altman testified before congressional committees, sharing their views with U.S. lawmakers.

At a Senate Judiciary Committee hearing on May 16, Altman said his greatest fear is that the field, the technology and the industry could cause significant harm to the world, and that this harm could take many forms.

According to Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution, AI has demonstrated impressive capabilities in scientific research but remains constrained by its developers.

It is not that humans do not know how to do these things, she told VOA via Zoom; rather, AI can run enormous numbers of calculations and make discoveries far faster than humans could within a reasonable timeframe.

She added that AI is neither unbiased nor all-knowing. Numerous studies have shown that a model is only as good as the data it is trained on, which can carry human prejudices, a significant concern.

Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: “The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation.”

Experts suggest that government and technology leaders should not rely on a single solution, but instead focus on aligning their values and ensuring there is human oversight and ethical usage.

Brandt said it is fine for different approaches to coexist and, where feasible, for governments to work together to embed democratic values in the systems that govern technology worldwide.

According to Mira Murati, OpenAI's chief technology officer, industry leaders generally agree that artificial intelligence is becoming increasingly integrated into daily life, and that the crucial task is ensuring these machines align with human intentions and values.

Experts who monitor government oversight predict that the United States will not produce a single, comprehensive solution to the challenges AI presents.

According to Bill Whyman, a senior adviser at the Center for Strategic and International Studies, it is probable that the United States will see a decentralized approach to AI regulation through various executive branch actions. Unlike Europe, the US is not expected to enact a comprehensive national AI law in the near future. Instead, any successful legislation will likely be more specific and less contentious, such as providing funding for AI research and addressing AI-related child safety concerns.

Source: voanews.com