California laws target deepfake political ads, disinformation

In a step that could have broad implications for future elections in the U.S., California Governor Gavin Newsom this week signed three pieces of legislation restricting the role that artificial intelligence, specifically deepfake audio and video recordings, can play in election campaigns.

One law, which took effect immediately, makes it illegal to distribute “materially deceptive audio or visual media of a candidate” in the 120 days leading up to an election and in the 60 days following an election.

Another law requires that election-related advertisements using AI-manipulated content provide a disclosure alerting viewers or listeners to that fact.

The third law requires that large online platforms take steps to block the posting of “materially deceptive content related to elections in California,” and that they remove any such material that has been posted within 72 hours of being notified of its presence.

“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation — especially in today’s fraught political climate,” Newsom said in a statement.

“These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”

While California is not the only state with laws regulating the use of deepfakes in political ads, the application of the ban to 60 days following the election is unique and may be copied by other states. Over the years, California has often been a bellwether for future state laws.

Tech titan opposition

Social media platforms and free speech advocates are expected to challenge the laws, asserting that they infringe on the First Amendment’s protection of freedom of expression.

One high-profile opponent of the measures is Elon Musk, billionaire owner of the social media platform X, who has been aggressively using his platform to voice his support of Republican presidential nominee Donald Trump.

In July, Musk shared a video that used deepfake technology to impersonate the voice of Vice President Kamala Harris. In the video, the cloned voice describes Harris as a “deep state puppet” and the “ultimate diversity hire.”

On Tuesday, after Newsom signed the new laws, Musk once again posted the video, writing, “The governor of California just made this parody video illegal in violation of the Constitution of the United States. Would be a shame if it went viral.”

Federal action considered

Most of the legislative efforts to regulate AI in politics have, so far, been happening at the state level. This week, however, a bipartisan group of lawmakers in Congress proposed a measure that would authorize the Federal Election Commission to oversee the use of AI by political campaigns.

Specifically, it would allow the agency to prohibit campaigns from using deepfake technology to make it appear that a rival has said or done something that they did not actually say or do.

During an appearance at an event sponsored by Politico this week, Deputy U.S. Attorney General Lisa Monaco said there was a clear need for rules of the road governing the use of AI in political campaigns, and she expressed her confidence that Congress would act.

While AI promises many benefits, it is also “lowering the barrier to entry for all sorts of malicious actors,” she said. “There will be changes in law, I’m confident, over time,” she added.

Minimal role in campaign so far

Heading into the 2024 presidential campaign, there was widespread concern that out-of-control use of deepfake technology would swamp voters in huge amounts of misleading content. That hasn’t really happened, said PolitiFact editor-in-chief Katie Sanders.

“It has not turned out the way many people feared,” she told VOA. “I don’t know that it’s entirely good news, because there’s still plenty of misinformation being shared in political ads. It’s just not generated by artificial intelligence. It’s really relying on the same tricks of exaggerating where your opponent stands or clipping things out of context.”

Sanders said that campaigns might be reluctant to make use of deepfake technology because voters “are distrustful of AI.”

“Where the deepfake material that does exist is coming from is smaller accounts, anonymous accounts, and is sometimes catching enough fire to be shared by people who are considered elites on political platforms,” she said.

Source: voanews.com