
Artists employ technological tools to combat the threat of AI plagiarism.

Creatives are under threat from AI technology that analyzes their work and mimics their techniques. To fight back, they have joined forces with academics to thwart such copycat activity.

American artist Paloma McClain went on the defensive after discovering that several artificial intelligence models had been trained on her artwork without giving her any credit or compensation.

McClain told AFP that it was a source of discomfort for her.

She said that genuine, meaningful technological progress should be pursued ethically and should benefit everyone rather than come at others' expense.

The artist turned to Glaze, free software developed by researchers at the University of Chicago.

In effect, Glaze outthinks AI models during training, manipulating pixels in ways imperceptible to human viewers so that a digitized artwork looks dramatically different to the AI.
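In broad strokes, the trick is a bounded perturbation: every pixel may shift only slightly, which keeps the change invisible to people. The Python sketch below illustrates that constraint only; it is not Glaze's algorithm, which computes its perturbation by optimizing against a feature extractor rather than using random noise, and the `cloak` function and file paths are hypothetical.

```python
# Simplified illustration only -- NOT Glaze's actual algorithm.
# It demonstrates the imperceptibility constraint: every pixel may move
# by at most `epsilon` intensity levels (0-255 scale), so the image looks
# unchanged to people. Glaze itself computes the perturbation by
# optimizing against a feature extractor, not with random noise.
import numpy as np
from PIL import Image

def cloak(in_path: str, out_path: str, epsilon: int = 4) -> None:
    """Apply a random perturbation bounded by +/- epsilon to an RGB image."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(cloaked).save(out_path)

# Hypothetical usage:
# cloak("artwork.png", "artwork_cloaked.png")
```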

Ben Zhao, a computer science professor on the Glaze team, said they are giving human creators technical tools to protect themselves from invasive and harmful AI models.

Created in just four months, Glaze was spun off from technology used to disrupt facial recognition systems.

“We acted quickly and efficiently when defending artists from software imitators because we were aware of the severity of the issue,” Zhao explained. “Many individuals were experiencing distress.”

Some large AI companies do have agreements to license training data, but most of the digital images, audio, and text used to shape the way intelligent software thinks has been scraped from the internet without explicit permission.

According to Zhao, Glaze has been downloaded over 1.6 million times since its March launch.

Zhao's team is developing a Glaze upgrade called Nightshade, which strengthens the defense by confusing the AI, for example causing it to see a dog as a cat.
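The poisoning idea can be sketched as a toy in a few lines; this is not Nightshade's actual method (which perturbs images so their features resemble a different concept while captions stay untouched), and `cloak_fn`, the poisoning rate, and the data layout are assumptions for illustration:

```python
# Toy sketch of training-data poisoning -- not Nightshade's actual method.
# A small fraction of scraped (image, caption) pairs carries an image whose
# features have been shifted toward a different concept, so a model trained
# on the scrape learns a corrupted association (e.g. "dog" drifts toward
# "cat"). `cloak_fn` stands in for the hard part: an optimized,
# human-imperceptible feature-space perturbation.
import random

def poison_dataset(samples, cloak_fn, target_concept="cat", rate=0.02):
    """samples: list of (image, caption) pairs; returns a poisoned copy."""
    poisoned = []
    for image, caption in samples:
        if "dog" in caption and random.random() < rate:
            image = cloak_fn(image, target_concept)  # feature-space shift
        poisoned.append((image, caption))
    return poisoned
```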

McClain said Nightshade could change things if enough artists use it and put enough poisoned images into circulation online.

“Nightshade’s research shows that the number of poisoned images needed is less than expected,” she stated.

Several companies have approached Zhao's team about using Nightshade, according to the Chicago academic.

Zhao said the goal is to enable content owners, whether individual artists or large companies, to protect their work.

Viva Voce

A company called Spawning has developed Kudurru, software that detects attempts to harvest large numbers of images from an online platform.

When harvesting is detected, the artist can block access or serve images that do not match the requested content, tainting the pool of data being used to train the AI, said Spawning co-founder Jordan Meyer.
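A rough sketch of that countermeasure, assuming a simple per-client request counter, is shown below; this is not Spawning's actual Kudurru implementation, and the threshold and function names are invented for illustration:

```python
# Hedged sketch of the countermeasure described above -- not Spawning's
# actual Kudurru code. A site counts image requests per client; past an
# assumed threshold it either blocks the client or serves a decoy image
# that does not match the request, degrading any dataset built from the
# scrape.
from collections import Counter

requests_seen = Counter()   # image requests per client address
SCRAPE_THRESHOLD = 500      # assumed cutoff for "bulk" collection

def respond(client_ip: str, requested: bytes, decoy: bytes, block: bool = False):
    """Return an (HTTP status, body) pair for one image request."""
    requests_seen[client_ip] += 1
    if requests_seen[client_ip] <= SCRAPE_THRESHOLD:
        return 200, requested      # normal traffic: serve the real image
    if block:
        return 403, b""            # option 1: deny access outright
    return 200, decoy              # option 2: poison the scraper's data
```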

Over 1,000 websites have been incorporated into the Kudurru network.

Spawning has also launched haveibeentrained.com, a website with an online tool that lets artists find out whether their digitized works have been fed into an AI model and opt out of such use in the future.

As defenses for images mount, researchers at Washington University in Missouri have developed AntiFake software to keep AI from replicating voices.

Zhiyuan Yu, the Ph.D. student leading the project, said AntiFake enriches digital recordings of people speaking by adding noises that are inaudible to humans but prevent a voice from being synthesized.
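The underlying constraint resembles the image case: keep the per-sample change small enough to be inaudible. The sketch below shows only that constraint, not AntiFake's actual method, which optimizes its perturbation against voice-synthesis models; the function name and `epsilon` value are assumptions:

```python
# Illustrative sketch only -- not AntiFake's actual method. It shows the
# "inaudible modification" constraint: each audio sample may change by at
# most `epsilon`. AntiFake instead optimizes its perturbation against
# voice-synthesis models so the cloaked recording resists cloning.
import numpy as np

def perturb_waveform(samples: np.ndarray, epsilon: float = 0.002) -> np.ndarray:
    """samples: float32 audio in [-1.0, 1.0]; returns a perturbed copy."""
    noise = np.random.uniform(-epsilon, epsilon, size=samples.shape)
    return np.clip(samples + noise, -1.0, 1.0).astype(np.float32)
```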

The program aims not only to stop unauthorized AI training but also to prevent the creation of "deepfakes," fabricated audio or video recordings that depict celebrities, politicians, family members, or others doing or saying things they never actually did or said.

Zhiyuan Yu said a well-known podcast contacted the AntiFake team for help in protecting its productions from being hijacked.

The freely available software has so far been applied mostly to recorded human speech, the researcher said, but it could also be used on songs.

Meyer argued that the best outcome would be a world in which all data used for AI is subject to consent and payment, and that the aim is to push developers in that direction.

Source: voanews.com