The AI backlash begins: artists could protect their work against plagiarism with this powerful tool

Key Takeaways:

– Researchers at the University of Chicago have created a tool called Nightshade to help online artists counter AI companies.
– Nightshade inserts poisonous pixels into digital art, which manipulate generative AIs’ interpretation of the art.
– The software exploits a security vulnerability in AI models like Stable Diffusion, which scrape internet images for training data.
– Nightshade can make AIs perceive images incorrectly, such as seeing a dog as a cat or a car as a cow.
– The tool also affects tangentially related ideas and art styles, making AIs confused about concepts and connections.
– Removing the poisoned pixels is difficult: developers would have to find and delete each corrupted sample, a daunting task given the volume of training data.
– Nightshade is still in the early stages and has been submitted for peer review.
– The team plans to implement and release Nightshade as an optional feature of Glaze, their existing tool.
– The team also hopes to make Nightshade open source for others to use and create their own versions.
– There are no current plans to develop Nightshade for video and literature, but the team may consider it in the future.
– Initial reactions to Nightshade are positive, with some hoping it could push AI developers to respect artists’ rights and even pay royalties.

TechRadar:

A team of researchers at the University of Chicago has created a tool aimed at helping online artists “fight back against AI companies” by inserting, in essence, poison pills into their original work.

Called Nightshade, after the family of toxic plants, the software is said to introduce poisonous pixels into digital art that mess with the way generative AIs interpret it. Models like Stable Diffusion work by scouring the internet and picking up as many images as they can to use as training data. What Nightshade does is exploit this “security vulnerability”. As explained by the MIT Technology Review, these “poisoned data samples can manipulate models into learning” the wrong thing. For example, a model could come to see a picture of a dog as a cat, or a car as a cow.
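To make the data-poisoning idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Nightshade's actual technique (Nightshade computes subtle pixel-level perturbations, while this toy example simply mislabels a fraction of "dog" samples as "cat"), but it shows the underlying principle: a small amount of corrupted training data can skew what a model learns about a concept. All of the data, labels, and numbers below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data-poisoning demo (NOT Nightshade's method): two synthetic "concepts",
# dog-like and cat-like feature vectors, plus a classifier trained on them.
rng = np.random.default_rng(0)
dogs = rng.normal(loc=0.0, scale=1.0, size=(300, 8))   # stand-in "dog" samples
cats = rng.normal(loc=3.0, scale=1.0, size=(300, 8))   # stand-in "cat" samples
X = np.vstack([dogs, cats])
y = np.array([0] * 300 + [1] * 300)                     # 0 = dog, 1 = cat

def dogs_misread_as_cats(poison_fraction):
    """Train on a partially poisoned label set, then probe fresh dog-like inputs."""
    y_poisoned = y.copy()
    n_poison = int(poison_fraction * 300)
    y_poisoned[:n_poison] = 1            # quietly relabel some dog samples as "cat"
    model = LogisticRegression().fit(X, y_poisoned)
    probes = rng.normal(loc=0.0, scale=1.0, size=(200, 8))  # new dog-like inputs
    return (model.predict(probes) == 1).mean()

for fraction in (0.0, 0.2, 0.5):
    rate = dogs_misread_as_cats(fraction)
    print(f"poisoned fraction: {fraction:.0%} -> dog-like inputs read as cats: {rate:.0%}")
```

In this toy setup, the more dog samples are relabelled, the more often the classifier calls clean dog-like inputs cats. Nightshade aims for a similar drift in text-to-image models, but through imperceptibly perturbed pixels rather than flipped labels.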

Poison tactics

As part of the testing phase, the team fed Stable Diffusion infected content and “then prompted it to create images of dogs”. After being given 50 poisoned samples, the AI generated pictures of misshapen dogs with six legs. After 100, the output began to resemble a cat. Once it was given 300, the dogs became full-fledged cats. Below, you’ll see the other trials.

(Image credit: University of Chicago/MIT Technology Review)

The report goes on to say Nightshade also affects “tangentially related” ideas because generative AIs are good “at making connections between words”. Messing with the word “dog” jumbles similar concepts like puppy, husky, or wolf. This extends to art styles as well. 

Nightshade's tangentially related samples

(Image credit: University of Chicago/MIT Technology Review)

It is possible for AI companies to remove the toxic pixels. However, as the MIT post points out, it is “very difficult to remove them”. Developers would have to “find and delete each corrupted sample.” To give you an idea of how tough this would be, a 1080p image has over two million pixels. If that weren’t difficult enough, these models “are trained on billions of data samples.” So imagine looking through a sea of pixels to find the handful messing with the AI engine.


AI Eclipse TLDR:

Researchers at the University of Chicago have developed a tool called Nightshade that aims to help online artists protect their work from AI companies. Nightshade introduces poisonous pixels into digital art, which disrupt the way generative AIs interpret the images. These “poisoned data samples” can manipulate the AI models into learning incorrect information. For example, a picture of a dog could be interpreted as a cat or a car as a cow. The tool, which is still in the early stages, has been submitted for peer review. The researchers plan to implement and release Nightshade for public use in the future, and hope to make it open source. They also have no current plans to extend the tool to video or literature. Initial reactions to Nightshade have been positive, with some suggesting that it could lead to AI developers respecting artists’ rights more and even paying royalties.