– Explicit, fake AI-generated images sexualizing Taylor Swift have been circulating online, sparking outrage.
– The images have spread on platforms including X (formerly Twitter), Reddit, Facebook, and Instagram.
– Some platforms struggle with detecting and removing the banned content before it becomes widely viewed.
– It is unclear how many non-consensual deepfakes have been generated or how widely they have spread.
– Swift fans have been uploading favorite photos of Swift to bury the harmful images and prevent them from appearing in searches.
– Solving the problem requires more than just requesting removals from social media platforms.
– Swift’s predicament raises awareness of the harm caused by non-consensual deepfake pornography.
– It may inspire regulators to act faster to combat deepfake porn.
– Some lawmakers are already working on legislation to criminalize deepfake porn.
– The UK’s Online Safety Act restricts illegal content, including deepfake pornography, on platforms.
– AI image generators are taking measures to limit NSFW outputs.
– Keeping up with reports of deepfake porn falls on social media platforms’ shoulders, and they are currently unprepared to handle the issue efficiently.
Explicit, fake AI-generated images sexualizing Taylor Swift began circulating online this week, quickly sparking mass outrage that may finally force a mainstream reckoning with harms caused by spreading non-consensual deepfake pornography.
A wide variety of deepfakes targeting Swift began spreading on X, the platform formerly known as Twitter, yesterday.
Ars found that some posts have been removed, while others remain online, as of this writing. One X post was viewed more than 45 million times over approximately 17 hours before it was removed, The Verge reported. Seemingly fueling more spread, X promoted these posts under the trending topic “Taylor Swift AI” in some regions, The Verge reported.
The Verge noted that since these images started spreading, “a deluge of new graphic fakes have since appeared.” According to Fast Company, these harmful images were posted on X but soon spread to other platforms, including Reddit, Facebook, and Instagram. Some platforms, like X, ban sharing of AI-generated images but seem to struggle with detecting banned content before it becomes widely viewed.
Ars’ AI reporter Benj Edwards warned in 2022 that AI image-generation technology was rapidly advancing, making it easy to train an AI model on just a handful of photos before it could be used to create fake but convincing images of that person in infinite quantities. That is seemingly what happened to Swift, and it’s currently unknown how many different non-consensual deepfakes have been generated or how widely those images have spread.
It’s also unknown what consequences have resulted from spreading the images. At least one verified X user had their account suspended after sharing fake images of Swift, The Verge reported, but Ars reviewed posts on X from Swift fans targeting other users who allegedly shared the images and whose accounts remain active. Swift fans have also been uploading countless favorite photos of Swift to bury the harmful images and prevent them from appearing in various X searches. Her fans seem dedicated to reducing the spread however they can, with some posting different addresses, seemingly in attempts to dox an X user they allege is the initial source of the images.
Neither X nor Swift’s team has yet commented on the deepfakes, but it seems clear that solving the problem will require more than just requesting removals from social media platforms. The AI model trained on Swift’s images is likely still out there, probably procured through one of the known websites that specialize in making fine-tuned celebrity AI models. As long as the model exists, anyone with access could crank out as many new images as they wanted, making it hard for even someone with Swift’s resources to make the problem go away for good.
In that way, Swift’s predicament might raise awareness of why creating and sharing non-consensual deepfake pornography is harmful, perhaps moving the culture away from persistent notions that nobody is harmed by non-consensual AI-generated fakes.
Swift’s plight could also inspire regulators to act faster to combat non-consensual deepfake porn. Last year, she inspired a Senate hearing after a Live Nation scandal frustrated her fans, triggering lawmakers’ antitrust concerns about the leading ticket seller, The New York Times reported.
Some lawmakers are already working to combat deepfake porn. Congressman Joe Morelle (D-N.Y.) proposed a law criminalizing deepfake porn earlier this year, after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates. Under that proposed law, anyone sharing deepfake pornography without an individual’s consent would risk fines and up to two years in prison, with damages climbing as high as $150,000 and imprisonment of up to 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.
Elsewhere, the UK’s Online Safety Act restricts illegal content, including deepfake pornography, from being shared on platforms. Companies that fail to moderate such content risk fines of more than $20 million, or 10 percent of their global annual turnover, whichever amount is higher.
The UK law, however, is controversial, because it requires companies to scan private messages for illegal content. That makes it practically impossible for platforms to provide end-to-end encryption, which the American Civil Liberties Union has described as vital for user privacy and security.
As regulators tangle with legal questions and social media users with moral ones, some AI image generators have moved to limit their models from producing NSFW outputs. Some, such as Stability AI, the company behind Stable Diffusion, did this by removing large quantities of sexualized images from the models’ training data. Others, like Microsoft’s Bing image creator, make it easy for users to report NSFW outputs.
But so far, keeping up with reports of deepfake porn seems to fall squarely on social media platforms’ shoulders. Swift’s battle this week shows how unprepared even the biggest platforms currently are to handle blitzes of harmful images seemingly uploaded faster than they can be removed.