Microsoft Fine-Tunes AI Generator After Explicit Taylor Swift Pics Float Online
By Mikelle Leow, 30 Jan 2024
Photo 103366809 © Alexandre Paes Leme Durao | Dreamstime.com
Taylor Swift’s next breakup hit needs to be one with artificial intelligence as the villain. After her nonconsensual appearances in sham Le Creuset commercials, in which her face and voice were edited into videos to swindle Swifties, the superstar has once again found herself on the darker side of digital technology, this time as the target of graphic, AI-generated deepfake images.
As discovered by 404 Media, several of these X-rated pictures were created with Microsoft’s Designer AI image generator, prompting the tech giant to identify the loophole and shake it off its platform. The explicit imagery of Swift amassed tens of millions of views.
It seemed users had been exploiting the AI system’s limitations by slightly misspelling names or employing descriptive phrases to bypass Microsoft’s safeguards against the creation of explicit content. For instance, to churn out false photos of Jennifer Aniston, they’d write “Jennifer ‘actor’ Aniston” in the prompt.
To stem the tide, X (formerly known as Twitter), where the bulk of the images were circulating, temporarily disabled searches for the singer’s name. Joe Benarroch, the social network’s head of business operations, later confirmed to the Wall Street Journal that the search function had since been restored, but reaffirmed a promise to remove such content and penalize those responsible.
X has decided to deal with the Taylor Swift deepfake porn by blocking all searches for Taylor Swift. 🤡 pic.twitter.com/iQExom8eLt— Jameson Lopp (@lopp) January 28, 2024
Swift’s fans rallied to her defense, launching a campaign under the hashtag #ProtectTaylorSwift to populate X with positive content and report the deepfake accounts.
I don’t understand why society is so degrading… it’s not “she’s a white woman idc” or “ooh the swifties aren’t gonna like this”. No. No one in their right mind should like this or anything of the sort. These Taylor swift ai pics are disgusting. #TaylorSwiftAI #ProtectTaylorSwift pic.twitter.com/3IWFWcRrFz— Swiftie Reverie (@swiftiereverie) January 26, 2024
The issue extended beyond X, with Reality Defender, a watchdog specializing in identifying deepfake content, noting a surge in nonconsensual pornographic material featuring Swift across several social media outlets, including Facebook and Telegram.
Microsoft, recognizing its AI’s role in creating these pictures to burn, acted to close the loopholes that let users generate explicit content by circumventing simple name filters. Sarah Bird, responsible AI engineering lead at Microsoft, emphasized the company’s commitment to a safe and respectful user experience.
The company’s Code of Conduct prohibits the use of its tools for generating adult or nonconsensual intimate content. To enforce this policy, Microsoft has deployed large teams dedicated to the development and maintenance of safety systems and guardrails. These teams work to refine content filtering, operational monitoring, and abuse detection mechanisms to ensure a comfortable user environment.
Meta has also reportedly worked to wipe out the offending pictures from its platform.
to all the people who participated in creating deepfake ai images of Taylor Swift pic.twitter.com/KiH5xS9kwm— moh (@tswiftvoodoo) January 25, 2024
Bad blood is certainly brewing between deepfake imaging, with its potential for abuse, and women in particular. The ease of creating and disseminating such images poses profound ethical and safety questions for both the creators of AI tools and the platforms that host their output. As big tech firms like Microsoft and X take steps to address these issues, the incident with the pop star highlights the ongoing need for vigilance in the digital age.