Social media platform X (formerly known as Twitter) has temporarily blocked Taylor Swift’s name from being searched after explicit, fake AI-generated images of the singer went viral last week.
At the time of publishing (January 29), searching for Swift’s name and “AI Taylor Swift” on X will lead users to a page that reads: “Something went wrong. Try reloading.” Additionally, a secondary text at the bottom of the page reads: “Something went wrong – but don’t fret. It’s not your fault.”
The block is a temporary measure, X’s head of business operations Joe Benarroch said in a statement, per Variety: “This is a temporary action and done with an abundance of caution as we prioritize safety on this issue.”
The fake, AI-generated images of Swift reportedly circulated on X and the messaging platform Telegram last week. X released a statement, per the BBC, saying: “We have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them.”
The White House was also made aware of the situation, calling the incident “alarming”. White House press secretary Karine Jean-Pierre said via a statement, per the BBC: “We know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets.”
Additionally, she suggested that there should be legislation to handle the misuse of AI on social media, a call backed by US Representative Joe Morelle. Morelle has been involved with the proposed Preventing Deepfakes of Intimate Images Act, which would make it illegal to share deepfake pornography without consent.
Taylor Swift has not commented on the images, but the Daily Mail has reported that her team is “considering legal action” against the site which published the AI-generated images.