Elon Musk's AI video generator has been accused of deliberately creating sexually explicit clips of Taylor Swift without prompting.
Elon Musk’s AI video tool, Grok Imagine, developed by his firm xAI, has come under fire following allegations that it generated explicit, sexually suggestive video clips of Taylor Swift, without being prompted to do so.
According to the online tech website The Verge, the moment a journalist selected the “spicy” mode, the system produced what were described as "fully uncensored topless videos" of the singer, despite no request for nudity.
Prof Clare McGlynn, a legal expert who has contributed to drafting legislation to outlaw pornographic deepfakes, described the tool’s behaviour as “misogyny by design”, asserting that platforms like X could have prevented the creation of such content, but deliberately chose not to.
Grok Imagine requires only a date of birth to access the feature, with no further age verification.
Jess Weatherbed, the journalist at The Verge who first tested the feature, said the experience was startling: she typed only "Taylor Swift celebrating Coachella with the boys" and clicked "spicy" before being met with explicit content. She emphasised that she never asked the AI to remove the singer's clothes.
Despite xAI's acceptable use policy explicitly banning the depiction of people "in a pornographic manner", Grok's output clearly undermines that rule, generating harmful likenesses of well-known figures, including other celebrities besides Swift.
The tool's release has sparked intense criticism over its weak moderation, unenforced guidelines, and potential for widespread abuse, especially given Musk's prominence and the rapid uptake of the technology.
The incident echoes a similar scandal in January 2024, when sexually explicit deepfake imagery of Taylor Swift circulated widely online, prompting public outrage and calls for regulatory action.
IOL Lifestyle