UFC Fans Target Amanda Nunes After Elon Musk’s Grok Safeguards Fail to Block Adult Content

Two weeks ago, UFC legend Amanda Nunes shared what appeared to be a routine training camp photo on Instagram. The image showed her team preparing for her upcoming UFC 324 bout against Kayla Harrison and was captioned “Pra cima delas 🚀,” Portuguese for roughly “Let’s get them.” Within days, a different version of that image began circulating widely across social media, despite never being posted by Nunes.

The altered image showed Nunes and her training partners wearing pink bathing suits. It spread rapidly across Facebook, Instagram and Reddit, drawing thousands of reactions from MMA fans. Comment sections filled with surprise, jokes and speculation, with most users seemingly unaware that the image was artificial.

The image was generated using Grok, the AI image tool developed by Elon Musk’s company xAI. Grok has recently drawn attention for having fewer content restrictions than competing systems such as ChatGPT or Google’s Gemini. Users have found that it can generate adult images involving real people, something most major AI platforms block.

In this case, someone took a legitimate training photo of Nunes and used Grok to digitally alter the clothing of everyone in the frame. The final result was realistic enough that many casual viewers assumed it was authentic and shared it without hesitation.

What stands out is not only that the image was created, but how easily it spread on platforms where awareness of AI manipulation remains limited. While the altered image circulated heavily on Facebook, Instagram and Reddit, it appeared far less frequently on X, formerly Twitter. This suggests that users on the platform most closely associated with Grok may be more familiar with its capabilities and quicker to question images like this one.

The UFC fan base, like many sports communities, consumes content quickly and across multiple platforms. Training photos, behind-the-scenes footage and camp updates are routine parts of coverage. When an image appears to feature a well-known MMA star and resembles familiar content, most viewers default to assuming it is real.

As one fan put it: “I literally thought this was real until someone pointed it out in the comments. Why would anyone even make this?”

The answer largely comes down to ease. Grok’s minimal safeguards make it simple to generate images of real people, and the quality of the output is now high enough that many viewers cannot immediately identify manipulation. The technology no longer requires technical skill, only intent.

Nunes is not the first public figure to be targeted with AI generated adult imagery, and she will not be the last. What her case illustrates is how quickly synthetic content can penetrate niche communities where users are not primed to question authenticity. MMA fans are focused on bouts, camps and matchups, not AI policy debates or detection tools.

Musk has defended a more liberal approach to Grok as a stance against heavy-handed moderation. However, when the outcome is the mass circulation of fake adult images of athletes, teachers or private individuals, the consequences move beyond abstract arguments about free expression and into measurable harm.