Recently, deeply troubling AI-generated content has gone viral featuring the renowned theoretical physicist Stephen Hawking, who died in 2018. Videos created using OpenAI’s new text-to-video generator, Sora 2, show the wheelchair-using scientist in violent scenarios.
Among the most disturbing clips circulating online is footage depicting Hawking in a UFC-style combat situation, where an announcer’s voice declares “Hawking’s in trouble” as the physicist is shown being knocked from his wheelchair.
The videos extend beyond combat sports simulations. Other clips show Hawking’s wheelchair being delivered by forklift into a wrestling ring, where he faces immediate attack from wrestlers. In one such video, an announcer exclaims, “This shouldn’t even be legal!”
Additional AI-generated videos show the physicist in other violent scenarios, including encounters with wild animals and dangerous situations involving his mobility equipment. These depictions are particularly troubling given that Hawking had lived with amyotrophic lateral sclerosis (ALS), a motor neuron disease, since the early 1960s; the condition eventually led to his death.
The phenomenon highlights a bitter irony: artificial intelligence is being weaponized to create disturbing content featuring a man who was among AI’s most prominent critics during his lifetime. Hawking repeatedly warned about the potential dangers of advanced AI systems, making these videos especially tone-deaf.
OpenAI’s Sora 2 platform operates with a TikTok-style feed where users can share their AI-generated creations. While the company’s safety documentation states it will “take measures to block depictions of public figures,” deceased celebrities appear to fall into a different category. In a statement to PCMag, OpenAI indicated it would “allow the generation of historical figures,” suggesting dead public figures receive different treatment under their policies.
Users have generated numerous clips featuring other deceased celebrities in questionable scenarios, as well as clips depicting copyrighted characters in apparent violation of intellectual property rights. The platform’s guardrails appear insufficient to prevent the creation of non-consensual deepfakes and exploitative content.
OpenAI CEO Sam Altman has promised that rightsholders will receive “more granular control over generation of characters, similar to the opt-in model for likeness but with additional controls.” However, this announcement came only after widespread criticism of the platform’s content policies.
The company introduced a “cameos” feature that, in theory, gives living individuals control over their digital likeness through an opt-in system: users can record themselves to enable others to place them in AI-generated scenarios. Yet evidence suggests these protective measures are failing to prevent unauthorized use of people’s likenesses.