Sora Deepfakes Turn OpenAI's App into a Scammer Playground

A viral clip that wasn’t real

I was scrolling through my feed the other night when I stumbled on a short clip of a friend speaking fluent Japanese at an airport. The only problem: my friend doesn't know a single word of Japanese. Then it hit me: it wasn't him at all but an AI-generated video, and it looked suspiciously like something made with Sora.

What Sora does and why it worries experts

Sora is OpenAI's new video app. It generates eerily realistic footage, and, alarmingly, the watermark that is supposed to identify its output as AI-made turns out to be easy to strip. Its cameo feature lets users upload their likeness to appear in generated videos, which on the surface feels playful and creative. In practice it opens the door to impersonation, fake statements, and staged scenes that look authentic.

Real world misuse is already appearing

Reports show people discovering videos of themselves saying or doing things they never did, sometimes shared publicly and sometimes in humiliating contexts. Scammers and other bad actors could use the same capabilities to fabricate endorsements, fake confessions, or blackmail material. Observers have also found violent and racist imagery produced with the app, a sign that its content filters are not catching everything.

OpenAI’s response and its limits

OpenAI says it is introducing controls so people can manage how their digital likeness appears. Users can restrict their cameo, for example by blocking its use in political or explicit content, and the company has rolled out additional identity controls. These are steps forward, but critics argue the guardrails are inconsistent and often reactive rather than preventive.

The broader problem of normalizing synthetic media

This is not just about one app or one company. The bigger issue is how quickly synthetic media is being normalized and how thin our defenses remain. As AI video tools get better, distinguishing real from fake will become harder, eroding trust in video evidence and public discourse.

What needs to be done next

Banning these tools is unlikely to be effective or desirable. Instead we need stronger detection technology, enforceable transparency rules, and wider public education about the limits of media trust. Policymakers and platforms should prioritize standards that make synthetic content traceable and accountable. For everyday users, a measure of skepticism when hitting play will remain essential.
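To make "traceable" a little more concrete: provenance standards such as C2PA embed signed Content Credentials inside the media file itself. The sketch below is only a crude presence check, not verification; it assumes the standard "c2pa" label survives in the downloaded file, which a stripped or re-encoded video will not have, and real validation requires a proper C2PA implementation such as the open-source c2patool.

```python
# Crude provenance heuristic: scan a downloaded media file for the "c2pa"
# label used inside the JUMBF boxes that carry C2PA Content Credentials.
# Finding the marker proves nothing about authenticity, and its absence
# proves nothing either -- this only shows what "traceable" metadata
# looks like at the file level. Full validation needs a real C2PA tool.
import sys
from pathlib import Path

C2PA_MARKER = b"c2pa"


def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    """Stream the file and report whether the C2PA label appears anywhere."""
    tail = b""
    with Path(path).open("rb") as f:
        while chunk := f.read(chunk_size):
            if C2PA_MARKER in tail + chunk:
                return True
            # Keep a small overlap so a marker split across chunks is not missed.
            tail = chunk[-(len(C2PA_MARKER) - 1):]
    return False


if __name__ == "__main__":
    for media_file in sys.argv[1:]:
        status = "C2PA marker present" if has_c2pa_marker(media_file) else "no C2PA marker found"
        print(f"{media_file}: {status}")
```

The point of the sketch is not that viewers should run scripts on every clip, but that platforms could do checks like this automatically and surface the result next to the play button.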