Sora's Moment: 3 Unanswered Questions That Could Make or Break OpenAI's AI-Video App
What Sora is
OpenAI recently launched Sora, a TikTok-style app that serves an endless feed of AI-generated short videos, each up to 10 seconds long. The app lets users create a hyperrealistic “cameo” of themselves that reproduces their appearance and voice, and, depending on permission settings, lets them drop other people’s cameos into their videos. Despite criticism from some corners of the AI community, Sora quickly rose to the top of the Apple US App Store.
What kinds of videos are taking off
Early popular content on Sora skews toward quick, surreal, and recognizably viral formats: bodycam-style clips of police pulling over pets, deepfake sketches featuring trademarked characters like SpongeBob or Scooby Doo, parodic clips of historical figures saying modern things, and recurring riffs on religious themes. That everything is explicitly AI-generated appears to be part of the appeal for many users, who find comfort in not having to guess whether something is real.
Can it last?
There are two competing narratives about Sora’s longevity. One sees it as a fad: a novelty for showing off what cutting-edge models can produce, a curiosity that will fade once the initial fascination wears off. The other, which OpenAI seems to be betting on, argues that Sora represents a deeper shift in attention: users may come to prefer content that leans fully into the fantastical, unbounded creativity that ordinary video platforms can’t reliably produce.
How Sora ages will depend on a handful of product decisions: how ads are integrated, the boundaries set for copyrighted material, and the recommendation algorithms that choose what users see. These choices will shape whether Sora remains a novelty or evolves into a daily destination.
Can OpenAI afford it?
Generating video is one of the most compute- and energy-intensive AI tasks. Compared with generating text or images, producing even a short video requires far more resources, and Sora currently allows free, unlimited video creation. OpenAI has invested in data centers and power infrastructure, and its leadership has acknowledged the need to monetize video generation, but specifics are scarce. Potential avenues include personalized ads and in-app purchases.
The environmental footprint also matters. While Sam Altman has called the emissions of a single ChatGPT query negligibly small, the emissions cost of a 10-second AI-generated video is not yet public. If Sora scales widely, researchers and regulators will likely demand transparent accounting of its energy and emissions impact.
How many lawsuits are coming?
Sora is already rife with legal flashpoints: trademarked and copyrighted characters, deepfakes of deceased public figures, and videos that use copyrighted music. Reports indicate OpenAI has told rights holders they must opt out if they don’t want their material included—an approach that diverges from how platforms traditionally handle copyrighted content.
OpenAI says it will give rights holders more “granular control” over how their characters are used, but admits some problematic generations may slip through. The cameo system raises its own questions: people can restrict how their cameo is used, but the limits of those restrictions, and how reliably they can be enforced, remain unclear. The company has begun adding options to prevent a cameo from appearing in political content or saying certain words, but that enforcement is still unproven.
What Sora will ultimately test
Sora hasn’t been released at full scale yet; access is still invite-only. When it opens up, the app will be an important experiment in whether AI-generated video can be optimized for endless engagement to the point of competing with, or even displacing, “real” content. Beyond technical performance, Sora tests social tolerance for simulated reality: how much of our attention and trust will we trade for an infinite scroll of synthetic, fantastical videos?