OpenAI initially introduced its text-to-video AI model, Sora, in February but has remained tight-lipped about its official release timeline.
Recently, however, controversy erupted when a group of artists accused the company of exploiting their work for “unpaid research and publicity,” prompting them to take action.
On Tuesday, artists with early access reportedly leaked a working version of the tool, complete with a functional interface for generating videos.
The leaked interface was posted on Hugging Face, a platform for sharing AI models, where users briefly gained the ability to generate videos resembling Sora’s official demonstrations. OpenAI quickly restricted access after the leak, as TechCrunch first reported.
Message from the leakers:
DEAR CORPORATE AI OVERLORDS
We received access to Sora with the promise to be early testers, red teamers and creative partners. However, we believe instead we are being lured into “art washing” to tell the world that Sora is a useful tool for artists.
ARTISTS ARE NOT YOUR UNPAID R&D
☠️ we are not your: free bug testers, PR puppets, training data, validation tokens ☠️
“We are not against the use of AI technology as a tool for the arts (if we were, we probably wouldn’t have been invited to this program). What we don’t agree with is how this artist program has been rolled out and how the tool is shaping up ahead of a possible public release. We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts.”
Critics argue that OpenAI’s early access program for Sora takes advantage of artists by using their work and input without compensation, a practice they call “art washing”: leveraging artistic credibility to burnish a corporate product. The accusations are aimed at a company valued at $150 billion after recent multi-billion-dollar funding rounds, which the artists say has benefited from the unpaid labor of hundreds of testers providing feedback.
The artists also express frustration over OpenAI’s strict content approval policies for Sora. Reportedly, these rules require that “every output must receive approval from the OpenAI team before being shared,” adding to their concerns about control and exploitation.
When approached by reporters, OpenAI declined to confirm the authenticity of the alleged Sora leak. Instead, the company emphasized that participation in its “research preview” is entirely optional, with no requirement for users to provide feedback or engage with the tool.
In a statement, OpenAI spokesperson Niko Felix explained, “Sora remains in the research preview phase as we work to balance creativity with stringent safety measures for wider use.
“Hundreds of artists in our alpha program have contributed to Sora’s development by shaping features and safeguards. Participation is voluntary, and there is no obligation to provide feedback or use the tool.
“We’ve been thrilled to offer these artists free access and plan to continue supporting them through grants, events, and other initiatives. Our vision is for AI to serve as a powerful creative tool, and we’re dedicated to ensuring Sora is both practical and safe for users.”
Former OpenAI CTO Mira Murati said in a Wall Street Journal interview in March that Sora was slated for release by the end of the year.
However, she emphasized that OpenAI would only launch the tool if they were confident about its potential impacts, particularly concerning global elections or other sensitive issues.
In a recent Reddit AMA, Chief Product Officer Kevin Weil explained the delay, stating that Sora’s release hinges on scaling the computational resources needed to support it effectively.
Additionally, OpenAI is working to address critical concerns such as safety, impersonation risks, and other ethical challenges before making the tool publicly available.