Runway, an AI startup focused on creating video-generation technology, introduced an API today that enables developers and businesses to integrate its generative AI models into external platforms, applications, and services.

The Runway API is currently available on a limited basis with a waitlist and offers just one model at the moment — Gen-3 Alpha Turbo, a faster but slightly less powerful version of the company’s main Gen-3 Alpha model.
Two pricing plans are available: Build, aimed at individuals and teams, and Enterprise. Pricing starts at one cent per credit, with each second of generated video costing five credits.
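At those rates, generation costs are straightforward to estimate. Here is a minimal sketch based only on the published numbers (the function name and constants are illustrative, not part of Runway's actual API):

```python
# Rough cost estimate for Runway API video generation, based on the
# published pricing: one cent per credit, five credits per second.
CREDIT_PRICE_USD = 0.01
CREDITS_PER_SECOND = 5

def video_cost_usd(seconds: float) -> float:
    """Approximate generation cost in dollars for a clip of the given length."""
    return seconds * CREDITS_PER_SECOND * CREDIT_PRICE_USD

# A 10-second clip uses 50 credits, or about fifty cents.
print(f"${video_cost_usd(10):.2f}")  # → $0.50
```

In other words, a minute of output works out to roughly $3 at the base rate, before any plan-level differences.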
Runway says that "trusted strategic partners," such as the marketing firm Omnicom, are already using the API. The Runway API also carries unusual disclosure requirements.
According to a blog post from the company, any interfaces utilizing the API must “prominently feature” a banner stating “Powered by Runway,” which links to their website.
This requirement is intended to “[help] users comprehend the technology powering [applications] while ensuring compliance with our usage terms.”
Runway, supported by investors such as Salesforce, Google, and Nvidia, was last valued at $1.5 billion and is encountering significant competition in the video generation market from companies like OpenAI, Google, and Adobe.
OpenAI is anticipated to launch its video-generation model, Sora, in some capacity this fall, while startups like Luma Labs are actively enhancing their technologies.
For instance, in what appears to be a strategically timed move, Luma has just released its video generation API, which does not require a waitlist. This API has additional features compared to Runway’s, including the capability to “control” the virtual camera within AI-generated scenes.
Still, the initial rollout makes Runway one of the first companies to offer a video-generation model through an API.
Although this development could aid the company in its pursuit of profitability—potentially offsetting the substantial expenses associated with training and operating these models—it does not address the ongoing legal issues surrounding both the models themselves and the broader landscape of generative AI technology.
Runway’s video-generating models, similar to those of other companies in the field, were developed by analyzing a large dataset of videos to identify patterns for producing new content.
When it comes to the source of this training data, however, Runway, like many other vendors today, remains tight-lipped, partly out of concern for its competitive edge. Those specifics could expose the company to IP-related legal challenges if it turns out that Runway used copyrighted data without authorization.
Evidence suggests this may be the case; a July report by 404 Media revealed a Runway spreadsheet containing training data links to YouTube channels associated with Netflix, Disney, Rockstar Games, and prominent creators such as Linus Tech Tips and MKBHD.
It remains uncertain whether Runway actually used any of the videos listed in the spreadsheet to train its video models.
In a June interview with reporters, co-founder Anastasis Germanidis mentioned that the company relies on “curated, internal datasets” for model training.
However, if Runway did source those videos, it wouldn’t be alone in potentially bending copyright laws within the AI industry.
Earlier this year, OpenAI’s CTO Mira Murati stopped short of denying that Sora may have been trained using YouTube content. Additionally, reports indicate that Nvidia utilized YouTube videos to develop an internal video-generating model called Cosmos.
Many generative AI companies contend that the principle of fair use offers them legal protection, a stance they are actively promoting in court and public discussions.
Conversely, some companies prefer to adopt a more cautious approach, viewing a commitment to ethical model training as a valuable aspect of their offerings. For instance, Adobe is reportedly compensating artists for clips used in developing its video-generating Firefly models.
Luma’s terms of service state that it will defend and indemnify its API business clients against damages related to IP violation claims.
Similar indemnification policies are provided by other companies, such as OpenAI; however, Runway does not offer this assurance.
Last December, Runway announced plans to collaborate with stock media library Getty to create more “commercially safe” iterations of its products.
Regardless of how the lawsuits regarding the legality of training on copyrighted content are resolved, it’s becoming evident that generative AI video tools pose a significant threat to the film and television industry.
A 2024 study commissioned by the Animation Guild, which represents animators and cartoonists in Hollywood, revealed that 75% of film production companies that have integrated AI have downsized, merged, or eliminated positions as a result.
Furthermore, the study predicts that by 2026, generative AI could disrupt over 100,000 entertainment jobs in the United States.