Runway’s Gen-3 Alpha AI video model now available – but there’s a catch

Jul 1, 2024 | Technology


RunwayML, one of the earliest startups to take up the task of training an AI for video generation, today announced that its latest frontier model, Gen-3 Alpha, has become generally available.

First announced a couple of weeks ago, Gen-3 Alpha lets users produce hyper-realistic AI videos from text, image, or video prompts. With widespread access rolling out today, anyone signed up on the RunwayML platform can use the model’s high-fidelity, controllable generations to power a range of creative use cases, including advertising — much like what OpenAI has teased with Sora.

However, there’s a caveat: unlike the Gen-1 and Gen-2 models, Gen-3 Alpha is not free. Users must upgrade to one of the company’s paid plans, with prices starting at $12 per editor per month, billed yearly.

What to expect from Gen-3 Alpha?

After launching the Gen-1 and Gen-2 models just a few months apart last year, RunwayML went quiet on the model front and focused on feature updates for its platform. During this window, several rivals showcased their offerings, including Stability AI, OpenAI, Pika and, most recently, Luma Labs.


As the AI video wars picked up pace, the startup reemerged last month with Gen-3 Alpha. The model, trained on videos and images annotated with highly descriptive captions, lets users produce hyper-realistic video clips featuring imaginative transitions, precise key-framing of elements and expressive human characters dis …
