Gen-1: AI Model That Generates New Videos from Existing Ones
Released by the research minds behind Stable Diffusion
February 9, 2023
At a Glance
- Co-creator of Stable Diffusion unveils new video-to-video AI model.
- Gen-1 can generate new videos based on existing ones, altering them to match a desired prompt.
- Training on both images and videos enables Gen-1 to easily create new content.
Runway ML, the research lab that co-developed text-to-image AI model Stable Diffusion, has unveiled a new video-to-video model: Gen-1, which can generate new videos from existing ones.
Gen-1 is a content-guided video diffusion model: it edits videos according to visual or textual descriptions of the desired output.
For example, users can upload a video of a dog with white fur, type the text prompt ‘a dog with black spots on white fur,’ and the model will generate a new version of the existing video with the desired change.
Runway claims its new model is akin to “filming something new, without filming anything at all. No lights. No cameras. All action.”
Runway touts its latest generative model as being able to generate videos while retaining the original’s quality and flexibility.
According to Runway, Gen-1 is “able to realistically and consistently apply the composition and style of an image or text prompt to the target video.”
The video-to-video approach was achieved by jointly training the model on both images and videos. Such training data enables Gen-1 to edit entirely at inference time without additional per-video training or pre-processing as it uses example images as a guide.
According to Runway, use cases for Gen-1 include customization, rendering and masking.
Currently, only a handful of invited users have access to Gen-1, although Runway is set to launch the model publicly in a few weeks. Users wanting access to Gen-1 must join a waitlist.
“Runway Research is dedicated to building the multimodal AI systems that will enable new forms of creativity. Gen-1 represents yet another of our pivotal steps forward in this mission,” the Stable Diffusion makers contend.
A paper fully outlining the model is available via arXiv.
Generative AI for videos is nothing new. Last September, while the world was beginning its fascination with text-to-image AI models, researchers from Meta unveiled Make-A-Video, an AI system capable of generating videos from text prompts. Make-A-Video can also create videos from images or take existing videos and create similar new ones.