Runway launches text-to-video AI model to generate art

The use of AI image generators dates back to the early 1990s, when artists began using AI algorithms to generate art, music, and visual effects. In 2022, the launch of DALL-E 2, a neural network-based image generation model developed by OpenAI, brought AI image generators into mainstream use.

The precision, realism, and controllability of AI systems for image and video synthesis are improving rapidly. One of the most popular AI image generators is Stable Diffusion, a deep-learning text-to-image model that lets anyone create striking art in seconds from a text description.

Today, Runway, one of the startups behind the Stable Diffusion AI image generator, announced the release of an AI model known as Gen-2 that takes a text description such as "turtles flying in the sky" and generates three seconds of matching video footage.

According to its website, Gen-2 is a multi-modal AI system that can generate novel videos from text, images, or existing video clips.

Due to safety and business concerns, Runway has decided not to release the model publicly at this time, nor will it be open-sourced like Stable Diffusion. The text-to-video model will be accessible only through a waitlist on the Runway website and via Discord.

The use of AI to generate videos from text inputs is not a new concept; just last year, Meta Platforms and Google both published research papers on text-to-video AI models, though neither made its model publicly available. According to Runway's co-founder and CEO Cristobal Valenzuela, Runway's text-to-video AI model will be accessible to the general public.