Artificial intelligence art platform Leonardo has launched a new image-to-video tool called Motion. Built on top of Stability AI's Stable Video Diffusion, the model can turn a generated image into a short video clip with impressive results.
The generative AI startup, which also has an app that lets you turn sketches into artwork, raised $31 million last month to build out its services for consumers and enterprise users.
The AI animation tool works regardless of the original image's style and produces between three and four seconds of footage. It doesn't currently let you extend the duration in the way Pika 1.0 and Runway do.
How does it work?
Announcing the latest update in a post on X on December 28, 2023, Leonardo wrote: "We've just updated Motion for even better consistency and fidelity. Available now for all users. More updates soon."
Using Motion is as simple as clicking a button on any image in Leonardo, in much the same way you would remix or upscale an image: click it and the tool generates a video.
At the moment the only control you have over the output is the amount of motion, but I've been impressed with how well it interprets the image and creates the animation.
You can also keep any video output private or publish it to the public feed so other Leonardo users can find it. Motion is available to all users, produces an MP4 file, and each generation costs 25 tokens.
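The web interface is a one-click flow, but Leonardo also offers a developer API for those who script their pipelines. The sketch below is illustrative only: the endpoint path, field names and motion-strength range are my assumptions based on Leonardo's REST conventions, so check the official API documentation before relying on them.

import requests

API_KEY = "YOUR_LEONARDO_API_KEY"  # assumption: an API key from your Leonardo account
BASE_URL = "https://cloud.leonardo.ai/api/rest/v1"

# Hypothetical request: animate a previously generated image.
# Endpoint and field names are assumptions -- verify against Leonardo's docs.
response = requests.post(
    f"{BASE_URL}/generations-motion-svd",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "imageId": "your-image-id",  # ID of an image generated on Leonardo
        "motionStrength": 5,         # the "amount of motion" dial
        "isPublic": False,           # keep the clip off the public feed
    },
)
response.raise_for_status()
print(response.json())  # should include the ID of the queued motion generation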
What is the output like?
Leonardo Motion is an adapted version of Stable Video Diffusion, built by Stability AI, but unlike the raw implementation it applies some of the fine-tuning deployed on other Leonardo models.
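Leonardo hasn't published the details of its fine-tune, but the open Stable Video Diffusion checkpoint it builds on can be run locally with Hugging Face's diffusers library. Here is a minimal sketch following the diffusers documentation; the motion_bucket_id parameter is the open model's rough equivalent of Leonardo's motion-amount control, and 25 frames at 7fps works out to the same three-to-four seconds of footage.

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the open image-to-video checkpoint that Motion builds on.
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# SVD expects a 1024x576 conditioning image.
image = load_image("starting_image.png").resize((1024, 576))

# motion_bucket_id controls how much movement the clip contains;
# higher values mean more motion.
frames = pipe(
    image,
    decode_chunk_size=8,
    motion_bucket_id=127,
    noise_aug_strength=0.02,
    generator=torch.manual_seed(42),
).frames[0]

# 25 frames at 7fps is roughly 3.5 seconds of footage.
export_to_video(frames, "generated.mp4", fps=7)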
The output can be hit or miss in terms of the quality of the motion, but when it works it is incredibly impressive. People have shared clips of lifelike human movement, light reflecting off glass and even fireworks exploding in the sky.
The actual look of the output depends on the quality of the input: the higher the resolution of the starting image, the better the output video looks.
Other new features
Leonardo.ai is largely built on top of the various Stable Diffusion models, adapted with fine-tuned features and improved training data. Over time the company is integrating other Stability AI tools as they become available.
Motion is built on Stable Video Diffusion, and another new feature, Realtime Gen, builds on SDXL Turbo, which I reviewed earlier this month. It generates AI images as you type and is a good way to test out prompt ideas without waiting for a full image to generate.
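Leonardo's Realtime Gen implementation is its own, but the speed comes from SDXL Turbo's single-step sampling, which you can try yourself with the open checkpoint via diffusers. A minimal sketch, per the diffusers documentation; Turbo is distilled to produce an image in one denoising step with guidance disabled.

import torch
from diffusers import AutoPipelineForText2Image

# Load the open SDXL Turbo checkpoint that Realtime Gen builds on.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# One denoising step, no classifier-free guidance: fast enough to
# regenerate the preview as the prompt changes.
image = pipe(
    prompt="a lighthouse at dusk, oil painting",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("turbo_preview.png")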
You can use Realtime Gen to test an idea, use the creative upscale feature to turn it into a higher-resolution image, and then run it through Motion to create a short video.
Overall, Motion is a good addition to the growing lineup of AI video tools. It isn't as feature-rich as the more established models, but it does a good job of animating existing images and is likely to improve over time as new versions of Stable Video Diffusion are released.