Tom’s Guide
Technology
Lloyd Coombes

AI may be able to make real-time video in a year — this is huge

Luma Labs' Dream Machine.

The generative AI arms race is well underway, and Luma Labs remains one of the biggest names around thanks to its Dream Machine video generation model.

We've been relatively impressed with the results so far, but Jiaming Song, Chief Scientist at Luma Labs, has predicted where it could go next — and it's going to change filmmaking forever.

Speaking to Anjney Midha in an interview shared to X, Song explains that real-time video generation is closer than ever, and that Luma Labs' Dream Machine will be able to shift perspective while maintaining consistency between shots.

This shifting perspective isn't possible with the current "one-shot" state of AI video generation. It will allow for more control over how videos turn out, making the tools more useful in traditional filmmaking.

The challenges of AI video generation 

As Midha explains, AI video generation needs to show that it's "actually doing more than generating cool frames". Pressed for an example, Song notes that traditional models run like an "image animator".

In the first example, Song shares a prompt that requires Luma Labs to generate a video of a small, animated character.

While we've seen this technology add additional perspective and animation before, here the video features a cut and transition as the camera shifts to an entirely different viewpoint while maintaining knowledge of the subject and its surroundings.

This was one of the key features of OpenAI Sora that excited people when it was first unveiled to the world in February and comes in part from longer generations.

Another image-to-video prompt sees a young girl staring at a huge eye on the wall (Song says "it might look a little bit disturbing in the first frame"). The image given shows the eye looking at the girl, but Dream Machine is able to generate a stunned look on her face while keeping her blue dress and short hair consistent from shot to shot.

Song suggests this "cause and effect" shows that Luma Labs' video model is reaching a new level of understanding, where it is now able to consider human psychology in that situation.
