We’re beginning to see real-world examples of AI-generated content going mainstream, thanks in part to heavyweight companies like Adobe and OpenAI going all out to bring generative AI technology to the masses.
A year after its release, the latest iteration of Adobe Firefly is starting to look like a real contender as a tool for AI content creation, not least because it sits within a rapidly evolving ecosystem in Adobe Creative Cloud, which has recently begun to introduce some paradigm-shifting generative AI features. Among the most impressive are AI object removal and editing for video and generative clip extension in Adobe Premiere Pro, along with smarter selection tools in Photoshop and Lightroom.
Now that Adobe is all in on generative AI, early predictions about AI’s promise are turning into practical realities for the creative professionals who use its tools.
I recently came across the work of filmmaker Paul Trillo, who is credited with creating the first music video using OpenAI’s text-to-video tech, Sora.
The video for the song The Hardest Part by indie artist Washed Out (see above) is a fascinating interpretation of the theme of being with someone and then moving on from that relationship. To depict this, Trillo deploys a technique he calls a ‘high-speed infinite zoom’, in which the viewer appears to zoom continuously into and through the scenes of these people’s lives. It’s a unique, thought-provoking aesthetic that would have been extremely difficult to achieve through conventional methods.
Above: In this pop video, Trillo uses AI video generator Sora to produce a high-speed infinite zoom effect
Trillo has been at the cutting edge of film technique for many years and has built a reputation for pushing content creation technology to its limits. Beyond the Washed Out video, he recently demonstrated some powerful practical uses for other AI tools, creating VFX scenes for his short film project Notes To My Future Self.
In the breakdown video below, Paul describes how “bad compositing comes from bad green screen lighting in which no one knows what the final background is going to look like”. He overcame that problem by generating backgrounds with Stable Diffusion, extending them in Photoshop, and then upscaling them to 8K resolution with Topaz Labs, Magnific and Krea.
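To make that first step concrete, here's a minimal sketch of background generation using Hugging Face's open-source diffusers library, a common way to run Stable Diffusion from Python. This illustrates the general technique rather than Trillo's actual setup: the checkpoint, prompt and resolution are placeholder assumptions, and the Photoshop extension and 8K upscaling passes happen in separate tools afterwards.

```python
# Minimal background-plate generation with Stable Diffusion via the
# open-source diffusers library. Illustrative only; not Trillo's pipeline.
import torch
from diffusers import StableDiffusionXLPipeline

# Load a Stable Diffusion XL checkpoint (placeholder choice of model).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Describe the background plate you'd otherwise have to light and shoot.
prompt = (
    "wide empty suburban street at dusk, warm porch lights, "
    "overcast sky, cinematic lighting, photorealistic"
)

# Generate at a native SDXL resolution; outpainting in Photoshop and
# dedicated upscalers (Topaz, Magnific, Krea) take it to 8K afterwards.
image = pipe(prompt, width=1344, height=768).images[0]
image.save("background_plate.png")
```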
He also used Runway Gen-2, a multimodal AI video generation system, to create moving elements in each scene, such as swaying branches or dynamic lighting. These details helped sell the VFX illusion that each scene was grounded in the real world, despite being entirely AI-generated.
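Gen-2 itself is driven through Runway's web app, but the company has since released a developer API for its newer Gen-3 models, which gives a sense of how an image-to-video pass like this can be scripted. Below is a rough sketch assuming the official runwayml Python SDK; the model name, frame URL, motion prompt and settings are placeholders, not Trillo's actual parameters.

```python
# Sketch of an image-to-video pass using Runway's developer API via the
# official runwayml SDK (assumes RUNWAYML_API_SECRET is set in the env).
# Note: this targets the newer Gen-3 models; Gen-2 was web-app only.
import time
from runwayml import RunwayML

client = RunwayML()

# Kick off a generation from a still frame plus a motion prompt.
task = client.image_to_video.create(
    model="gen3a_turbo",
    prompt_image="https://example.com/background_plate.png",  # placeholder
    prompt_text="tree branches swaying gently in a light breeze",
    ratio="1280:768",
    duration=5,
)

# The API is asynchronous, so poll until the task resolves.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

if task.status == "SUCCEEDED":
    print(task.output)  # URL(s) of the generated clip
```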
What’s clear from Paul’s pioneering work is that the utility of these tools is undeniable. As a creative aid, AI is already helping people bring to life ideas that would have been impractical to shoot without an enormous budget, and without gambling precious time on something that may not even work.
AI helps speed up the trial-and-error aspect of creation and reduces the cost of failure when it comes to exploring new ideas. That’s important because failing is a key component of success.
Take a look at the best AI image generators. You might also like to read more of Jon Devo's Scanning Ahead blogs.