Google's researchers have been working overtime recently, publishing a flurry of new models and ideas. The latest is a way to take a still image and turn it into a controllable avatar, coming off the back of a game-playing AI agent.
VLOGGER isn't currently available to try, but the demo suggests it will allow you to make an avatar and control it using your voice, and it looks surprisingly realistic.
You can already do similar things to some extent with tools like Pika Labs' lip sync, HeyGen's video translation services and Synthesia, but this seems to be a simpler, lower-bandwidth option.
What is VLOGGER?
Currently VLOGGER is nothing more than a research project with a couple of fun demo videos, but if it is ever turned into a product it could be a new way to communicate in Teams or Slack.
It's an AI model that can create an animated avatar from a still image and maintain the photorealistic look of the person in the photo across every frame of the final video.
The model also takes in an audio file of the person speaking and generates body and lip movement to reflect the natural way that person might move if they were saying the words.
This includes head motion, facial expressions, eye gaze and blinking, as well as hand gestures and upper-body movement, without any reference beyond the image and audio.
How does VLOGGER work?
The model is built on the diffusion architecture that powers text-to-image, video and even 3D generators such as Midjourney and Runway, but adds additional control mechanisms.
VLOGGER goes through multiple steps to generate the avatar. First it takes the audio and image as input and runs them through a 3D motion generation process, then a "temporal diffusion" model determines timings and movement, and finally the output is upscaled and turned into the final video.
Essentially, it uses a neural network to predict motion for the face, body, pose, gaze and expressions over time, using the still image as the first frame and the audio as the guide.
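To make that pipeline easier to follow, here is a minimal, hypothetical Python sketch of the audio-and-image-in, video-out flow the researchers describe. The function names, tensor shapes and parameters are illustrative assumptions, not VLOGGER's actual code or API, and each stage is a simple placeholder standing in for a learned model.

```python
# Hypothetical sketch of the two-stage pipeline described above.
# Each function is an illustrative stand-in, not VLOGGER's actual code.
import numpy as np

def predict_motion(audio: np.ndarray, image: np.ndarray, fps: int = 25) -> np.ndarray:
    """Stage 1 (stand-in): map the audio and reference image to per-frame
    3D motion parameters (head pose, expression, gaze, body), one row per frame."""
    n_frames = int(len(audio) / 16_000 * fps)           # assumes 16 kHz audio
    return np.zeros((n_frames, 128))                     # placeholder motion codes

def temporal_diffusion(image: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Stage 2 (stand-in): a temporal diffusion model renders low-res video frames,
    conditioned on the reference image and the predicted motion sequence."""
    n_frames = motion.shape[0]
    return np.repeat(image[None, ...], n_frames, axis=0)  # placeholder frames

def upscale(frames: np.ndarray, scale: int = 2) -> np.ndarray:
    """Final step (stand-in): super-resolve the generated frames."""
    return frames.repeat(scale, axis=1).repeat(scale, axis=2)

# Example inputs: one second of silent 16 kHz audio and a 256x256 RGB still image.
audio = np.zeros(16_000)
image = np.zeros((256, 256, 3), dtype=np.uint8)
video = upscale(temporal_diffusion(image, predict_motion(audio, image)))
print(video.shape)  # (frames, height, width, channels)
```

The point of the sketch is the ordering: motion is predicted first from audio and the still image, and only then is video rendered and upscaled, which is what lets a single photo drive a full clip.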
Training the model required a large multimedia dataset called MENTOR, which contains 800,000 videos of different people talking, with each part of their face and body labelled at every moment.
What are the limitations of VLOGGER?
This is a research preview rather than an actual product, and while it is able to generate realistic-looking motion, the video may not always match the way the person really moves. It is still a diffusion model at its core, and diffusion models can be prone to unusual behavior.
The team says it also struggles with particularly large motions or diverse environments, and it can only handle relatively short videos.
What are the use cases for VLOGGER?
According to Google's researchers, one of the primary use cases is video translation: taking an existing video in a particular language and editing the lip and face movements to match the new, translated audio.
Other potential use cases include creating animated avatars for virtual assistants, chatbots, or virtual characters that look and move realistically in a game environment.
There are already tools that do something similar, including Synthesia, where users can go into the company's offices and create their own virtual avatar to give presentations, but this new model seems to make the process much easier.
One potential use is in low-bandwidth video communication. A future version of the model could allow for video chats driven by audio alone, animating the still-image avatar instead of sending a live video stream.
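To show why that could save so much bandwidth, here is a back-of-the-envelope comparison. The bitrates are illustrative assumptions (a typical compressed voice stream versus a typical 720p live video stream), not figures from the research.

```python
# Back-of-the-envelope bandwidth comparison with illustrative, assumed bitrates.
voice_kbps = 24        # assumed: typical compressed voice call
video_kbps = 1_500     # assumed: typical 720p live video stream

savings = 1 - voice_kbps / video_kbps
print(f"An audio-driven avatar call would use ~{savings:.0%} less bandwidth "
      f"({voice_kbps} kbps vs {video_kbps} kbps).")
```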
This could prove particularly useful for VR environments on headsets like the Meta Quest or the Apple Vision Pro, operating independently of the platform's own avatar models.