TechRadar
Eric Hal Schwartz

Google's AI robots are learning from watching movies – just like the rest of us

(Image: Google DeepMind AI robot)

Google DeepMind's robotics team is teaching robots to learn the way a new human intern would: by watching videos. In a new paper, the team demonstrates how Google's RT-2 robots, powered by the Gemini 1.5 Pro generative AI model, can absorb information from video to learn how to get around a space and even carry out requests at their destination.

Training a robot like a new intern is possible thanks to the Gemini 1.5 Pro model's long context window, which lets the AI process large amounts of information at once. The researchers film a video tour of a designated area, such as a home or office, and the robot then watches the video to learn about the environment.
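
DeepMind's actual robot software isn't public, but the general idea, feeding a long video into a long-context model and then querying it about the space, can be approximated with the public Gemini API. The sketch below is an illustration under that assumption; the file name "office_tour.mp4", the prompt, and the GOOGLE_API_KEY environment variable are placeholders, not anything taken from DeepMind's paper.

```python
# Illustrative sketch only: approximating "watch a tour video, then answer
# questions about the space" with the public google-generativeai SDK.
# Assumes a local video file "office_tour.mp4" and a GOOGLE_API_KEY env var.
import os
import time

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Upload the tour video and wait for the file service to finish processing it.
tour = genai.upload_file(path="office_tour.mp4")
while tour.state.name == "PROCESSING":
    time.sleep(5)
    tour = genai.get_file(tour.name)

model = genai.GenerativeModel("gemini-1.5-pro")

# Ask a question that requires recalling what the tour showed and where.
response = model.generate_content([
    tour,
    "You are guiding a mobile robot through the space shown in this tour. "
    "Where should the robot go to find a whiteboard, and what landmarks "
    "will it pass along the way?",
])
print(response.text)
```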

The details in the video tours let the robot complete tasks based on its learned knowledge, using both verbal and image outputs. It's an impressive demonstration of how robots might interact with their environment in ways reminiscent of human behavior. DeepMind's demonstration video shows how it works, along with examples of different tasks the robot might carry out.

Robot AI Expertise

Those demonstrations aren't rare flukes, either. In practical tests, Gemini-powered robots operated across a 9,000-square-foot area and followed more than 50 different user instructions with a 90 percent success rate. That level of accuracy opens up many potential real-world uses for AI-powered robots, from helping with chores at home to handling menial or even more complex tasks at work.

That's because one of the more notable aspects of the Gemini 1.5 Pro model is its ability to complete multi-step tasks. DeepMind's research found that the robots can answer a question like whether a specific drink is available by navigating to a refrigerator, visually processing what's inside, and then returning to report the answer.
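
DeepMind hasn't released the robot's control code, but the pattern described above, decide where to look, drive there, perceive, then report back, is easy to sketch. Everything in this snippet is hypothetical: navigate_to, capture_image, and ask_model are placeholder stand-ins with canned responses, not real DeepMind or Gemini robot APIs.

```python
# Hypothetical sketch of the multi-step "find the drink" flow described above.

def navigate_to(location: str) -> None:
    """Placeholder robot primitive: drive to a named location."""
    print(f"[robot] navigating to {location}")

def capture_image() -> bytes:
    """Placeholder robot primitive: grab a frame from the onboard camera."""
    return b"<camera frame>"

def ask_model(prompt: str, image: bytes | None = None) -> str:
    """Placeholder for a multimodal model call; returns canned answers."""
    if image is None:
        return "the kitchen refrigerator"  # where the tour showed drinks are kept
    return "Yes, there is a can of cola on the middle shelf."

def answer_query(question: str) -> str:
    # Step 1: let the model decide where to look, based on the tour it watched.
    target = ask_model(f"Where should the robot go to answer: {question}")
    # Step 2: go there and look.
    navigate_to(target)
    frame = capture_image()
    # Step 3: reason over what the camera sees, then return and report.
    answer = ask_model(question, image=frame)
    navigate_to("the user")
    return answer

print(answer_query("Is there a specific soda available in the fridge?"))
```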

Planning and carrying out that entire sequence of actions demonstrates a level of understanding and execution that goes beyond the single-step commands most robots handle today.

Don't expect to see this robot for sale any time soon, though. For one thing, it takes up to 30 seconds to process each instruction, which is way slower than just doing something yourself in most cases. The chaos of real-world homes and offices will be much harder for a robot to navigate than a controlled environment, no matter how advanced the AI model is. 

Still, integrating AI models like Gemini 1.5 Pro into robotics is part of a larger leap forward in the field. Robots equipped with models like Gemini or its rivals could transform healthcare, shipping, and even janitorial duties.
