Creative Bloq
Technology
Joe Foley

Even AI's mind is blown by this optical illusion

Rotating snakes optical illusion.

We know from our collection of the best optical illusions that our eyes and brains can play tricks on us. In many cases, scientists still aren't sure why, but the discovery that an AI model can fall for some of the same tricks could help provide an explanation.

While some researchers have been training AI models to make optical illusions, others are more concerned about how AI interprets illusions, and whether it can experience them in the same way as humans. And it turns out that the deep neural networks (DNNs) behind many AI algorithms can be susceptible to certain mind benders.

One of Akiyoshi Kitaoka's rotating snakes optical illusions (Image credit: A.Kitaoka)

Eiji Watanabe is an associate professor of neurophysiology at the National Institute for Basic Biology in Japan. He and his colleagues tested what happens when a DNN is presented with static images that the human brain interprets as being in motion.

To do that, they used Akiyoshi Kitaoka's famous rotating snakes illusions. The images show a series of circles, each formed of concentric rings of segments in different colours. The images are completely still, but viewers tend to see every circle as moving except the one they are focusing on at any given moment, which they usually perceive, correctly, as static.

Watanabe's team used a DNN called PredNet, which was trained to predict future frames in a video based on knowledge acquired from previous ones. It was trained using videos of the kinds of natural landscapes that humans tend to see around them, but not on optical illusions.

The model was shown various versions of the rotating snakes illusion as well as an altered version that doesn't trick human brains. The experiment found that the model was fooled by the same images as humans.
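In outline, the test works like this: show the network the same still image over and over, as though it were a video, and then check whether the frames it predicts contain any motion. The sketch below illustrates that logic in Python; the predict_next_frame function is a hypothetical stand-in for PredNet, and optical flow is used here simply as one convenient way to measure motion in the predicted frames, so the study's own code and settings will differ.

```python
# Illustrative sketch of the experiment's logic (not the authors' code).
# A next-frame predictor (stand-in for PredNet) is shown the same still
# illusion image repeatedly; if its predicted frames drift, optical flow
# between consecutive predictions will register motion.
import numpy as np
import cv2


def illusory_motion_score(predict_next_frame, illusion_image, steps=10):
    """Feed a static image repeatedly and measure motion in the predictions.

    predict_next_frame: hypothetical callable taking a list of past frames
                        and returning the model's guess for the next frame.
    illusion_image:     the still rotating-snakes picture as a numpy array.
    """
    frames = [illusion_image.copy() for _ in range(4)]   # a "video" that never changes
    predictions = []
    for _ in range(steps):
        nxt = predict_next_frame(frames)                 # the model's prediction
        predictions.append(nxt)
        frames = frames[1:] + [illusion_image.copy()]    # input stays perfectly still

    # Optical flow between consecutive *predicted* frames: any non-zero
    # flow means the model expects the static pattern to move.
    flows = []
    for a, b in zip(predictions, predictions[1:]):
        ga = cv2.cvtColor(a.astype(np.uint8), cv2.COLOR_BGR2GRAY)
        gb = cv2.cvtColor(b.astype(np.uint8), cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(ga, gb, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(np.linalg.norm(flow, axis=-1).mean())
    return float(np.mean(flows))
```

If the score comes out above zero for the illusion but near zero for the altered control image, the model is, in effect, predicting motion where there is none, which mirrors what human viewers report.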

In another optical illusion created by Akiyoshi Kitaoka, the text 'AI' looks darker in the second image, but it actually has the same luminance in both (Image credit: A.Kitaoka)

The rotating snakes illusion explained?

(Image credit: A.Kitaoka)

Watanabe believes the study supports the theory that human brains use a process known as predictive coding. According to this theory, we don't passively process images of the objects around us. Instead, our visual system first predicts what it expects to see based on past experience, and then processes only the discrepancies between that prediction and the new visual input.

On this theory, something in the rotating snakes images leads the brain, drawing on its past experience, to predict that the pattern is moving. The presumed advantage of processing vision this way is speed: predicting lets us interpret a scene more quickly than building it up from scratch each time. The cost is that we sometimes misread what we see.
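To make the idea concrete, here is a minimal, illustrative predictive-coding loop in Python. It is not taken from the study; it simply shows the core move of the theory, in which the system keeps a running guess about its input, compares that guess with what actually arrives, and updates itself using only the prediction error.

```python
# A minimal, illustrative predictive-coding loop (not from the study):
# the system predicts its input, then updates its internal estimate using
# only the prediction error, rather than re-processing the whole image.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(64, 16))   # generative weights: internal state -> image patch
x = rng.normal(size=64)                     # incoming visual input (a patch)
state = np.zeros(16)                        # the system's current estimate of the scene
learning_rate = 0.05

for step in range(100):
    prediction = W @ state                  # what the system expects to see
    error = x - prediction                  # discrepancy with the actual input
    state += learning_rate * (W.T @ error)  # adjust the estimate to reduce the error

print(float(np.mean(error ** 2)))           # the error shrinks as predictions improve
```

The point of the sketch is that perception here is driven by the error term rather than the raw input, which is why a pattern that reliably provokes the wrong prediction can be experienced as something it is not.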

There were some discrepancies between how the AI and humans saw the illusion, though. While humans can 'freeze' any specific circle by staring directly at it, the AI was unable to do that: the model always saw all of the circles as moving. Watanabe put this down to PredNet's lack of an attention mechanism, which prevents it from focusing on one specific spot.

He says that, for now, no deep neural network can experience all optical illusions in the same way that humans do. As tech giants invest billions of dollars trying to create an artificial general intelligence capable of surpassing all human cognitive capabilities, it's perhaps reassuring that – for now – AI has such weaknesses.
