Tom’s Guide
Ryan Morrison

I just experienced a new AI mind reading tool — here’s what happened

It isn’t every day you walk into the offices of a cutting-edge tech startup and the first words you hear from the CEO are “I came up with the idea while tripping on hallucinogenics.” But MindPortal isn’t a normal startup, and co-founder Ekram Alam isn’t your ordinary CEO.

After dropping out of medical school, Ekram realized entrepreneurship was his future. With his friend and co-founder Jack Baber, he first built a VR app that simulated the prehistoric world. Eventually, the pair drew on their shared medical background and founded MindPortal.

The goal of the company isn’t to create a physical mind-reading product with the help of artificial intelligence; rather, it is to create AI models that simulate and interact with the brain in different ways. The company is close to releasing its first general-purpose thought-to-text model.

How does the technology work?

During a demo at the company’s unassuming London office, I watched a researcher think a rehearsed sentence at its current brain-reading AI model, which then passed the decoded phrase on to ChatGPT. “The premise of MindPortal is we want to explore the nature of human-AI interaction,” Ekram declared.

Using a functional near-infrared spectroscopy (fNIRS) system, MindPortal runs a software classifier over optical brain data, measured via changes in blood flow in the brain, and compares it against training data to link a thought to a phrase.
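
MindPortal hasn’t published how its classifier works, but the core idea, comparing a fresh brain reading against stored training examples for each phrase, can be sketched as a toy nearest-centroid classifier. Everything here is an illustrative assumption rather than a detail from the demo: the phrase list, the channel count and the simulated signals.

```python
# Toy sketch of phrase classification over fNIRS-style data. Each trial is
# assumed (hypothetically) to reduce to one blood-flow feature per channel.
import numpy as np

PHRASES = ["hello world", "how are you", "goodbye"]  # hypothetical trained phrases
N_CHANNELS = 16                                      # hypothetical channel count
rng = np.random.default_rng(0)

# Simulated training data: 40 trials per phrase, clustered around a
# per-phrase "signature" pattern of blood-flow changes.
signatures = rng.normal(size=(len(PHRASES), N_CHANNELS))
trials = {
    phrase: signatures[i] + rng.normal(scale=0.5, size=(40, N_CHANNELS))
    for i, phrase in enumerate(PHRASES)
}

# "Training" here is just averaging each phrase's trials into a template.
templates = {phrase: data.mean(axis=0) for phrase, data in trials.items()}

def classify(trial: np.ndarray) -> str:
    """Return the trained phrase whose template is closest to this trial."""
    return min(templates, key=lambda p: np.linalg.norm(trial - templates[p]))

# A new reading of the subject imagining "how are you" (simulated).
new_trial = signatures[1] + rng.normal(scale=0.5, size=N_CHANNELS)
print(classify(new_trial))  # very likely prints "how are you"
```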

The system I saw requires pre-defined, trained phrases and is tied to a single brain. Ed, the researcher demoing the model, had to spend hours thinking each phrase over and over again as part of the training process.

“When Ed imagines language, that activates different parts of the brain, and it’s that activation that’s being picked up in real time,” explained Ekram. “The AI model will then be able to make a classification guess as to which of a handful of sentences Ed is thinking about.”

I tried it three times and it got the wrong sentence each time. This was expected: MindGPT’s success rate is 42%, a statistically significant figure that shows real capability rather than lucky guessing, but one that still means it is wrong more often than not.
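
To put “statistically significant but still mostly wrong” in perspective, here is a rough check. The article doesn’t give a trial count, so the 100 trials below are hypothetical, and the one-in-three chance baseline assumes guessing among the three trained phrases mentioned later.

```python
# Rough significance check under stated assumptions (not reported figures).
from scipy.stats import binomtest

n_trials = 100   # hypothetical number of test trials
n_correct = 42   # the reported 42% success rate
chance = 1 / 3   # baseline if the model guessed among three phrases

result = binomtest(n_correct, n_trials, p=chance, alternative="greater")
print(f"P(at least {n_correct}/{n_trials} correct by chance) = {result.pvalue:.3f}")
# Roughly 0.04 under these assumptions: unlikely to be pure guessing,
# even though the model is still wrong more often than it is right.
```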

A 5-year road to general-purpose MindGPT

MindPortal describes itself as a human-AI interaction company. It has been going for five years, focused on building cutting-edge AI models. Baber told me each new model was a fresh start, taking a different approach to mind-to-AI interactions.

During my tour of MindPortal’s small office, which resembles a lab more than a traditional workspace, I was shown two generations of mind-reading models.

The first was more general and would work for anyone, apart from me: it requires a VR headset, and because I’m blind in my left eye most VR systems don’t work for me. It looks for a specific brain signal common to all people and uses it to trigger an “on/off” gate.

They demonstrated a VR game in which you trigger a color in a grid just by thinking about it, using eye-tracking to change position. A simple EEG sensor mounted on the back of the VR headset picks up the signal, letting you click by concentrating instead of pinching your fingers.
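
MindPortal hasn’t said which common brain signal the gate keys on. A classic stand-in in brain-computer interface tutorials is band power that rises with concentration, so here is a toy version of the on/off gate under that assumption; the sample rate, frequency band and threshold rule are all illustrative.

```python
# Toy "think to click" gate: fire a click at the gaze position when
# EEG band power rises well above a resting baseline. All values assumed.
import numpy as np

FS = 256      # assumed EEG sample rate in Hz
WINDOW = FS   # one-second analysis window

def beta_power(window: np.ndarray) -> float:
    """Mean power in the 13-30 Hz band, which rises with concentration."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1 / FS)
    return float(spectrum[(freqs >= 13) & (freqs <= 30)].mean())

rng = np.random.default_rng(1)
t = np.arange(WINDOW) / FS
resting = rng.normal(size=WINDOW)                                  # noise only
concentrating = rng.normal(size=WINDOW) + 3 * np.sin(2 * np.pi * 20 * t)

baseline = beta_power(resting)  # would be a separate calibration in practice
gaze_cell = (2, 3)              # where the eye tracker says the user is looking

for label, signal in [("resting", resting), ("concentrating", concentrating)]:
    # The gate opens only when band power clearly exceeds the resting baseline.
    if beta_power(signal) > 3 * baseline:
        print(f"{label}: click at grid cell {gaze_cell}")
    else:
        print(f"{label}: no click")
```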

I was then able to see Ed demonstrate MindGPT. This is partly a second-generation model and partly a completely fresh approach. As explained earlier, it uses a more complex optical brain sensor to detect the blood-flow patterns linked to pre-defined phrases.

The technology is currently at the mind-to-phrase stage, with a clear path to scaling it up in ways that could be widely useful for interacting with chatbots, or even for restoring a voice to people who have lost the ability to speak. But it does have limitations, including being restricted to set phrases.

What comes next?

Midjourney image showing a warehouse of volunteers having their brains scanned (Image credit: Midjourney/Future AI image)

Ekram says that to create something better than a statistically significant but largely inaccurate brain-reading model, they would first need more brain data. And it can’t just be any fMRI scan or other sensor reading; it has to be structured data, recorded while people think specific sentences.

He says that it would cost about $50 million and require a warehouse full of people wearing brain-sensing headsets, thinking about sentences for hours each day.

All of this would be to scale the current MindGPT model beyond its three sample phrases. Ekram says you could reach a point where it knows enough phrases to be widely useful in most situations. The extra training data would also make it general purpose: right now the model has to be trained per user, but with enough data, anyone could use it.

The real solution, though, is a new type of model that is general purpose by default. The team is working on something called MindSpeech, which would be thought-to-text rather than thought-to-phrase.

This is an AI model that would let the user continuously imagine language in their mind and have it continuously decoded into text. No specific or set phrases, just mind-to-text.

According to Ekram, it is entirely possible that by the end of this decade we could have wearable devices that convert our thoughts into text and send them straight to an AI. He goes even further, predicting devices that, with AI’s help, will let us communicate mind-to-mind.

“I could wear a headgear, think of a sentence, such as, how are you today? That could be then sent through an AI model that takes the text and translates it into a voice. It then puts that into your ear using AirPods and you can respond with your mind,” he explained.
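
Taken literally, that is a three-stage pipeline: thought-to-text decoding, text-to-speech, and audio routed to an earpiece. As a sketch only, with every function below a hypothetical stand-in (the decoder being exactly the unsolved research problem), it might look like this:

```python
# Hypothetical pipeline from Ekram's description. None of these components
# exist publicly; the names and behavior are stand-ins for illustration.

def decode_thought(brain_signal: bytes) -> str:
    """Stand-in for a thought-to-text model (the hard, unsolved step)."""
    return "how are you today?"  # canned output for the sketch

def text_to_speech(text: str) -> bytes:
    """Any off-the-shelf text-to-speech engine could fill this slot."""
    return f"<audio: {text}>".encode()

def play_in_ear(audio: bytes) -> None:
    """Stand-in for routing synthesized audio to earbuds."""
    print(f"earpiece plays {audio!r}")

# One turn of the imagined conversation: headset signal in, voice out.
play_in_ear(text_to_speech(decode_thought(b"...raw headset data...")))
```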

Final thoughts

Midjourney AI image showing a man wearing a fictional MindPortal cap in a supermarket (Image credit: Midjourney/Future AI image)

This isn't a product they're going to be putting on the market tomorrow; MindPortal is a research company building a very early prototype. But it does have plans for what it could achieve in the future, and potential ways to get there, with enough money.

What MindPortal has given us is a genuinely interesting glimpse of what we might be using in the next few years, and of how we might be interacting with AI and with each other. With a next-generation, general-purpose mind model on the horizon, that future might arrive even sooner.

I really hope it works because I do not want to be standing in the supermarket talking to myself when I'm just having a conversation with my AI. I’d rather just send it my thoughts instead.
