Fortune
Erin Prater

Nearly 20 years after a stroke, a paralyzed woman is able to speak again—simply by thinking—thanks to AI.

(Credit: Photo courtesy of Noah Berger, UCSF)

A paralyzed woman can speak again, thanks to a small panel of electrodes implanted onto her brain and a digital avatar developed by scientists in California.

It marks the first time speech and facial expressions have been captured from brain signals and communicated by an avatar—one that speaks with the patient’s own voice.

That’s according to Kaylo Littlejohn, a fourth-year doctoral student in the Dr. Edward F. Chang Lab at the University of California, San Francisco, and a lead author of a paper detailing the project, published Wednesday in the journal Nature.

The patient—a 47-year-old woman named Ann whose ability to speak ended when she experienced a brainstem stroke 18 years ago—agreed to have a paper-thin, credit card-sized set of 253 electrodes surgically implanted onto the cortex of her brain. The electrodes intercepted the signals from this area to her tongue, jaw, larynx, and face that would have created speech and facial expressions, were it not for her stroke. A cable plugged into a port in her head connected the electrodes to a bank of computers equipped with an artificial intelligence-powered system.

Ann worked with Littlejohn’s team for weeks to train the system to recognize her brain’s unique signals for speech. It entailed her attempting to repeat—with her thoughts—a variety of phrases from a 1,024-word conversational vocabulary.

"She's extremely dedicated and hard-working," Littlejohn said of Ann. "She's willing to record as long as needed, and she really understands that her efforts will go toward creating a speech neuroprosthesis that many people who have this kind of disability will be able to use."

Once the system was trained, Ann's thoughts were translated into verbal messages conveyed by an avatar that used her own voice—reconstructed from a wedding video shot years ago.

Littlejohn was there the first time Ann used the system. Aside from a computerized AAC (augmentative and alternative communication) device that let her use neck muscle movements to communicate slowly, painstakingly, and in a limited manner, it was the first time she had spoken in nearly two decades.

“It was just very heart-warming and encouraging, for both her and me,” Littlejohn told Fortune. 

For Ann, “it was an emotional experience to hear her own voice,” he added.

Because the system was trained to recognize 39 phonemes—sub-units of words—instead of entire words, it was able to decipher her thoughts three times faster, decoding signals to text at a rate of nearly 80 words per minute.
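
To make the idea concrete, here is a minimal, purely illustrative sketch in Python (not the UCSF team's actual software; the dictionary, function name, and phoneme labels are invented for this example). The point is that a decoder only has to distinguish a few dozen phonemes, and a pronunciation dictionary can then assemble those phonemes into any word in the vocabulary.

```python
# Hypothetical sketch of phoneme-level decoding, not the researchers' code.
# A toy pronunciation dictionary maps phoneme sequences to words; real systems
# use large resources such as CMUdict plus a statistical language model.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    ("TH", "AE", "NG", "K", "S"): "thanks",
}

def decode_utterance(phonemes):
    """Greedily group a stream of decoded phonemes into known words."""
    words, buffer = [], []
    for ph in phonemes:
        buffer.append(ph)
        word = PRONUNCIATIONS.get(tuple(buffer))
        if word:                 # a complete word has been matched
            words.append(word)
            buffer = []
    return " ".join(words)

# Phonemes a classifier might emit from brain signals for "hello world":
print(decode_utterance(["HH", "AH", "L", "OW", "W", "ER", "L", "D"]))
```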

“The accuracy, speed, and vocabulary are crucial,” Sean Metzger, a bioengineering graduate student who helped develop the decoder, said in a news release about the research. “It’s what gives a user the potential, in time, to communicate almost as fast as we do, and to have much more naturalistic and normal conversations.”

Chang, chair of neurosurgery at the university, hopes to develop the system so that similar patients can soon use it on a continual basis. Because the device is still in a clinical trial, Ann isn't allowed to use it outside of the study. A team he led previously enabled a man who had experienced a brainstem stroke to communicate via brain signals decoded into text.

“Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” Chang said in the release. “These advancements bring us much closer to making this a real solution for patients.”

To be useful in daily life for Ann and patients like her, such a product would need to be wireless, unlike the current version, and small enough to be portable, Littlejohn said. He hopes an improved version could be developed and approved by the U.S. Food and Drug Administration within a decade.

'Another Brick in the Wall' of brain-machine interfaces

Ann's system is a type of brain-machine interface, also known as a brain-computer interface. Such technology could allow paralyzed patients like the late Stephen Hawking to express themselves, only less robotically and, in this case, merely by thinking.

Related work was unveiled earlier this month by researchers at the University of California, Berkeley, in an article published in the journal PLOS Biology.

Surgeons placed electrodes onto the brains of 29 epileptic patients at Albany Medical Center in New York, while the Pink Floyd song "Another Brick in the Wall, Part 1" was played in the operating room. Using artificial intelligence, researchers were able to reconstruct the song from the electrical activity of each patient's brain.

The work will be used to develop even better brain-machine interfaces to help paralyzed patients, as well as those with ALS and speech disorders like non-verbal apraxia, a condition in which patients can't make movements necessary for speech, Dr. Robert Knight, a professor of psychology and neuroscience, recently told Fortune.

As the technology improves, it may eventually be possible to transmit thoughts through scalp electrodes. Such electrodes can currently be used to signal one’s choice of a single letter from a string of letters—but it takes at least 20 seconds to identify each letter, making communication far too cumbersome, lead author Ludovic Bellier, a postdoctoral researcher in human cognitive neuroscience at Berkeley, told Fortune.
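
A rough, back-of-the-envelope comparison shows why that pace is impractical. The numbers below are illustrative: the 20 seconds per letter cited above, and an assumed five-letter average English word.

```python
# Illustrative arithmetic only; the five-letter word length is an assumption.
SECONDS_PER_LETTER = 20
LETTERS_PER_WORD = 5

scalp_wpm = 60 / (SECONDS_PER_LETTER * LETTERS_PER_WORD)
print(f"Scalp-electrode spelling: ~{scalp_wpm:.1f} words per minute")  # ~0.6
print("Implanted decoder (reported): ~80 words per minute")
```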

If the technology is streamlined, it may eventually aid those without disabling conditions—like thought workers—in syncing with a computer to convey text from their minds.

“It’s really about reducing friction and allowing people to just think their action,” Bellier said. One example: “You could think, ‘Order my Uber,’ and you don’t have to finish what you’re doing—your Uber arrives.”

For those alarmed by potential future applications of the research, Knight and Bellier emphasize that such feats aren’t currently possible without surgery. And the AI developed to translate signals into sounds “merely provides the keyboard for the mind,” they assert.

As for the potential of privacy concerns to develop, Bellier said he’d be more worried about what Big Tech knows about us now, thanks to the monitoring and tracking of online activity.

Besides, privacy issues can be dealt with, he said. When a wireless EEG is recorded from a patient, the signal is encrypted.

“We’re on the threshold of lots of things—the fusion of neuroscience and computer engineering, and really, in many ways, the sky’s the limit,” he said.

Added Knight: “I think we’re just on the edge of tickling this whole story.”
