Fortune
Allie Garfinkle

Character.AI’s Noam Shazeer on what we know about AI—and what we don’t

Portrait of Noam Shazeer (Credit: Winni Wintermeyer for The Washington Post—Getty Images)

When Noam Shazeer is thinking, he puts his face in his hands, and goes somewhere else.

Not literally, of course. He’s still across from me, in a Character.AI conference room that will later host an employees’ Dungeons & Dragons night he doesn’t attend. But he’s mentally elsewhere, searching for a mathematically perfect answer.  

I asked: “Is there a question you never get, that you wish you did?”

Silence. It’s the kind of pause that’s long enough that you start to hear the other person breathe. Long enough that I wonder how long it’s been. Shazeer lets me watch the gears in his head turn. No one ever does that.

We crack our knuckles, laugh, and finally he says: “I guess, why do large language models work?”

I lean in. If anyone would know, it’s Shazeer, the CEO and cofounder of Character.AI, and one of the primary architects behind today’s AI landscape.

While at Google in 2017, Shazeer co-authored the paper "Attention Is All You Need," which revolutionized natural language processing and introduced the transformer model. The advance would ultimately make ChatGPT’s ascension and the generative AI boom possible, but at the time, the paper came out to crickets, Shazeer said. It has since been cited more than 125,000 times. It’s not crazy to say that Shazeer has altered the course of tech, and certainly AI. (He was accordingly named to the Time100 AI list last year.)

Shazeer may be a star Silicon Valley founder, but he feels like a scientist first and foremost. Shazeer left Google during the pandemic to cofound Character.AI with Daniel De Freitas, an AI researcher and longtime Google engineer. Since then, Character has raised about $150 million from backers like Andreessen Horowitz, Elad Gil, and SV Angel. I interviewed Shazeer a few weeks ago, but this week Character.AI has been in the news: The Information reported that Elon Musk’s xAI is looking at a possible acquisition of the company. Character declined to comment.

Character.AI is a chatbot startup—a platform where users have created millions of chatbots, ranging from Napoleon Bonaparte to Goku from Dragon Ball Z to Shazeer himself. I sometimes use Character for feedback on my published writing, as I think about what works and what doesn’t. I go to a chatbot and say: Stephen King, what do you think of this passage? Joan Didion, which phrases stand out most to you? And Elon Musk, will you ever like anything ever? (Unlikely; Character Elon is pretty ornery in my experience.)

I tell Shazeer that I think his chatbots do a great job of reacting, but haven’t yet mastered writing on their own. He's immediately solutions-oriented.

"We should be able to turn good reactions into good writing," he said.

I should perhaps be grateful that the chatbots aren’t regular Kurt Vonneguts. At least not without a lot of clever prompting. Not yet. 

"I think roughly everybody’s at a crossroads, because the applications of 2025 and 2026 will potentially be unimaginable in 2024," said Shazeer. "We’ll do our best to be on top of them, and be the place where users come to do things they couldn’t have imagined a year ago."

It’s tough to know how to conceptualize our AI future, or when exactly that future will arrive. OpenAI’s Sam Altman this year said that artificial general intelligence, or AGI, is coming in the "reasonably close-ish future" but "will change jobs much less than we all think."

Shazeer’s take is different but compatible: “I don’t know that I particularly care about AGI in the sense of wanting something that can do absolutely everything a person can do,” he said. “Why is that the metric? Maybe it’s some sort of angst, or raises questions about purposefulness: What’s the purpose of humans in this world?”

It’s a question I lob right back. I watch Shazeer think.

“Let’s see,” he said, hands over eyes. Beat. “Maybe, again, it’s similar to earlier periods in history, where people felt less powerful?” 

He lets the question hang, so I offer an idea: Humans will always find a purpose. 

“Absolutely, certainly,” he says. “It’s more of a religious question.”

Shazeer readily talks about AI from a disarmingly existential lens, one that engages with both religion and history, one that's concerned with the nature of human life. 

“Somebody living in the Middle Ages would have no frame of reference for this entire room,” said Shazeer. “If they showed up right here and saw this technology—the conference room speakerphones, the whiteboard—it would just look like magic or a miracle. It’s the same way we’d probably have very little frame of reference for what the world might be like in the future.”

I was surprised by my experience sitting across from Shazeer, in part because I was surprised to be there at all. When I was flying to San Francisco to meet him for the first time, he almost canceled on me at the last minute, and I was almost furious. I wondered, was it some kind of Silicon Valley power play? 

But less than 90 seconds into meeting Shazeer, it was clear that it wasn’t. He’s warm but shy, with a slightly frenzied but totally deadpan sense of humor. When prompted, Shazeer told me a dad joke: “Did you hear the one about the newspaper? No, it’s tear-able!” I suspect he doesn’t especially like doing press, but he’s determined to do so kindly—and carefully. He’s someone whose labyrinthine thoughts don’t lend themselves to simplification.

He also understands that there’s only so much that he, or anyone, can control. I asked Shazeer his own question: Why do large language models work? 

“My best guess is divine benevolence,” he says. “It’s hard. Nobody really understands what’s going on. This is a very experimental science, which nobody’s really used to in the world of computing, where everything operates deterministically. But we just have no clue what’s going to happen. It’s more like alchemy, or whatever chemistry was in the Middle Ages. You experiment, and you have intuitions about how something works. Some of those intuitions are good and some aren’t. You come up with some formulas about what seemed to work well, and try to scale it up.”

That makes sense enough. After all, there are all sorts of essential things we don’t entirely understand, from what happens in a burning fire to how gravity really works. But with so many people trying to project certainty (or at least conviction) about how AI works, it was strangely thrilling to hear Shazeer say he doesn’t know. 

He puts his head in his hands, looks up, and invokes the divine. 

See you Monday,

Allie Garfinkle
Twitter: @agarfinks
Email: alexandra.garfinkle@fortune.com
Submit a deal for the Term Sheet newsletter here.

Nina Ajemian curated the deals section of today’s newsletter.
