Salon
Science
Troy Farah

Where is the AI arms race leading us?

To borrow an old cliché, GPT-4 broke the internet. The latest iteration of artificial intelligence research lab OpenAI's most advanced language model, which powers its ChatGPT chatbot, was released on March 14, immediately generating considerable hype around a technology that inspires equal parts techno-utopianism and fears of a robot apocalypse. It can answer questions in natural, human-like language, giving the illusion that there is more than just code on the other end of the dialogue window. In other words, some perceive ChatGPT as a sentient AI, like something out of Hollywood sci-fi.

GPT-4 (short for Generative Pre-trained Transformer 4) is advertised as vastly superior to its predecessor GPT-3, trained on a much more extensive dataset that, according to OpenAI, allows it to perform at a human level across numerous academic benchmarks.

"I'm not saying any of these things are scams, but this is exactly what scammers do. They exploit your own feelings and the way you sympathize with the world."

It can pass many (but not all) high-level exams with flying colors, including the LSAT; generate an entire website from a napkin drawing; propose new drug compounds and even order them from a supplier; and code a functional version of the video game "Pong" in just 60 seconds. (Some of these claims aren't easy to verify independently, so take social media posts with a grain of salt.)

OpenAI has promised GPT-4 is safer than GPT-3, with more built-in filters and less susceptibility to being manipulated into saying dangerous things, such as crafting instructions for biological weapons. But while there are improvements, The New York Times reports GPT-4 still has flaws, including a tendency to hallucinate, or give false information that is confidently presented as fact.

GPT-4 is an exemplar of the AI arms race, in which multiple companies are developing platforms with astounding technology that was all but unthinkable even a few months ago, including text-to-image generators like Midjourney and the underground deepfake networks used to generate fake videos. This tech is being released to the public at a breakneck pace, and before the culture at large can even respond, another upgrade is out.

Against the backdrop of all this, Silicon Valley is in a bit of a financial crisis. The March 10 collapse of Silicon Valley Bank, which held the capital of many tech startups, marked the second-largest bank failure in U.S. history. Meanwhile, Microsoft fired its entire Ethics & Society team on March 13 (exactly the kind of team that would seem critical at a time like this), part of a larger trend of tech-industry layoffs.

All of this could be distracting from more important fundamental questions, like how does any of this stuff work, anyway? Where is it leading us? And what are the social repercussions as AI technology improves?

Salon spoke with Chirag Shah, a data science professor at the University of Washington in Seattle, whose core focus is how people find information and how they use it. He is concerned not only with building these kinds of intelligent systems but also with issues of bias, fairness, accountability and transparency, which typically fall under the category of responsible AI. We discussed some of the basics of AI, why there's so much hype (and why the Turing test is no longer a reliable measure), and more.

This interview has been condensed and lightly edited for clarity.

I'm honestly not the biggest tech nerd. But ever since the LaMDA [Language Model for Dialogue Applications] story happened last summer [in which Blake Lemoine, an engineer at Google, claimed that a chatbot had become sentient], I've been super interested in AI. I've been experimenting with all these random tools, but there's all this hype around this technology, and also a lot of chatter that it spells doom for society. How serious are some of these concerns?

Yeah, we definitely are fascinated by this. Hardly a day goes by when somebody is not talking to me about ChatGPT — actually at a global scale. I just came back from Singapore and India last week. Everywhere, people are aware of this; they're talking about it.

When we talk about AI, there are usually three categories or levels: There is ANI, which is "artificial narrow intelligence"; AGI, which is "artificial general intelligence"; and ASI, "artificial super intelligence." We've been used to ANI for a long time. Think about chess-playing programs; that's an example of an AI system that's been in existence for decades now. And they've gotten good, they've gotten excellent.

"This is not about machines rising up, like the whole 'Terminator' scenario. But I also don't see this being the savior that we were hoping for."

But what we are now witnessing is this move towards AGI, which is [where] you have a system or an entity that has a general intelligence. I [think] that is hype; I don't believe that these systems have really achieved that, or are even close to it. But people have started extrapolating.

I think this is the big disconnect. These things have been designed to excel at specific tasks. And they've proven that. They are still more of an ANI, but because they are doing this in natural language, it starts giving you an illusion of that general intelligence.

We often forget that in all of our history, the only other entities with whom we've had natural language interactions are other humans. For the first time, we have these entities that are understanding natural language as well as responding in natural language. And it happened quite suddenly. We were not prepared.

The LaMDA thing is so interesting. I have to admit, when I first read those conversations with Blake Lemoine, I was a little bit convinced, because some of the stuff that it was doing was, as you say, an illusion. But it's a very convincing one.

Then a couple of months ago, I started using ChatGPT and actually getting to interact with this technology. It really didn't feel sentient to me. It was like, "Oh, this is really just a natural language processing program. It has a lot of limitations."

But let's say LaMDA or ChatGPT were sentient. How do we even prove that? If we're going towards this goal of AGI and ASI, how do we know when we reach it?

I think the traditional method that we all studied in school, the Turing test, is not relevant or useful now. It can easily be passed by a lot of these things. It gives you enough of an illusion that you can't tell the difference between a human being and an artificial entity. It's already actually passing all kinds of tests, standardized tests and so on.

Even before all of this, many have argued that the Turing test is really not the right measure of intelligence. So I think it really goes back to more of a philosophical discussion about what we consider intelligence. Clearly, many will argue that these systems are really great at pretending to be intelligent.

But one would argue that even if that is true, maybe that's it — that's a measure of intelligence. So what if somebody's pretending? Even human beings have impostor syndrome.

The other narrative here is that it doesn't even matter. Like, why do we even have to measure this? Why do we even have to say, "Oh, we got to this threshold, this milestone"? Because at the end of the day, we just want to see the result. Whether it's done by an entity that is labeled as ANI or AGI, who cares? We care about the outcome, right? And so maybe we don't need to label things as much as we would like to. It's all about the perception anyway.

Yeah, I think that if Turing were still alive, even he would be like, "Why are you still using this metric from the 1950s?" But I don't think people really know of other tests for sentience that exist. I mean, there's the one from "Blade Runner." But nothing else is really in the popular lexicon that I'm aware of. Maybe you're right, and it's sort of irrelevant trying to prove sentience. We should really be talking more about how these tools are used and who gets access to them. Like, the company is called OpenAI. But ironically, they're not very transparent, I think.

Oh no. In fact, they're very clear about not revealing their underlying process and a lot of the other details. So in their whole 100-page documentation about this, they are not telling you how they train these things: specifically, what data they use, what their parameters are and all those things. And I don't think they intend to, because now this is a trade secret, a proprietary thing that gives them the competitive edge.

"Is that what we want to do? Is that what we should be doing? We haven't asked those questions. We also don't have regulations to stop any of this."

But that also is not new. I mean, Google famously doesn't reveal its ranking algorithm, right? We have a good sense of what's in it, but we don't know for sure. We've been okay with that. We definitely use a lot of tools without understanding how they work. Most tools we use are not open source, right? And we use them every day, but we have a sense of what they can do, what they cannot do and what we shouldn't be doing with them.

The challenge here is not so much that these things lack transparency, but that our own understanding is not compatible with how we tend to use them, even for things they're not designed for. And that's where transparency becomes very important.

So when people start using this, say, for medical diagnosis, many would argue, "Well, that's not what it's really designed for. It's not a medical expert." But people don't understand that.

I like what you mentioned about how we sort of use AI all the time in our daily lives. I mean, AI is being used right now on this Zoom call; it will modify the audio to make the connection better. Later, I'm going to take this interview and put it into a program that will transcribe it for me. I'll go back and read the whole transcript to make sure it's accurate, but usually I only have to fix one or two mistakes per paragraph; it's very accurate. When I think about it, I use AI all the time in my journalism. When I'm doing research, I'm using Google's algorithm.

But you're right. People need to understand the limitations of this tech. They're pushing it a little bit further than they probably should, and then they get these sorts of confusing results, like when the Bing chatbot fell in love with somebody, or the New York Times reporter who couldn't sleep at night after talking to it. It's just a human tendency to project our own consciousness onto things. That's a really fascinating psychological aspect, but it's also something that could be exploited.

Exactly. I'm not saying any of these things are scams, but this is exactly what scammers do. They exploit your own feelings and the way you sympathize with the world. This is why natural language is such an important aspect. That's what connects us with fellow human beings. When we see somebody speaking our language or following our culture, they belong to our tribe, they belong to our community.

There's this inbuilt trust which we've learned over thousands of years of evolution, and it's actually been very helpful to us because we can't always be questioning one another's motives. That's what these things are exploiting — they're able to do that. They're able to speak like us and almost think like us and even express feelings like us. So you're absolutely right, that there is a psychological angle to this. These things are just reflections of how we almost desperately want them to be like us.

Yeah, you raise a really interesting question: do we even want to build AI systems that are sentient? Similarly, should AI be used to make art and poetry and writing? I don't think AI is going to replace artists, but it will make the field more competitive, right? And artists might get paid less. 

I often ask this question: "Who asked for this?" People are projecting their needs onto this. "Oh, ChatGPT can do this for me." But it's after the fact. We built this thing, and now people are coming. But the question remains to be answered: where are we going with this? What is the purpose here?

I do believe it has a profound impact on... artists and original creators. It's already starting to have an impact, and it's going to have a profound impact on creativity in general. Because what is the value of original creation now, when you can just use this to do that creation?

And so it's more than simply people losing their jobs. It's very fundamental to us, to humanity, that we've come this far through our original thinking and creation and innovations and entrepreneurship. And now, do we just outsource that to this artificial entity? Because it can write poetry and it can paint things? And so what is the drive for us to be doing this now?

So I think this is actually very profound, very fundamental to our ongoing evolution. I mean, we've come all this way in our evolution, and now we just hand over our history, our thinking, our way of living for all these thousands of years to this thing. Is that what we want to do? Is that what we should be doing? We haven't asked those questions. We also don't have regulations to stop any of this.

Yeah, it doesn't seem like anyone in Congress is really aware of the implications. I don't know of anyone who's really trying to rein this in or regulate it at all.

Imagine if you're a drug company. Could you put out a product just because it's effective? No, I mean, you have to go through so much testing and approval and everything. Like, you know, you can't even put out toothpaste without all kinds of testing and approval. But we don't have the same thing for AI. I think we should.

But my wish is not going to come true, because we are so far behind, especially in the U.S. The European Union has more advanced [regulations], but when it comes to AI, they're still a little bit behind. And to be honest, I don't have a lot of hope that we will ever catch up. But I wish we could at least narrow the gap. We can at least have some guardrails, some regulation, some policy that compels these companies to have at least some accountability.

This is not about machines rising up, like the whole "Terminator" scenario. But I also don't see this being the savior that we were hoping for. I don't think that this is going to basically make everything better and change everything. I also don't buy into the hype of AI. It's neither of these extremes. The truth is somewhere in the middle.

There are definitely benefits to this. It's going to change a lot, and we need to be aware of that. And sure, it has this harmful side: there are biases, there are hallucinations. But again, you know, it's not going to really destroy humanity. So we don't need to fear it; we need to understand it, and we need to embrace it.

And we need to ask: what's next for us? What does it mean for us individually and for society? Where do we want to go with this, and how far? These are the questions we want to be asking, so we don't get caught by surprise. What we don't want is something happening without our knowledge and without our intention, and then suddenly we have to deal with it. I want this to be an intentional development going forward.
