As we venture forth boldly and perhaps blindly into what Jen-Hsun Huang calls "a new industrial revolution" of AI technology, there's no shortage of voices on both the techno-optimistic and techno-pessimistic sides of the debate. However, we can all probably agree that what we don't need is hand-waving, which unfortunately looks like Bill Gates' approach to certain problems that AI poses.
When asked by The Verge about generative AI misinformation and the climate, Gates said, "I think AI, on balance, is super beneficial to work on climate. People can type misinformation into a word processor. They don’t need AI, you know, to type out crazy things.
"And so I’m not sure that, other than creating deepfakes, AI really changes the balance there. In fact, I’d say that as people talk about reducing misinformation, the role of AI can be a positive role in terms of looking at what’s going on in a superefficient way."
The comments come in the context of an interview about Gates' upcoming futurist Netflix series, "What's Next? The Future with Bill Gates".
I wouldn't expect Gates to channel Heideggerian techno-pessimism in that context, and there's certainly a benefit to keeping things positive, but the word processor comparison? Come on. AI-generated video and audio are the main problem, and it's a problem Gates seems willing to hand-wave away with a brief "other than creating deepfakes" remark.
To be fair to Gates, he was asked how he would feel "if generative AI tools that Microsoft has worked on have a significant impact on disinformation", and so I suppose he might have had only generative text in mind. But to be equally fair, Microsoft does have a foot in more than just generative text. VASA-1, for instance, is the company's foray into talking-face video generation.
Let's spell things out more clearly for Gates and the others at the back. The problem—or at least one problem—is how quickly and easily AI lets anyone create very convincing misinformation, such as deepfakes of politicians or other important figures. These capabilities can't reasonably be compared to someone typing made-up trash into a word processor and publishing it online.
To be doubly clear, the problem is just how convincing deepfakes and other AI-generated or AI-aided misinformation could be and the negative effects this could have on people's beliefs. Psychological research has confirmed what experience has always told us: Once someone's formed a belief, they often won't revise it even when presented with strong contrary evidence.
Now imagine if such beliefs were strongly held because they were based on initially convincing evidence, such as a deepfake or even a collection of mutually reinforcing deepfakes. Even if the message those videos conveyed was later proven to be misinformation, people might still cling to the feeling, the underlying impression, they delivered. The damage would already be done.
That's damage that typically can't be generated by a person typing away in Google Docs.
The funny thing is, Microsoft seems to know this too, because it was one of the companies to sign a statement pledging (in part) to work towards combating the risks AI poses to this year's democratic elections.
In a different interview with CNET, Gates expanded a little on the deepfake problem: "I do think over time, with things like deepfakes, most of the time you're online you're going to want to be in an environment where the people are truly identified, that is they're connected to a real-world identity that you trust, instead of just people saying whatever they want."
So maybe the solution, in Gates' view, could be to make people properly identifiable in the digital arena. A digital ID, perhaps? But then who would be the arbiter of such identification? And doesn't that ignore the ongoing arms race between ever more sophisticated AI generation and AI detection?
I suppose Gates' vaguely dismissive response and focus on the positives of AI shouldn't surprise me so much, considering he owns over $40 billion worth of Microsoft shares and Microsoft itself stands to benefit from AI. Though I guess the argument could always be put the other way: Perhaps Gates pushes such investments because he genuinely believes AI will be a net positive force.
Whatever the case, ridiculous hand-wavy word processor analogies aren't the way to get the point across.