These are the two words of the year: artificial intelligence.
Almost every CEO, tech investor, and politician hoping to look fashionable has used them on a daily basis this year.
Artificial intelligence is a new paradigm of a kind not seen since the internet and the cloud. The technology burst into the popular imagination last November with the public introduction of ChatGPT, a conversational chatbot.
While many cars today are equipped with technologies that allow them to perform complex maneuvers on their own, consumers have rarely linked these developments to advances in AI. Likewise, despite welcoming voice assistants like Siri and Alexa into their daily lives, they never thought artificial intelligence had made colossal progress. ChatGPT lifted the veil that had kept the world from seeing AI in its true light.
AI Fears
The ChatGPT chatbot, which provides human-like responses to even complex requests, has changed the way internet search is perceived. ChatGPT showed that artificial intelligence has reached a point where technology can perform certain tasks much better than humans can.
The AI arms race between big tech and startups shows how advanced the technology has become, according to experts. Some of them fear we are getting closer to artificial general intelligence, or AGI: the point at which a machine can understand or learn anything that humans can.
But some observers go even further and speak of reaching the singularity, the moment when technological evolution gives rise to machines more intelligent than humans.
The singularity also implies that technological progress becomes so rapid that it exceeds humans' ability to understand, predict, and control it.
The biggest fear is that the technology will evolve toward sci-fi scenarios: chatbots and robots, currently controlled by humans, might escape that control. Some also fear that bad actors will use AI to advance their agendas.
It is in this context that the legendary investor Marc Andreessen, co-founder and general partner of venture capital firm Andreessen Horowitz, offered on June 12 to answer any questions about AI or about a blog post he wrote titled "Why AI Will Save the World."
"AI will not destroy the world, and in fact may save it," Andreessen wrote. "A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies."
"An even shorter description of what AI could be: A way to make everything we care about better."
The Question
You can read his full text here.
But Elon Musk, who believes that AI is more dangerous than nuclear weapons and has therefore called for its regulation in the public interest, seems to disagree. The billionaire seized the opportunity Andreessen offered to ask the question that many are asking.
"How many years do we have before AI kills us all?" the billionaire asked the investor.
The techno king's question drew a flood of comments, including one from a user who said that "AI can also be a force for good and help shape a better future." This elicited a scathing response from Musk.
"Says the AI!" the billionaire commented.
Andreessen did not respond to Musk, but his blog directly addresses the serial entrepreneur's previous criticism of AI.
"We have a full-blown moral panic about AI right now," the investor wrote. "This moral panic is already being used as a motivating force by a variety of actors to demand policy action – new AI restrictions, regulations, and laws."
"These actors, who are making extremely dramatic public statements about the dangers of AI – feeding on and further inflaming moral panic – all present themselves as selfless champions of the public good," Andreessen continued with a link to a New York Post article which quotes Musk's criticism of AI.
"ChatGPT is scary good. We are not far from dangerously strong AI," the billionaire posted last December.
Last March, Musk went so far as to join others in signing a petition calling for a moratorium of at least six months on the development of powerful new AI tools. The signatories believe the pause would provide time to put safety standards in place and to assess the dangers and risks that some of the most advanced AI tools could pose to our civilization.
"Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?" said the open letter, titled "Pause Giant AI Experiments," that Musk and 31,810 other people have signed at last check. "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?"
Public authorities are lagging behind in the regulation of AI and are now playing catch-up. In May, the administration of President Joe Biden launched a plan to promote "responsible" AI and to protect U.S. consumers from the technology's potentially harmful effects.
In China, the cyberspace regulator announced in April a series of measures for managing generative artificial intelligence services. The measures are still a draft and must be submitted to the authorities for review before they can be adopted.