In the AI arms race that has just broken out in the tech industry, Google, where much of the latest technology was invented, should be well positioned to be one of the big winners.
There is just one problem: With politicians and regulators breathing down its neck, and a hugely profitable business model to defend, the internet search giant may be hesitant to wield many of the weapons at its disposal.
Microsoft threw down a direct challenge to Google this week when it sealed a multibillion-dollar investment in AI research company OpenAI. The move comes less than two months after the release of OpenAI’s ChatGPT, a chatbot that answers queries with paragraphs of text or code, suggesting how generative AI might one day replace internet search.
With preferential rights to commercialise OpenAI’s technology, Microsoft executives have made no secret of their goal of using it to challenge Google, reawakening an old rivalry that has simmered since Google won the search wars a decade ago.
DeepMind, the London research company that Google bought in 2014, and Google Brain, an advanced research division at its Silicon Valley headquarters, have long given the search company one of the strongest footholds in AI.
More recently, Google has broken new ground with variations of the so-called generative AI that underpins ChatGPT, including models capable of telling jokes and solving mathematics problems.
One of its most advanced language models, known as PaLM, is a general-purpose model roughly three times the size of GPT-3, the model underlying ChatGPT, as measured by the number of parameters each contains.
Google’s chatbot LaMDA, or Language Model for Dialogue Applications, can converse with users in natural language in a similar way to ChatGPT. The company’s engineering teams have been working for months to integrate it into a consumer product.
Despite these technical advances, most of the latest technology remains the subject of research and has yet to reach consumers. Google’s critics say the company is hemmed in by its hugely profitable search business, which discourages it from introducing generative AI into consumer products.
Giving direct answers to queries, rather than simply pointing users to suggested links, would result in fewer searches, said Sridhar Ramaswamy, a former top Google executive.
That has left Google facing “a classic innovator’s dilemma” — a reference to the book by Harvard Business School professor Clayton Christensen that sought to explain why industry leaders often fall prey to fast-moving upstarts. “If I was the one running a $150bn business, I’d be terrified of this thing,” Ramaswamy said.
“We have long been focused on developing and deploying AI to improve people’s lives. We believe that AI is foundational and transformative technology that is incredibly useful for individuals, businesses and communities,” Google said. However, the search giant would “need to consider the broader societal impacts these innovations can have”. Google added that it would announce “more experiences externally soon”.
As well as leading to fewer searches and lower revenue, the spread of AI could cause a jump in Google’s costs.
Ramaswamy calculated that, based on OpenAI’s pricing, it would cost $120mn to use natural language processing to “read” all the web pages in a search index and then use this to generate more direct answers to the questions that people enter into a search engine. Analysts at Morgan Stanley, meanwhile, estimated that answering a search query using language processing costs around seven times as much as a standard internet search.
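Ramaswamy did not publish his workings, but the order of magnitude is straightforward to reproduce. The sketch below, in Python, shows one set of purely illustrative assumptions (index size, average page length and a per-1,000-token price, none of which appear in the article) under which the total comes out at $120mn.

```python
# Back-of-envelope estimate of running every page in a search index
# through a paid language-model API. All three inputs are illustrative
# assumptions; only the ~$120mn order of magnitude comes from the article.

PAGES_IN_INDEX = 4_000_000_000   # assumed: ~4bn pages in the index
TOKENS_PER_PAGE = 1_500          # assumed: average page length in tokens
USD_PER_1K_TOKENS = 0.02         # assumed: API price per 1,000 tokens

total_tokens = PAGES_IN_INDEX * TOKENS_PER_PAGE
cost_usd = total_tokens / 1_000 * USD_PER_1K_TOKENS
print(f"One-off cost to 'read' the index: ${cost_usd:,.0f}")
# Prints: One-off cost to 'read' the index: $120,000,000
```

And that is a one-off processing cost: it comes before the recurring expense of generating an answer for every query, which is the cost Morgan Stanley’s estimate addresses.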
The same considerations could discourage Microsoft from a radical overhaul of its Bing search engine, which generated more than $11bn of revenue last year. But the software company has said it plans to use OpenAI’s technology throughout its products and services, potentially surfacing relevant information for users inside other applications and reducing the need to visit a search engine at all.
A number of former and current employees close to Google’s AI research teams say the biggest constraints on the company’s release of AI have been concerns about potential harms and the damage they could do to Google’s reputation, as well as an underestimation of the competition.
“I think they were asleep at the wheel,” said one former Google AI scientist, who now runs an AI company. “Honestly, everyone under-appreciated how language models will disrupt search.”
These challenges are exacerbated by the political and regulatory concerns caused by Google’s growing power, as well as the greater public scrutiny of the industry leader in the adoption of new technologies.
According to one former Google executive, the company’s leaders grew worried more than a year ago that sudden advances in the capabilities of AI could lead to a wave of public concern about the implications of such a powerful technology being in the hands of a single company. Last year it appointed former McKinsey executive James Manyika as a senior vice-president to advise on the broader social impacts of its new technology.
Generative AI, which is used in services like ChatGPT, is inherently prone to giving incorrect answers and could be used to produce misinformation, Manyika said. Speaking to the Financial Times only days before ChatGPT was released, he added: “That’s why we’re not rushing to put these things out in the way that perhaps people might have expected us to.”
However, the huge interest stirred up by ChatGPT has intensified the pressure on Google to match OpenAI more quickly. That has left it with the challenge of showing off its AI prowess and integrating it into its services without damaging its brand or provoking a political backlash.
“For Google it’s a real problem if they write a sentence with hate speech in it and it’s near the Google name,” said Ramaswamy, a co-founder of search start-up Neeva. Google is held to a higher standard than a start-up that could argue that its service was just an objective summary of content available on the internet, he added.
The search company has come under fire before over its handling of AI ethics. In 2020, a furore erupted over Google’s attitude to the ethics and safety of its AI technologies when two prominent AI researchers left in contentious circumstances after raising objections to a research paper assessing the risks of language-related AI.
Such events have left it under greater public scrutiny than organisations like OpenAI, or open-source projects like Stable Diffusion. The latter, which generates images from text descriptions, has had several safety issues, including the generation of pornographic imagery. Its safety filter can easily be circumvented, according to AI researchers, who say the relevant lines of code can be deleted manually. Its parent company, Stability AI, did not respond to a request for comment.
OpenAI’s technology has also been abused by users. In 2021, an online game called AI Dungeon licensed GPT, a text-generating tool, to create choose-your-own-adventure storylines based on individual user prompts. Within a few months, users were generating gameplay involving child sexual abuse, among other disturbing content. OpenAI eventually leaned on the company to introduce better moderation systems.
OpenAI did not respond to a request for comment.
Had anything like this happened at Google, the backlash would have been far worse, one former Google AI researcher said. With the company now facing a serious threat from OpenAI, they added, it was unclear whether anyone at the company was ready to take on the responsibility and risks of releasing new AI products more quickly.
Microsoft, however, faces a similar dilemma over how to use the technology. It has sought to paint itself as more responsible in its use of AI than Google. OpenAI, meanwhile, has warned that ChatGPT is prone to inaccuracy, making it hard to embed the technology in its current form in a commercial service.
But with ChatGPT, the most dramatic demonstration yet of the AI forces sweeping through the tech world, OpenAI has given notice that even entrenched powers like Google could be at risk.