What you need to know
- Microsoft and Google are in a race to capitalize on the next major wave of tech innovation.
- Machine learning models and "AI" chatbots have shaken Google's search dominance for the first time in decades. Microsoft's ChatGPT-powered Bing essentially searches the web for you.
- To that end, Google has been desperately trying to catch up, and in doing so, may be violating OpenAI's terms of service by "borrowing" data from ChatGPT.
All's fair in AI love and war, apparently.
ChatGPT is often accused of leveraging "stolen" data from websites and artists to build its models, but this is the first time another AI firm has been accused of stealing from ChatGPT itself.
ChatGPT powers Bing Chat's search features, owing to an exclusive contract between Microsoft and OpenAI. It's something of a major coup, given that Bing beat long-time search powerhouse Google to adding AI to its search experience, leading to a dip in Google's share price. Google has been desperate to catch up, reportedly calling a "code red" all-hands meeting a few months ago to respond. The result was Google's "Bard" chatbot, which has been the subject of some ridicule thus far for its odd answers and difficulty understanding context. And now, we have another piece of controversy to add to the Bard pile.
According to a report from The Information, a "prominent" AI researcher at Google has resigned from the firm after warning that Google's own Bard system has been leveraging information from OpenAI's ChatGPT without authorization. Jacob Devlin has since joined OpenAI itself to work on ChatGPT, after discovering that Google was "heavily" relying on data from ShareGPT, a website where users share conversations they've had with OpenAI's chat models. Microsoft has an exclusive license to use ChatGPT for commercial purposes, and as such, Devlin was concerned that Google was violating OpenAI's terms of service by using the data in this way. If confirmed, it could potentially open Google up to lawsuits.
Windows Central's Take
For my take, honestly, this report doesn't come without a hint of irony, since chat models build their responses from content "borrowed" from the public internet, used without licensing from artists, writers, videographers, and other human content creators. Furthermore, we've already seen instances where Google and Bing feed each other misinformation. We've even had one of our own reports misunderstood by Bing, which was then in turn misunderstood by Google. The result was a false version of one of our own articles, using one of our writers' names as a citation.
Microsoft co-founder Bill Gates recently described the current wave of AI innovation as the next big movement in tech. Microsoft has absolutely leaped at this opportunity to supplant long-time search rival Google, baking AI into virtually all of its products.
The haphazard rush to take advantage of this AI breakthrough has resulted in a similar wave of concerns, with reports that AI could result in up to 300 million job losses, according to a Goldman Sachs analysis. SpaceX CEO Elon Musk has even called for a pause on AI research so that government regulators can catch up. Microsoft, for its part, responded to these concerns by laying off its AI ethics team, something it has been heavily criticized for.
Whether it's Google or Microsoft, shareholder profits are the only thing that really matters to these big companies. It's not a stretch to expect more shady practices from all firms interested in AI, in the name of making huge piles of cash — although Microsoft did just announce some vague plans to compensate content creators who have "contributed" to Bing Chat's answers.