Google hurriedly unleashed Bard, an AI chatbot, as ChatGPT's success threatened to snatch one of its main target audiences: information seekers. In late March, Google opened Bard to the public. (If you're interested in trying it out yourself, you can sign up here.)
Reviewers' first impressions of Bard have been largely positive (although many complain that it is inaccurate at times), but inside Google's offices, the AI chatbot is apparently deeply unpopular. According to a scathing Bloomberg report, Google is sidelining its ethical commitments in favor of winning the AI race against OpenAI (ChatGPT's creator).
3 reasons why Google employees hate 'Bard'
Bloomberg says it obtained an internal Google document and spoke with 18 current and former Google employees to gauge how Bard is perceived inside the company. What it found is not a pretty picture.
One worker called Bard a "pathological liar"; another called it "cringeworthy." Check out three reasons why Google employees do not approve of Bard:
1. Google Bard is egregiously wrong. Bloomberg said that one employee complained about Bard's inaccurate answers. In one instance, they asked Bard how to land a plane, and it delivered a reply that would lead to a deadly crash. Another employee said they asked Bard for advice on scuba diving, and its response "would likely result in serious injury or death."
Despite the complaints, Google rolled out the ChatGPT rival anyway.
2. Google is allegedly indifferent to ethical concerns. Before the ChatGPT craze, Google was cautious about the AI that underpins Bard and was far more willing to take its time working through ethical hurdles before rolling it out to the public. That risk aversion evaporated, however, when ChatGPT crashed the party.
It's worth noting that Timnit Gebru and Margaret Mitchell (Google's former ethical AI leaders) were both ousted after a dispute regarding the fairness of the company's AI principles. Employees still working on the team say they don't know how to keep doing their jobs as ethics advocates without putting their positions at risk. Yikes!
In March, employees told Google AI Principles Ops and Governance Leader Jen Gennai that Bard did not pass a risk evaluation because it could "cause harm," but she overruled the report.
3. Google Bard's release was too rushed. As mentioned, Google was anxious to get ahead of the AI game, so it slapped the word "experiment" on Bard. The search-engine giant hoped that this label would earn some forgiveness from the public for its shortcomings. The Bloomberg report called out Google for rushing out a service that has a "profound societal impact."
Remember, Google itself had reservations about its generative AI and did not want to roll out any consumer-friendly versions of its large language model (LLM) developments until it saw OpenAI stepping on its toes.
Bottom line
Google employees whipping out their pitchforks at Bard because it isn't ready yet mirrors a recent report about Apple's mixed-reality headset. Several employees, speaking anonymously, revealed that there is major dissension over the upcoming head-mounted display. However, the higher-ups have stuffed their fingers in their ears and are reportedly rolling out the headset at WWDC 2023.
As time goes on, we can't help but wonder which side will end up eating their words: the top dogs who pressured their teams to roll out crudely made products, or the employees who feel that quality, not outpacing the competition, should be the top priority.