After a shaky start at its unveiling last month, Google has opened up its artificial intelligence (AI) chatbot Bard to more users.
The company is competing with other tech giants in the fast-moving AI space, while fending off threats to the world's most profitable search engine.
Google admits its Bard technology isn't perfect — so let's take a look at what it can do, what it can't, who can access it, and some of the early reviews.
What is Bard and what can it do?
Bard is a conversational AI chatbot powered by what is known as a large language model (LLM) — a technology also used by chatbots such as OpenAI's ChatGPT and Microsoft's Bing — which is trained on large amounts of text and information from the internet.
Google says Bard is powered by "a lightweight and optimised version of LaMDA (Language Model for Dialogue Applications)", the same system a Google engineer once claimed was sentient, which the company denied was true.
Google describes Bard as "an early experiment", and at the time of this story had only opened it to testers in the United States and United Kingdom.
When given a prompt, Bard generates a response by "selecting, one word at a time, from words that are likely to come next", the company says.
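That "one word at a time" process can be sketched as a tiny sampling loop. The word probabilities below are invented purely for illustration — a real large language model computes them with a neural network over the whole conversation, not a hand-written table:

```python
import random

def generate(next_word_probs, prompt, max_words=5, seed=0):
    """Toy next-word sampler: at each step, pick the next word
    at random, weighted by how likely it is to come next."""
    rng = random.Random(seed)  # seeded so the sketch is repeatable
    words = prompt.split()
    for _ in range(max_words):
        context = words[-1]  # real models condition on far more context
        candidates = next_word_probs.get(context)
        if not candidates:
            break  # no known continuations for this word
        choices, weights = zip(*candidates.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# Hypothetical probabilities for which word follows which.
probs = {
    "reading": {"more": 0.7, "books": 0.3},
    "more": {"books": 0.8, "often": 0.2},
    "books": {"daily": 1.0},
}

print(generate(probs, "reading", max_words=3))
```

The sketch captures only the sampling step; everything that makes systems like Bard useful — and error-prone — lies in how the real probabilities are learned from internet-scale text.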
Here's an example of it being asked to brainstorm some quick tips:
In a blog post, Google executives Sissie Hsiao and Eli Collins say Bard can help "boost your productivity, accelerate your ideas and fuel your curiosity".
"You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post," they say.
Bard can also offer up a few draft responses so users can choose the best starting point for the conversation, like in this example:
Google admits Bard 'gets some things wrong'
Just like other AI chatbots, Bard is not perfect.
The system displayed incorrect information when it debuted publicly last month, leading to a nearly eight per cent drop in the stock price of Google's parent company Alphabet that day.
Google has also been up front about Bard's problems.
In their blog post, Google's Ms Hsiao and Mr Collins say while LLMs are an exciting technology, "they're not without their faults".
"For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently," they say.
"For example, when asked to share a couple [of] suggestions for easy indoor plants, Bard convincingly presented ideas … but it got some things wrong, like the scientific name for the ZZ plant."
In the below example, Bard says the scientific name for the ZZ plant is Zamioculcas zamioculcas, when it is actually Zamioculcas zamiifolia — everybody knows that.
These kinds of falsehoods are known in technology circles as AI hallucinations.
Much like ChatGPT and Bing, Bard has a disclaimer under its text box warning users that it "may display inaccurate or offensive information that doesn't represent Google's views".
The company is allowing access to Bard through a separate site from its search engine, and providing a "Google It" button so that users can check the accuracy of the information Bard displays. Google says it will eventually be "thoughtfully integrating" the technology behind Bard into its search engine.
The tech giant is also limiting how much testers can interact with Bard right now, which is something Microsoft did after media coverage pointed out some of the strange things the Bing chatbot was doing in its early days.
What are the early reviews of Bard like?
Early impressions appear to be mostly positive, but reviewers have been quick to point out that Google has given itself more time by being slower to release its chatbot than its rivals.
In his time using Bard, The Verge's James Vincent found that it was able to answer simple queries and recommend a list of popular heist movies, but sometimes struggled to produce factual information.
"Although the chatbot is connected to Google's search results, it couldn't fully answer a query on who gave the day's White House press briefing (it correctly identified the press secretary as Karine Jean-Pierre but didn't note that the cast of Ted Lasso was also present)," he said.
"It was also unable to correctly answer a tricky question about the maximum load capacity of a specific washing machine, instead inventing three different but incorrect answers."
Mr Vincent said Bard appeared to be faster than ChatGPT and Bing, but it reportedly gave him this contentious line when it was asked about Crimea: "Russia has a long history of ownership of Crimea."
It also reportedly got a little bit saucy with him:
The Verge's editor-at-large, David Pierce, found that Bard was "a noticeably worse tool than Bing" when it came to finding useful and correct information, and described it as "not much of a productivity tool".
MIT Technology Review said Bard wouldn't provide tips on how to make a Molotov cocktail — which is a good thing (because a Molotov is a weapon and not an actual cocktail).
Bard reportedly wouldn't provide any medical information either, such as how to spot signs of cancer, but managed to write an invitation for a child's birthday party while finding and using the correct address for a specific indoor gymnasium.
TechRadar praised Bard's design, and said it was "sometimes scarily" fast, but found the system repeated itself three times while writing a film review.
When will Bard be available in Australia?
Google has not given a date for when people in Australia will be able to test or use Bard, but it says the system will become available in more countries and languages over time.
Until now, Bard had only been available to a group of hand-picked "trusted testers" chosen by Google.
It's unclear how many people in the US and UK will be able to test it before it becomes available in more countries.