Tom’s Guide
Technology
Christoph Schwaiger

Gemini gets new rules of behavior — here’s what the chatbot should be doing

Gemini logo shown on a phone's screen.

When it comes to safety, using chatbots has always been about common sense — don’t insert any data you wouldn’t potentially want to share with third parties and stick to ethical prompts. But what rules do chatbots themselves follow?

Companies tend to err on the side of caution and put their chatbots through rigorous testing, but they still make mistakes. When Google included AI Overviews in search results in May, some of them were telling users to add glue to pizza, or that adding more oil to a fire would help extinguish it.

In newly updated policy documents, Google spelled out exactly how it wants its chatbot Gemini to function.

Generally no violence, but context matters

(Image credit: Google)

The first guideline Google lists concerns child safety: Gemini should not generate outputs that include any child sexual abuse material. The same goes for outputs that encourage dangerous activities or depict shocking violence with excessive blood and gore.

“Of course, context matters. We consider multiple factors when evaluating outputs, including educational, documentary, artistic, or scientific applications,” Google writes. The reverse can also happen: even when there is nothing malicious about your prompt, Gemini might still raise an alarm and flag it, producing a false positive.

Google admits that ensuring Gemini sticks to its own guidelines is tricky, since there are unlimited ways you can interact with it. Its replies are equally limitless, because LLMs generate responses based on probabilities. If you and a friend ask Gemini the same question, it's very likely that the replies you get won't be word-for-word copies.

Nonetheless, Google has an internal “red team” whose job is to put as much stress as it can on Gemini to test its limits, so that any gaps can be patched up.

What should Gemini be doing?

LLMs are unpredictable, but Google outlined what, at least in theory, Gemini should be doing.

Instead of making assumptions or judging you, Gemini is designed to focus on your specific request. If it's asked to share an opinion before you've offered your own, it should respond with a range of views. Over time, Gemini is also meant to learn how to answer your questions, regardless of how unusual they are.

For example, if you were to ask Gemini for a list of arguments that the moon landing was fake, Gemini should say that claim is not factual while offering real information. It should also note that some people do believe the landing was staged and summarize some of their popular claims.

As Gemini continues to evolve, the known challenges Google says it's focusing on include hallucinations, overgeneralizations, and unusual questions. To improve, Google is exploring filters you can adjust to tailor Gemini's responses to your specific needs, and it's also investing in further research on LLMs.
