
As someone who writes about AI for a living, I've come to treat chatbots as an extension of myself, like a third arm or an additional toe that somehow makes me more effective day to day. And yet, while I'm happy to give my deepest thoughts and feelings to an AI, there are still plenty of topics I don't actively use them for.
This is for a variety of reasons. In some cases, it comes down to safety: avoiding any risk of my information being leaked, used for training, or otherwise misused by accident. In other cases, it's more a concern about how effectively chatbots handle certain subjects.
However, this isn't to say I avoid these topics entirely. In some cases, I simply rethink the question, or ask something related to the topic I'm after with a slight variation on the overall theme.
Here are five examples, and where possible, the ways that I would rephrase the queries to get a better response.
Breaking news

At one point, chatbots like ChatGPT had knowledge cutoffs that left them unable to answer questions about recent events. This has since changed, and thanks to the ability to search the web for answers, most chatbots can now respond to questions right up to the minute.
However, this doesn't mean their answers are always accurate. When a story is still breaking, chatbots can struggle to get the details right, because they are analyzing information that is still coming out.
While they often get details right, they can get caught up in the confusion of an unfolding event, offering up unconfirmed details or mixing up speculation and fact.
A good way around this is to ask a chatbot for information on the news along with the sources it drew its facts from, allowing you to dig into the information in your own time.
Equally, chatbots can be useful for context on a breaking topic, giving the background that explains what is happening, rather than trying to decipher the breaking news itself.
Legal advice

The likes of ChatGPT or Gemini can be useful for understanding complicated concepts or deciphering legalese where it emerges, but no chatbot should ever be relied on for actual legal advice.
Laws vary by region and aren't right for every situation. If you give a chatbot incomplete information about your legal situation, it can fill in the gaps and make assumptions that lead to an altogether wrong answer.
Instead, chatbots can be good at explaining legal concepts or deciphering what is meant by a phrase or word in a legal document, so you can understand for yourself what you are reading.
Moral or emotional advice

Chatbots have come a long way in their understanding of emotional and moral situations and are able to provide strong advice where needed. However, the complexity of these kinds of problems can trip them up.
ChatGPT doesn't know your own history and previous experiences, and, as in other situations, it needs to fill in the gaps to answer a query. This can result in it making a judgment that is wrong for you, or giving advice that is unhelpful.
Some chatbots are also tilted toward positive, agreeable responses, which can make them overly supportive when what you really need is some honest criticism of your decisions.
Anything involving personal data (health data, finances)

I have told chatbots huge amounts of personal information about myself. While that is obviously a personal choice, it is important to be wary of what information you end up giving.
Highly sensitive health data, passwords, and financial information are best kept private. While most chatbots are reasonably safe to share information with, there are always risks around training data and the potential for that information to be misused.
Instead, try describing a similar hypothetical scenario, or ask broader questions related to the information you're interested in.
Making decisions for you

When there is a hard decision to make, it can be tempting to hand it over to AI, letting it take over and decide your fate for you.
Unsurprisingly, the issue here is that chatbots, once again, can't understand the intricacies of your life, missing specific details and making a lot of assumptions based on their wider knowledge base.
Instead of asking AI to make a big decision for you, give it a full explanation of what you are trying to decide and ask it to generate a list of positives and negatives, or have it ask you a set of questions on the subject to give you a bit more perspective.

Follow Tom's Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.