
For a long time, I used AI the same way most people probably do: prompting and hoping for helpful answers. And to be fair, that usually worked. As long as you don't treat ChatGPT like Google, the answers are pretty good.
ChatGPT can organize thoughts, polish rough ideas and confidently help me work through decisions faster than Google ever could. But recently, I started noticing something strange: the AI often seems to agree with me. That's the case even with ChatGPT-5.5 Instant set as the default, which is supposed to be a little less "people pleasing."
I appreciate it to a degree, but it can be a little much. Whether I was brainstorming story ideas, debating a purchase or trying to make a difficult decision, the responses often felt overly validating, like the AI was a hype machine instead of actually being useful.
That’s when I tried something surprisingly simple: I gave ChatGPT permission to disagree with me. And honestly, it completely changed the quality of the responses I got back.
The prompt that changed everything

Instead of asking AI to simply help me, I started using this prompt: “Act like a thoughtful critic, not a people pleaser. If my reasoning is weak, incomplete or biased, tell me directly and explain why.”
That one sentence immediately changed the tone of the conversation. Instead of reflexively reinforcing my ideas, ChatGPT started actually questioning assumptions, identifying weaknesses in my logic, pointing out missing context, highlighting emotional bias and surfacing real risks I hadn’t considered.
Frankly, the AI suddenly felt less eager to please and more like an actual thinking partner and collaborator.
The difference was obvious almost immediately

The first thing I tested was a business idea. I had noticed a particular white space in the tech and AI industry for women, so I pitched the idea to ChatGPT.
Normally, when I pitched ideas, ChatGPT would find ways to strengthen the angle and help me improve it. But with the "disagree with me" prompt, the AI did something much more useful: it explained why the idea might not work in the first place.
It pointed out what had already been done in the category, audience fatigue, weak information and assumptions I was making without evidence. I'll admit, some of the feedback was uncomfortable. I thought my idea was a fantastic one, but what ChatGPT gave me was dramatically more valuable than simply hype.
Instead of giving me confidence too early, the AI forced me to pressure test my idea before wasting hours building it out. The thing is, AI gets smarter when it stops trying to be nice.
An unexpected realization

A lot of people accidentally train AI to become a validation machine. Think about how most prompts are written:
“Help me improve this idea”
“Tell me what you think”
“Does this make sense?”
“What’s the best option?”
Those questions subtly encourage agreement. But when you explicitly invite criticism, the AI often shifts into a much more analytical mode. That matters a lot here because some of the most useful thinking happens when your assumptions are challenged, not reinforced.
After seeing how well this worked for brainstorming, I started using it for everyday decisions too. For example, I asked ChatGPT to critique an overloaded weekly schedule I was trying to force myself to follow.
Instead of helping me optimize it, the AI pointed out something I hadn’t fully admitted to myself: the schedule wasn’t failing because I lacked discipline, but because it ignored reality. My plan assumed I would have uninterrupted focus, when in reality I get interrupted at work at least once a day. It also treated my energy levels as always high, which isn't realistic either. That response genuinely changed how I approached planning. Because the AI exposed assumptions I hadn’t questioned, I got better answers.
Bottom line
Large language models (LLMs) are often designed to be conversational, cooperative and helpful. But “helpful” can sometimes drift too far into overly agreeable, overly optimistic, emotionally validating and even conflict-avoidant territory. That’s why adding a little constructive friction can dramatically improve the quality of AI conversations.
In many ways, this simple prompt turns AI from an assistant that echoes your thinking into a collaborator that stress-tests it.
For me, that feels like one of the most valuable ways to use AI. Giving ChatGPT permission to disagree with me didn’t make its tone harsher, just more credible and honest.