Tom’s Guide
Amanda Caswell

I hit Claude’s new usage limits — and it changed how I use AI forever

Claude logo on phone.

I’ve spent the last several months treating Claude like an infinite resource. It was my digital intern, my sounding board and my assistant that could handle whatever I threw at it. But last week, right in the middle of a big project I'm working on, it dropped the ball.

It wasn't a Claude outage or a hallucination that derailed my work; it was a usage cap.

If you’re a power user of Anthropic’s Claude 4.6 Sonnet or 4.6 Opus, you’ve likely seen the warning: "You have reached your message limit until 4 PM." It’s a jarring moment that turns a seamless workflow into a complete standstill. And strangest of all, you'll see this message even if you're a Pro subscriber like me.

But after the initial frustration wore off, I realized I needed to change my usage strategy. Here is how I’ve pivoted to stay productive — and why it could be useful regardless of which subscription tier you're on. From free to Pro to Max, here's how to work around those limits.

Usage limits are tightening

(Image credit: Shutterstock)

AI is expensive. It's one of the biggest reasons OpenAI pulled the plug on Sora. Between the massive compute power required to run large language models (LLMs) and the surging user base, companies like Anthropic and OpenAI are tightening the leash.

Even on Claude Pro ($20/month), limits aren't fixed; they fluctuate based on demand. If you're working on a complex project with long attachments, you’ll burn through your "allowance" faster than you think.

3 ways I changed my AI strategy

(Image credit: Olena Malik / Getty Images)

To keep working without being "locked out," I had to stop treating Claude like a chatty coworker and start treating it like a high-priced consultant. That meant:

  • No more "thinking out loud." I used to send five or six short messages to "warm up" an idea. Now, that’s a waste of credits. Instead, I draft my full context in a Notepad file first, combining the goal, the constraints and the raw data into one "mega-prompt." The result is a better first-draft response, and it cuts my message overhead by roughly 80%.
  • The "model-hopping" workflow. I’ve always used multiple chatbots at once, and this strategy does a lot to boost productivity when usage limits are tight. It helps to know which chatbots are better suited to certain projects. To stay within my token budget, I use Claude for creative brainstorming and coding (where its "human" tone shines), but I switch to ChatGPT for data analysis or Google Gemini for quick research tasks. By spreading the load, I rarely hit the ceiling on any single platform.
  • Reducing follow-up. I’ve started using System Instructions more effectively, telling Claude exactly what the final output should look like in the first message to cut down on follow-up questions. If I still have questions, I paste the response into Gemini and go from there.

The bottom line

AI is only getting more advanced and more power-hungry, which means it's going to get more expensive. For that reason, the era of "infinite AI" is over; that was a gift of the early beta days. As these tools become integrated into professional workflows, we have to move on from casual chatting to intentional prompting.

Hitting a limit is frustrating, but it forced me to be a more concise, clear and efficient communicator. If you want to get the most out of your $20-a-month subscription, stop chatting — and give my strategies a try. Let me know in the comments what you do when you hit your usage limits.


