
Is using AI every day making you worse at thinking?

More people are noticing something uncomfortable: heavy AI use seems to be degrading their ability to do things without it. The research is starting to back them up.

By Sara Morales · April 3, 2026

A Reddit post from last week has been shared thousands of times. The title: "Using AI daily is making me noticeably worse at doing things without it."

The poster had been using ChatGPT heavily for six months and noticed their ability to write from scratch had degraded. They'd stare at a blank document waiting for AI to start it. Emails that used to take five minutes now felt hard without AI assistance. Even holding a long train of thought, rather than outsourcing it, seemed to have gotten weaker.

The comments were full of people recognizing themselves.

This Is Probably Real, Not Just Anxiety

The instinct to dismiss this as technophobia or confirmation bias is understandable. But there's actual research backing the concern. A 2025 study from Microsoft and Carnegie Mellon University on generative AI's impact on critical thinking found that knowledge workers who relied on AI tools more heavily reported reduced engagement in their own critical reasoning. The more people trusted AI output, the less they verified it, and the less they reasoned through problems independently.

This isn't new. GPS navigation reduced people's ability to build internal maps of cities. Calculators reduced mental arithmetic. Spell-check reduced spelling ability. The pattern of cognitive offloading degrading the offloaded skill is well documented.

The question isn't whether it happens. It's whether it matters.

When It Matters and When It Doesn't

Nobody cares that GPS hurt our ability to navigate by landmark. The skill is less useful now. The offloading was a net win.

Writing and reasoning are different. The ability to think through a problem without a tool, to hold complexity in working memory, to produce a coherent argument from scratch - these aren't just instrumental skills that AI can replace. They're how you evaluate whether the AI output is any good.

If you can't write a clear paragraph yourself, you can't tell whether the paragraph Claude or ChatGPT wrote is good or merely plausible-sounding. If you can't reason through a technical problem independently, you can't evaluate whether the AI solution is correct or just confident. The skill being eroded is the same skill you need to check the AI's work.

That's the loop that makes the degradation matter more here than it did with GPS.

What Heavy AI Users Are Doing About It

The people who seem most thoughtful about this aren't avoiding AI - they're being deliberate about which tasks they let AI do and which they don't.

A few patterns come up repeatedly. Some people keep "no AI" zones for certain tasks: they draft personal emails without it, or write their first-pass thinking in a notebook before opening a chat window. Others use AI for research and information gathering but force themselves to write outputs from scratch. Some set a simple rule: before asking AI, spend five minutes trying it yourself.

None of this is about being anti-AI. It's about the same logic as going to the gym when you have a car. You don't walk everywhere because cars are bad. You deliberately exercise the capability you'd otherwise lose.

The Counterargument Worth Taking Seriously

The strongest pushback on all of this: maybe the cognitive skills being degraded aren't the ones that matter in a world where AI can do them better. Maybe spending cognitive effort on writing a first draft from scratch is like spending effort on long division when calculators exist. The effort isn't the point. The output is.

If the AI version of your email is clearer and more effective than the one you'd write yourself, does it matter that your ability to write emails unaided is declining? There's a real argument that the answer is no.

We're not sure. We think it probably matters that you can think clearly without a tool, because thinking clearly is foundational to everything else, including evaluating the tools you use. But the honest answer is that nobody really knows yet what heavy AI use does to cognition over years or decades. The research is early.

A Practical Suggestion

Try one week of tracking which cognitive tasks you reach for AI to start. Not to complete - to start. The first draft, the first outline, the first approach to a problem. Notice which things you no longer attempt without AI scaffolding first.

That list is probably fine. It might also contain a few things where you'd rather keep the muscle.

The tools are good. They're also changing how we think. Both things are true at the same time, and the second one deserves more attention than it gets in conversations about AI productivity.

