This morning, I read yet another LinkedIn post proclaiming that “writing is critical reasoning’s most important tool.” The post went on to warn that AI would atrophy our brains, that we’d lose the ability to think deeply if we let machines write for us.
Nearly three years ago to the day, on November 30, 2022, ChatGPT launched and changed everything. And somehow, we’re still having the same tired debate: Team Human versus Team Robot, AI doom versus AI salvation. Your brain will rot, versus AI will save us all.
But there’s a third position that nobody’s talking about. And it’s the one that actually matters.
The Equation That Never Made Sense
Let me be blunt: Writing well does not equal thinking well. They’re just not the same thing.
You know who couldn’t write easily? Einstein. Tesla. My father, who spent his entire career as a panel beater and never once put pen to paper to solve a complex mechanical problem. He could reshape a car panel from raw steel, conceiving the entire process in his head and executing it flawlessly. Zero drawings. Zero written plans. Pure spatial reasoning and mechanical intelligence operating at the highest level.
Tesla famously constructed entire designs in his mind before building them. No sketches. No notes. Just extraordinary thinking that bypassed writing entirely.
But according to the “writing is thinking” crowd, these people weren’t engaging in critical reasoning? That’s complete horseshit, and there’s no logical way to validate that claim.
The uncomfortable truth is this: When people say “writing is critical thinking,” what they’re really saying is “the only thinking that counts is the kind I can do.” It’s intellectual narcissism disguised as rigour.
The Pattern We Keep Repeating
Here’s what’s wild: We’ve had this exact panic before. Multiple times.
Socrates argued that writing itself would weaken people’s memories. Scholars worried the Gutenberg printing press would flood the world with low-quality thinking. When calculators arrived, everyone panicked that people would forget how to do math.
Every single communication technology, from pen and paper to the printing press to lithography to calculators to AI, has faced the same visceral resistance. And it’s always framed as concern about our thinking capacity.
But what if it was never really about thinking at all?
What’s Really at Stake
Let me tell you what I think this is actually about: power, access, and control over who gets to participate in intellectual discourse.
If AI levels the playing field for neurodivergent thinkers, for people who think brilliantly but write “differently,” for people who have revolutionary ideas but not traditional academic training—that’s threatening to existing power structures.
Knowledge is power, right? That quote gets bandied around constantly. But here’s the question nobody asks: Is knowledge powerful when you hoard it, or when you share it? Because if you’re hoarding it, who exactly gets to see that power?
The real anxiety isn’t about critical thinking. It’s about WHO gets to be considered an intellectual. And if writing is no longer the barrier to entry, the gatekeepers lose their gates.
How I Actually Use AI (And Why It’s Not Lazy)
Let me walk you through my actual process, because this is important.
I start with stream of consciousness—whether typed or spoken, depending on my environment that day. It might be messy. It’s definitely not well-structured prose. But here’s what it IS: it’s me drawing on decades of experience, connecting concepts from yesterday or last week or ten fucking years ago, reasoning through problems in real-time.
Then I use an LLM to help structure that thinking. The AI translates my stream of consciousness into something more organised.
And here’s the crucial part that the critics miss: I review everything. I go back and read what the AI produced and ask myself: Did it capture the essence of what I said? Did it use my words to articulate my message? Is this congruent with my actual thinking?
If you blindly publish AI output without review, that’s not an AI problem. That’s not a critical reasoning problem. That’s just fucking lazy.
But that laziness isn’t about the tool—it’s about the person. And pretending the tool is the problem lets intellectually lazy people off the hook.
I’m doing cognitive labour in three places:
1. Before/during the stream of consciousness: reasoning, connecting, drawing on experience
2. In the review process: checking congruence, catching gaps
3. In the iteration: refining, sharpening the argument
The “writing is thinking” crowd only counts step 1 if you type it yourself. But I’m doing all three steps—just using different tools for different parts.
The Communication Equation
Here’s something else that matters: communication is always two-sided. There’s a sender and a receiver, and if what was sent doesn’t match what was received, the communication has failed.
As a neurodivergent professional, I’ve been told my whole life that I communicate wrong. Too verbose. Too tangential. Not structured properly. But that’s framing it as MY failure, when really, it’s a mismatch between my natural communication style and conventional expectations.
I’m phenomenal at speaking. I can hear arguments, assimilate information, and respond in ways that help teams succeed. Third chair on my school debating team for two years running. The thinking was always there. Getting it from my ADHD brain onto paper? That was the nightmare.
Until AI changed everything.
Now I can do what I do well—speak, connect concepts, tell stories—and use a tool to translate that into writing that has the same impact. Who’s winning here? I think I am.
What People Are Still Getting Wrong
Three years in, here’s what people still don’t understand:
AI doesn’t solve everything. We’re talking about a specific subset of tools: large language models, built on machine learning and neural network techniques that have existed for decades but are only now widely accessible. Don’t lump everything into the “ChatGPT or Claude” bucket. That’s one tool in a massive AI superset.
The “hallucination” framing is wrong. We’ve humanised these tools in ways that obscure what’s actually happening. A hallucination is a perceptual experience in a mind; how can a computer hallucinate? If you get an answer and don’t validate it, that’s not the AI’s fault. That’s you failing to apply critical reasoning, assuming that what you see is a true and fair representation of fact.
The real skill is knowing how to use the tools. If you’re rolling into ChatGPT with zero context, no system instructions, no narrowing of scope—you might as well walk into a casino with $100 and hope to walk out with $110. Good luck.
In Claude, I use Projects. I provide specific system instructions about how I want the AI to respond to me, what voice to use, and what to prioritise. That’s not cheating—that’s expertise. That’s understanding your tools.
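To make that concrete, here’s a minimal sketch of the kind of system instruction I mean. Everything in it is a hypothetical illustration (the voice, the priorities, the helper function are mine, not Claude’s actual Projects feature); the point is simply that you write the instructions once, deliberately, and every conversation starts from them.

```python
# Hypothetical sketch: assembling project-style system instructions.
# The voice, priorities, and scope below are illustrative examples,
# not a real configuration format.

def build_system_instructions(voice: str, priorities: list[str], scope: str) -> str:
    """Combine voice, priorities, and scope into one instruction block."""
    lines = [
        f"Respond in this voice: {voice}.",
        "Prioritise, in order:",
    ]
    # Number the priorities so the model (and you) can see the ordering.
    lines += [f"  {i}. {p}" for i, p in enumerate(priorities, start=1)]
    lines.append(f"Stay within this scope: {scope}.")
    return "\n".join(lines)

instructions = build_system_instructions(
    voice="direct, conversational, Australian English",
    priorities=["keep my original wording", "preserve my argument structure"],
    scope="structuring my spoken stream of consciousness into prose",
)
print(instructions)
```

Whether you paste something like this into a Project or send it as a system prompt, the effect is the same: you’ve narrowed the field of view before the first question is asked.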
And you know what? People in my industry—technology—must be quivering in their boots, thinking anyone can ask an LLM a question and get an answer. But there’s still expertise required to understand that output, to validate it, to know when it’s right and when it’s complete bullshit.
The Call: What You Should Actually Do
So here’s what I want from you:
Try the damn tools. If you haven’t used them, don’t criticise them. Don’t throw rocks from a position of ignorance.
Challenge your cognitive biases. We all have them; that’s well documented. Don’t assume everything you’re seeing and hearing in your bubble is a true, fair, accurate representation of all things out there.
Stop gatekeeping intellectual discourse. How do we get better as individuals, as communities, as societies? Through discourse. And if you’re unwilling to participate because you hold an opinion so strongly that you can’t see your own biases? Fuck off.
Validate, don’t blindly trust. If you use these tools, don’t just publish whatever comes out. Review it. Share it with someone else if you’re uncertain. But don’t pooh-pooh things just because you don’t understand them.
Experiment and learn. Use projects. Set system instructions. Help shape the answers you get by narrowing the field of view. If you choose not to do that, you’re rolling the dice.
The Third Position
We don’t have to choose between “AI will save us” and “AI will rot our brains.” There’s a third position: AI as a translation tool, a way to make different forms of intelligence visible and valuable.
For neurodivergent thinkers, for people whose brilliance doesn’t fit conventional moulds, for anyone who’s been told they don’t think “properly” because they don’t write “properly”—these tools are revolutionary. Not because they do our thinking for us, but because they translate our thinking into forms the world can finally see and hear.
Three years after ChatGPT launched, we’re still asking the wrong questions. The question isn’t whether AI helps or hurts thinking. The question is: Are we ready to value different kinds of thinking? Are we ready to admit that writing was never the only path to rigorous thought?
Because from Socrates to the printing press to ChatGPT, we’ve always feared the tools that level the playing field. And we’ve always dressed up that fear as concern for intellectual standards.
It’s time to call that what it is: gatekeeping.
And it’s time to move past it.