How To Think About AI: Is It The Tool, Or Are You?
from the do-you-use-your-brain-or-do-you-replace-it? dept
We live in a stupidly polarizing world where nuance is apparently not allowed. Everyone wants you to be for or against something—and nowhere is this more exhausting than with AI. There are those who insist that it’s all bad and there is nothing of value in it. And there are those who think it’s all-powerful, the greatest thing ever, and will replace basically every job with AI bots that work better and faster.
I think both are wrong, but it’s important to understand why.
So let me lay out how I actually think about it. When it’s used properly, as a tool to assist a human being in accomplishing a goal, it can be incredibly powerful and valuable. When it’s used in a way where the human’s input and thinking are replaced, it tends to go very badly.
And that difference matters.
I think back to a post from Cory Doctorow a couple months ago where he tried to make the same point using a different kind of analogy: centaurs and reverse-centaurs.
Start with what a reverse centaur is. In automation theory, a “centaur” is a person who is assisted by a machine. You’re a human head being carried around on a tireless robot body. Driving a car makes you a centaur, and so does using autocomplete.
And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
Like an Amazon delivery driver, who sits in a cabin surrounded by AI cameras that monitor the driver’s eyes and take points off if the driver looks in a proscribed direction, monitor the driver’s mouth because singing isn’t allowed on the job, and rat the driver out to the boss if they don’t make quota.
The driver is in that van because the van can’t drive itself and can’t get a parcel from the curb to your porch. The driver is a peripheral for a van, and the van drives the driver, at superhuman speed, demanding superhuman endurance. But the driver is human, so the van doesn’t just use the driver. The van uses the driver up.
Obviously, it’s nice to be a centaur, and it’s horrible to be a reverse centaur.
As Doctorow notes in his piece, some of the companies embracing AI tech are doing so with the goal of building reverse-centaurs. Those are the ones that people are, quite understandably, uncomfortable with, and they deserve to be mocked. But the reality is that those efforts also seem quite likely to fail.
And they’ll fail not just because they’re dehumanizing—though they are—but because the output is garbage. Hallucinations, slop, confidently wrong answers: that’s what happens when nobody with actual knowledge is checking whether any of it makes sense. When AI works well, it’s because a human is providing the knowledge and the creativity.
The reverse-centaur doesn’t just burn out the human. It produces worse work, because it assumes that the AI can provide the knowledge or the creativity. It can’t. That requires a human. The power of AI tools is in enabling a human to take their own knowledge and their own creativity and enhance them, to do more with them, based on what the person actually wants.
To me it’s a simple question of “what’s the tool?” Is it the AI, used thoughtfully by a human to do more than they otherwise could have? If so, that’s a good and potentially positive use of AI. It’s the centaur in Doctorow’s analogy.
Or is the human the tool? Is it a “reverse centaur”? I think nearly all of those are destined to fail.
This is why I tend not to get particularly worked up by those who claim that AI is going to destroy jobs and wipe out the workforce, replacing everyone with bots. It just… doesn’t work that way.
At the same time, I find it ridiculous to see people still claiming that the technology itself is no good and does nothing of value. That’s just empirically false. Plenty of people—including myself—get tremendous use out of the technology. I use it regularly, in all sorts of different ways. It’s been two years since I wrote about how I used it to help as a first-pass editor.
The tech has gotten dramatically better since then, but the key insight to me is what it takes to make it useful: context is everything. My AI editor doesn’t just get my draft writeup and give me advice based on that and its training—it also has a sampling of the best Techdirt articles, a custom style guide with details about how I write, a deeply customized system prompt (the part of AI tools that is often hidden from public view) and a deeply customized starting prompt. It also often includes the source articles I’m writing about. With all that context, it’s an astoundingly good editor. Sometimes it points out weak arguments I missed entirely. Sometimes it has nothing to say.
(As an aside, in this article, it suggested I went on way too long explaining all the context I give it to give me better suggestions, and thus I shortened it to just the paragraph above this one).
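For the technically curious, here’s a rough sketch of what that kind of context assembly looks like in code. To be clear, this is not my actual setup: the file names and the model ID are placeholders made up for illustration, though the API call itself uses the real Anthropic Python SDK. The point is just how much the model gets to see beyond the draft itself.

```python
# A minimal sketch of the context-assembly idea, not the actual setup.
# File names and the model ID below are illustrative placeholders; the
# API call is the real Anthropic Python SDK (pip install anthropic,
# with ANTHROPIC_API_KEY set in the environment).
from pathlib import Path

import anthropic

def load(name: str) -> str:
    return Path(name).read_text()

# Everything here is context the model sees before the draft itself.
system_prompt = "\n\n".join([
    load("editor_instructions.md"),  # who the editor is, what feedback to give
    load("style_guide.md"),          # how this particular writer writes
    load("sample_articles.md"),      # a sampling of strong past pieces
])

user_prompt = "\n\n".join([
    "Source material for the piece under review:",
    load("source_articles.md"),
    "Draft to edit:",
    load("draft.md"),
])

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=2000,
    system=system_prompt,
    messages=[{"role": "user", "content": user_prompt}],
)
print(reply.content[0].text)
```

None of those files do anything magical on their own; the value comes from the model seeing a lot more than the draft before it says a word.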
It’s not always right. Its suggestions are not always good. But that’s okay, because I’m not outsourcing my brain to it. It’s a tool. And way more often than not, it pushes me to be a better writer.
This is why I get frustrated every time people point out a single AI fail or hallucination without context.
The problem only comes in when people outsource their brains. When they become reverse centaurs. When they are the tool instead of using AI as the tool. That’s when hallucinations or bad info matter.
But if the human is in control, if they’re using their own brain, if they’re evaluating what the tool is suggesting or recommending and making the final decision, then it can be used wisely and can be incredibly helpful.
And this gets at something most people miss entirely: when they think about AI, they’re still imagining a chatbot. They think every AI tool is ChatGPT. A thing you talk to. A thing that generates text or images for you to copy-paste somewhere else.
That’s increasingly not where the action is. The more powerful shift is toward agentic AI—tools that don’t just generate content, but actually do things. They write code and run it. They browse the web and synthesize what they find. They execute multi-step tasks with minimal hand-holding. This is a fundamentally different model than “ask a chatbot a question and get an answer.”
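If it helps to make that concrete, here’s a toy sketch of the loop at the heart of an agent. Everything in it is made up for illustration: the “model” is a hard-coded stub standing in for an LLM call, and the “tools” are fakes. But the shape is the real idea: propose a step, execute it, feed the result back, and repeat until done.

```python
# A toy sketch of the agentic loop. The model and tools are stubs so
# this runs as-is; a real agent would replace propose_step() with an
# LLM call and expose real tools (web search, code execution, etc.).

def propose_step(goal: str, history: list[str]) -> dict:
    """Stand-in for the model: picks the next action from the goal and
    the results gathered so far."""
    if not history:
        return {"tool": "search", "arg": goal}
    if len(history) == 1:
        return {"tool": "summarize", "arg": history[0]}
    return {"tool": "done", "arg": history[-1]}

def run_tool(tool: str, arg: str) -> str:
    """Stand-in tools. Real ones would hit the web, run code, edit files."""
    if tool == "search":
        return f"[three articles found about: {arg}]"
    if tool == "summarize":
        return f"[summary of {arg}]"
    raise ValueError(f"unknown tool: {tool}")

def run_agent(goal: str) -> str:
    history: list[str] = []
    for _ in range(10):  # hard cap so the loop can't run away
        step = propose_step(goal, history)
        if step["tool"] == "done":
            return step["arg"]  # final answer goes back to the human
        history.append(run_tool(step["tool"], step["arg"]))
    return "gave up"

print(run_agent("centaurs vs. reverse-centaurs"))
```

Even in a toy version, two design choices matter: the loop has a hard cap so it can’t run away, and the final answer comes back to a human who decides what to do with it.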
I’ve been using Claude Code recently, and this distinction matters. It’s an agent that can plan, execute, and iterate on actual software projects, rather than just a tool talking to me about what to do. But, again, that doesn’t mean I just outsource my brain to it.
I often put Claude Code into plan mode, where it works out a plan before doing anything. Then I spend quite a lot of time exploring why it made certain decisions, asking it to walk through the pros and cons of those decisions, and even to point me to alternative sources so I can understand the trade-offs of what it’s recommending. That back and forth has been educational for me, and it also leaves me with a better understanding of, and more comfort with, the projects I eventually use Claude Code to build.
I am using it as a tool, and part of that is making sure I understand what it’s doing. I am not outsourcing my brain to it. I am using it, carefully, to do things that I simply could not have done before.
And that’s powerful and valuable.
Yes, there are so many bad uses of AI tools. And yes, there is a concerted, industrial-scale effort to convince the public they need to use AI in ways that they probably shouldn’t, or in ways that are actively harmful. And yes, there are real questions about what it costs to train and run the foundation models. And we should discuss those and call those out for what they are.
But the people who insist the tools are useless and provide nothing of value are just wrong. Similarly, anyone who thinks the tech is going to go away is entirely wrong. There likely is a funding bubble. And some companies will absolutely suffer as it deflates. But it won’t make the tech go away.
When used properly, it’s just too useful.
As Cory notes in his centaur piece, AI can absolutely help you do your job, but the industry’s entire focus is on convincing people it can replace your job. That’s the con. The tech doesn’t replace people. But it can make them dramatically more capable—if they stay in the driver’s seat.
The key to understanding the good and the bad of the AI hype is understanding that distinction. Cory explains this in reference to AI coding:
Think of AI software generation: there are plenty of coders who love using AI, and almost without exception, they are senior, experienced coders, who get to decide how they will use these tools. For example, you might ask the AI to generate a set of CSS files to faithfully render a web-page across multiple versions of multiple browsers. This is a notoriously fiddly thing to do, and it’s pretty easy to verify if the code works – just eyeball it in a bunch of browsers. Or maybe the coder has a single data file they need to import and they don’t want to write a whole utility to convert it.
Tasks like these can genuinely make coders more efficient and give them more time to do the fun part of coding, namely, solving really gnarly, abstract puzzles. But when you listen to business leaders talk about their AI plans for coders, it’s clear they’re not looking to make some centaurs.
They want to fire a lot of tech workers – they’ve fired 500,000 over the past three years – and make the rest pick up their work with coding, which is only possible if you let the AI do all the gnarly, creative problem solving, and then you do the most boring, soul-crushing part of the job: reviewing the AIs’ code.
Criticize the hype. Mock the replace-your-workforce promises. Call out the slop factories and the gray goo doomsaying. But don’t mistake the bad uses for the technology itself. When a human stays in control—thinking, evaluating, deciding—it’s a genuinely powerful tool. The important question is just whether you’re using it, or it’s using you.