The Wishful Thinking Tax
On what we lose when we stop thinking before we prompt
It was past 2am. The cursor blinked. I had been prompting the same AI model for hours, chasing a bug through a trail of half-formed instructions and increasingly vague follow-ups. I had nothing to show for it.
I wasn’t sleeping. I wasn’t thinking. I was binge prompting, feeding fragments of intent into a machine and hoping, each time, that it would somehow assemble them into the solution I had failed to articulate. The output degraded with every iteration. The context window filled with noise. By the end, I wasn’t even sure what I was trying to fix anymore.
That night crystallised something I had been sensing for a while but hadn’t yet named: a new and quietly dangerous relationship between humans and AI, one where the very ease of delegation becomes the source of failure.
The System 1 trap
In Thinking, Fast and Slow, Daniel Kahneman describes two modes of cognition. System 1 is fast, intuitive, effortless, the mode we operate in by default. System 2 is slow, deliberate, effortful, the mode we engage when a problem demands real thought. The key tension is that System 1 constantly overestimates its own competence. We reach for the fast answer and assume it’s the right one.
AI was supposed to be a System 2 amplifier. The promise was simple: hand off the tedious parts, and you’d have more cognitive bandwidth for the important parts. What actually happened, for many of us, was something more insidious. AI became a System 1 enabler. It made shallow input feel like it should produce deep output. It lowered the activation energy for starting a task so dramatically that we stopped doing the thinking that made starting worthwhile.
We began prompting the way we scroll: reflexively, continuously, without a clear intention, hoping something useful would surface.
The delusion of wishful context
At the core of this pattern is a cognitive error I’ve started calling wishful context. It works like this. You have a rich, complex understanding of what you want: your goals, your constraints, the history of the problem, the standard you’re trying to reach. The model has none of that. It has only what you typed. And what you typed, in a hurry, with half your attention, is a pale shadow of what you actually know.
And yet we proceed as if the model received the fuller version. We assume it understands the subtext. We attribute intelligence to what is, at its core, a probabilistic pattern-matching engine operating on incomplete signals. The model isn’t failing us; we’re failing to show up with our own thinking first.
This is not a criticism of the technology. Language models are genuinely extraordinary. The failure is in the interface between human intent and machine input, a gap that is almost invisible precisely because the outputs are fluent and confident, even when they are wrong.
The compounding problem
What makes this particularly treacherous is that the damage accumulates. A poor prompt doesn’t just produce a poor output. It pollutes the context. In a stateless tool like a search engine, each query is fresh. In a language model conversation, your imprecision compounds. You iterate on a crooked foundation. You accept a response that’s sixty percent right and build from there. The errors stack. By the time you realise the direction is wrong, you’ve travelled a long way in it.
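To make the compounding concrete, here is a toy sketch. The sixty percent figure is borrowed from the paragraph above purely for illustration; nothing here measures a real model. If each iteration preserves only about sixty percent of your intent, the remainder decays geometrically:

```python
# Toy illustration of compounding imprecision across prompt iterations.
# The 0.6 "fidelity" figure is invented for illustration, not measured.
fidelity_per_step = 0.6

for step in range(1, 6):
    surviving = fidelity_per_step ** step
    print(f"after iteration {step}: ~{surviving:.0%} of the original intent intact")
```

Five rounds in, less than a tenth of the original intent survives. The numbers are invented; the geometric decay is the point.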
This is what happened to me at 2am. I wasn’t just wasting an evening. I was in a feedback loop of my own making, where each vague prompt produced a slightly wrong answer, which I partially accepted, which gave me a slightly wrong foundation for the next prompt, and so on. The model was working exactly as designed. The problem was me.
The inversion that actually works
The irony is that, in every case where AI has worked best for me, I had already done the hard thinking before I opened the tool. I knew what I was trying to produce. I could articulate the constraints. I had a clear enough model of the outcome that I could evaluate whether the response was moving toward it or away from it.
In other words, to use AI well, you have to think more, not less. The delegation only works when you’ve already done the cognitive work that makes delegation coherent. AI as the last mile, not the first.
The people who get the most from these tools are not the ones who think the least. They’re the ones who arrive most prepared.
What this means for products and people
From a product perspective, this is an underexplored design problem. The dominant design philosophy for AI interfaces has been frictionlessness: get the user to a prompt as fast as possible, surface results immediately, reduce every barrier. But some friction is valuable. The question is where to put it.
The most interesting design challenge in AI right now isn’t making the model smarter. It’s building interfaces that make the human show up smarter. That might mean structured input modes that ask you to articulate your goal before you generate. It might mean explicit context scaffolding, prompting you for the constraints, the audience, the success criteria, before handing anything to the model. It might mean making the cost of vagueness legible, rather than hiding it behind a fluent-sounding response.
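As a sketch of what that scaffolding might look like, here is a hypothetical structured-input flow in Python. Every name in it is invented for illustration; no shipping product is being described. The interface simply refuses to generate until the user has articulated a goal, constraints, an audience, and success criteria:

```python
from dataclasses import dataclass, field


@dataclass
class PromptBrief:
    """Hypothetical structured input: the thinking the user must do
    before anything is sent to a model."""
    goal: str = ""
    constraints: list[str] = field(default_factory=list)
    audience: str = ""
    success_criteria: str = ""

    def missing(self) -> list[str]:
        """Name the fields still left blank, making vagueness legible."""
        gaps = []
        if not self.goal.strip():
            gaps.append("goal")
        if not self.constraints:
            gaps.append("constraints")
        if not self.audience.strip():
            gaps.append("audience")
        if not self.success_criteria.strip():
            gaps.append("success criteria")
        return gaps


def generate(brief: PromptBrief) -> str:
    """Gate generation on a complete brief instead of a raw text box."""
    gaps = brief.missing()
    if gaps:
        # Deliberate friction: surface the cost of vagueness up front.
        return "Before generating, fill in: " + ", ".join(gaps)
    # Only a fully articulated brief gets assembled into a prompt.
    return (
        f"Goal: {brief.goal}\n"
        f"Constraints: {'; '.join(brief.constraints)}\n"
        f"Audience: {brief.audience}\n"
        f"Success criteria: {brief.success_criteria}"
    )
```

The specific fields matter less than the gate itself: the interface asks for the thinking before it offers the generation.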
From a human perspective, the discipline is simpler to describe and harder to practise: develop what I think of as prompt readiness. Before you open the tool, ask yourself whether you actually know what you want. Not roughly. Specifically. If you can’t describe the outcome you’re trying to reach, the model can’t reach it for you.
A different kind of literacy
We talk a great deal about AI literacy: understanding what these models can and cannot do. But there’s a more fundamental literacy that gets less attention: the ability to know your own mind clearly enough to instruct one.
The 2am version of me wasn’t lacking AI literacy. I understood the tool. What I lacked, in that moment, was intellectual clarity about what I actually wanted. I was outsourcing the thinking precisely because I hadn’t done it yet. And the model, faithfully, gave me back the confusion I put in.
We are in the early years of a profound shift in how cognitive labour gets distributed between humans and machines. How that shift plays out depends less on the capability of the models and more on the habits we build around using them. The question isn’t whether AI can think for you. It can’t, not really. The question is whether you show up ready to think with it.
I still catch myself reaching for the prompt before I’ve reached for the thought. The difference now is that I notice it faster, and most of the time, I stop.
