How I use Claude to think

I don't use AI to write for me. I use it to think with me. That's not a semantic difference. If you hand Claude a prompt and publish what comes back, you've outsourced your judgment. You'll produce a lot of mediocre work very fast. If you use it to pressure-test your thinking from angles you'd miss alone, you've gained something you can't get any other way.

I start with a position. I never open a session with "what should I think about X." I come in with a thesis, a half-formed plan, an argument I'm not sure holds up. Claude is a thinking partner, not a thinking replacement. If you don't have a point of view going in, you'll accept whatever comes back.

I build the thing, then I break it. First session, I work with Claude on the research. Then I start a new session and begin kaizen. I launch a bunch of agents to stress test it, each with a different point of view. How many depends on the problem. If I'm testing website copy for conversion, it's twenty personas across different buyer segments. If I'm evaluating vendors, perhaps it's three. If it's something internal, it's the actual team members who'll be affected. Claude is good at sizing the right set of perspectives when you give it the context.

I synthesize and loop. I take those reactions, decide what's signal and what's noise, and synthesize a revised version. New session. New agents, different lenses. Synthesize again. Sometimes that's one loop. Sometimes it's three. Sometimes I walk away for a few days, find something new on the web, and come back with the original doc and the new input and say "run a kaizen loop against this with these personas." The loop reopens whenever the thinking needs it.
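The fan-out-and-synthesize loop above can be sketched in code. This is a minimal illustration under stated assumptions: `run_session` is a hypothetical stand-in for a real model call in a fresh session (in practice, something like the Anthropic SDK), the personas are invented examples rather than the author's actual roster, and `revise` represents the human synthesis step.

```python
# Illustrative personas -- the real set is sized per problem,
# from three vendor evaluators up to twenty buyer segments.
PERSONAS = [
    "skeptical CFO focused on cost",
    "first-time buyer comparing vendors",
    "engineer who has to execute the plan",
]

def run_session(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical placeholder: in practice this would open a clean
    # session and return the model's reply to the persona prompt.
    return f"({system_prompt}) critique of: {user_prompt[:30]}"

def stress_test(draft: str) -> list[str]:
    """Fan the draft out to every persona, one fresh session each."""
    return [
        run_session(f"You are a {p}. Convince me this is wrong.", draft)
        for p in PERSONAS
    ]

def kaizen_loop(draft: str, revise, rounds: int = 2) -> str:
    """Stress-test, hand the reactions to the human `revise` step, repeat."""
    for _ in range(rounds):
        draft = revise(draft, stress_test(draft))
    return draft
```

The key structural choice mirrors the essay: every critique comes from a clean session, and the synthesis between rounds stays with the human, not the model.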

One of my most frequent prompts is "convince me why I'm totally wrong." If Claude can't make a strong case against my position, I have more confidence in it. If it can, I just learned something.

Then I write for the team. Once the plan is solid, new session. I write the execution version. Then I run agents as the actual people who'll be reading the work. Does this make sense to Oliver? Is Liz going to push back on the timeline? Will the customer convert? That round isn't about the idea anymore. It's about how the people on the receiving end will take it and whether they can act on it.

Fresh sessions are the discipline. Long conversations rot. Claude starts agreeing with you, echoing your framing, losing its edge. Every phase gets a clean session: planning, stress testing, drafting. It feels slower but it's faster, and the output is incomparably better.

You have to think. This is the part that doesn't fit in a process doc. You can run kaizen loops all day, but if you're not a critical reader of what comes back, you're just generating noise. The tool multiplies whatever you bring to it. Bring sharp, critical thinking and you get sharper thinking back. Bring nothing and you get polished nothing, and you will be exposed.

AI: 20% | Human: 80% — Jesse described his full workflow, directed the framing, wrote the content, and edited every draft. Claude structured the essay, ran research agents to validate the approach against published best practices, and drafted iterations.
