This is post #100 on this blog. Almost all of them are about using Claude Code in daily development work. Writing them forced me to articulate things I'd been doing instinctively — and sometimes realize my instincts were wrong.
Here's what actually changed in how I think.
1. The bottleneck is almost never the code
I thought Claude Code would make me a 10x developer. In terms of raw code output, it does. But that was never the bottleneck.
The bottleneck is requirements clarity. Knowing exactly what you want. Catching edge cases before they become bugs. Communicating intent clearly enough that the code Claude writes actually solves the problem you have.
Claude is a multiplier. If you're unclear about what you want, it amplifies that ambiguity. If you're precise, it amplifies that precision.
2. The session handoff problem is real
Every Claude Code session starts cold. All the context from previous sessions — the decisions made, the things you tried that didn't work, the reasoning behind the current architecture — is gone.
I spent months being surprised by this. Now I treat session documentation as core development work, not overhead. A good CLAUDE.md and good session notes are what turn Claude Code from a useful toy into something that works on a real project over time.
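For concreteness, here's a minimal sketch of the kind of CLAUDE.md I mean. The section names and every detail in them are invented for illustration, not a format Claude Code requires:

```markdown
# CLAUDE.md

## Architecture decisions
- The events table is append-only; never mutate existing rows.

## Things we tried that didn't work
- Caching at the handler layer caused stale reads under concurrent
  writes; cache at the repository layer instead.

## Conventions
- Run `make test` before proposing a commit.
- New modules go under `src/services/`, one responsibility each.
```

The point isn't the specific sections; it's that decisions, dead ends, and conventions survive the session boundary instead of living only in your head.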
3. Good prompts look like good requirements docs
The prompts that get good results from Claude share something with good requirements docs: they specify what success looks like, not just what to build. They give constraints. They anticipate edge cases.
This was humbling to realize, because writing good requirements docs is hard. Claude didn't change that. It just made the gap between clear requirements and unclear ones more visible.
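To make the gap concrete, here's a vague prompt and a requirements-style rewrite of the same ask. Every specific in it, the endpoint, the limits, the error shape, is invented for illustration:

```text
Vague:
  "Add rate limiting to the API."

Requirements-style:
  "Add rate limiting to POST /api/upload.
   Success looks like: a client making more than 10 requests per minute
   gets HTTP 429 with body {"error": "rate_limited", "retry_after": <seconds>}.
   Constraints: in-memory only, no new dependencies, keyed per IP.
   Edge cases: bursts at the window boundary, clients behind a shared proxy IP."
```

The second version specifies success, constraints, and edge cases, which is exactly what a good requirements doc does.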
4. Verification is non-negotiable
Claude will confidently return wrong answers. Not as often as early AI coding tools, but it happens regularly enough to matter. The confidence is the problem — wrong answers delivered hesitantly are easier to catch than wrong answers delivered with a well-structured explanation.
I now treat everything Claude tells me as a hypothesis. I run the code. I test the edge cases. I check its claims against the docs. This isn't distrust exactly — it's the same skepticism I'd apply to a junior developer's confident assertion.
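"Treat it as a hypothesis" looks like this in practice: probe the edge cases before trusting the happy path. The `slugify` function here stands in for whatever Claude just wrote; both the implementation and the test cases are illustrative:

```python
import re

def slugify(text: str) -> str:
    # Stand-in for a Claude-generated function under review.
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# Edge cases first: this is where confident answers quietly fail.
assert slugify("") == ""                                # empty input
assert slugify("   ") == ""                             # whitespace only
assert slugify("Hello, World!") == "hello-world"        # punctuation
assert slugify("--already--slugged--") == "already-slugged"  # repeated separators
```

Five minutes of this routinely catches problems that a confident, well-structured explanation would have sailed past.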
5. Context management is a skill
Claude's context window is not unlimited. How you manage it determines whether long sessions stay productive or degrade into confusion. Putting the right things in CLAUDE.md, knowing when to use /compact, knowing when to start fresh — these are real skills that took months to develop.
6. The best use isn't code generation
The highest-leverage use of Claude Code I've found isn't generating code. It's thinking through problems. "What are the failure modes of this design?" "What am I missing?" "Is there a simpler way to do this?"
Claude as a reasoning partner gives me faster feedback than waiting for code review. It catches things I'm too close to the problem to see. That's worth more than the code it writes.
7. It changes what you spend time on
Before Claude Code, most of my time went to writing implementation code. After, most of my time goes to understanding what to build, reviewing what Claude built, and thinking about correctness and edge cases.
That's a better distribution. It matches where judgment matters more than typing speed.
If you're newer to Claude Code and want to skip the months of figuring this out through trial and error, the Agent Prompt Playbook has the prompts and patterns that took me the longest to develop. $29.