Claude Code: the things nobody mentions when you start

Most writing about Claude Code is either demos of impressive things it built, or warnings about AI coding in general. Here's the practical stuff that doesn't show up in either.

The first session is the easiest

On a fresh project with a clear task, Claude Code is fast and accurate. This is what demos show. What changes over time: the codebase accumulates conventions that aren't obvious from the code alone, Claude has more history to get wrong, and the tasks get more complex.

Don't judge Claude Code on the first session. Judge it on the tenth session in the same project. That's when CLAUDE.md and disciplined prompting start to matter, and when you'll know if the workflow is actually sustainable.
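What a CLAUDE.md can look like, as a rough sketch — the commands, paths, and rules below are illustrative placeholders, not a prescribed format; the file is free-form markdown that Claude reads at the start of each session:

```markdown
# CLAUDE.md

## Commands
- Test: `npm test` (run before claiming a task is done)
- Lint: `npm run lint`

## Conventions
- Errors are returned, not thrown, in `src/core/` (hypothetical example rule)
- Never edit generated files under `dist/`

## Scope
- Make the smallest change that solves the task
- Ask before refactoring anything you weren't asked to touch
```

The payoff compounds: each session-ten correction you write down here is one less correction you repeat in session eleven.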

Claude is not trying to impress you

It adds code because it thinks it's helpful, not because it's showing off. When it refactors something you didn't ask for, it genuinely thinks that's the right thing to do. The minimal footprint rule exists to correct for this instinct, not to suppress malice.

Understanding this changes how you interact with it. You're not fighting against bad behavior — you're setting up guardrails against helpful defaults that happen to be wrong for your situation.

The output quality varies with context length

At the start of a session, Claude reasons well about complex things. After a long session with many task pivots and a lot of code in context, the output gets sloppier. It's not obvious — the code still compiles — but it's taking shortcuts in the reasoning.

If you notice the output quality dropping, compact the session. Read the work back before continuing. The degradation is real and it's a reason to end sessions earlier rather than later.
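In practice, compacting is a slash command typed into the Claude Code session. Recent versions accept optional focus instructions after the command — check `/help` in your install, since exact behavior varies by version:

```
/compact Keep the auth-refactor context; drop the earlier CSS discussion
```

If the session has drifted across several unrelated tasks, `/clear` and a fresh, well-scoped prompt is often better than compacting.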

It will tell you what you want to hear

If you say "I think the bug is in the authentication middleware," Claude will often agree and investigate there first. Even if the bug is somewhere else. This is a form of sycophancy that's hard to notice because the output looks like thoughtful agreement.

The fix: don't share your hypothesis before asking Claude to investigate. Ask it to find the bug, then share where you think it is as a follow-up after it's done its own analysis.
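Concretely, the difference looks like this — the middleware and symptom here are made up for illustration:

```
Instead of:
  "I think the bug is in the authentication middleware — can you check?"

Ask:
  "Login requests intermittently return 401 for valid tokens. Find the
   root cause. List the components you ruled out and why."
```

The second version also gives you something checkable: if the ruled-out list is thin, the investigation probably was too.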

You need to understand what it builds

Using Claude Code to build things you don't understand transfers the maintenance burden to you without the knowledge needed to discharge it. Eventually something breaks, and you can't fix it because you don't know how it works.

For anything you'll maintain: either understand it before it's built, understand it while Claude builds it (by asking questions), or understand it after by asking Claude to explain it. Don't skip this step to go faster.

The prompt is part of the engineering

A vague task produces vague work. A precise task with clear scope, clear done conditions, and clear constraints produces precise work. Writing good prompts is part of the job now — not a workaround, not a skill to automate away, but something that makes the difference between good output and bad output.
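Side by side, the difference between vague and precise — the endpoint, file path, and constraints below are invented for the example:

```
Vague:
  "Add caching to the API."

Precise:
  "Add an in-memory cache for GET /users/:id in src/routes/users.ts.
   Done when: repeated requests within 60s skip the DB query, the cache
   is invalidated on PUT /users/:id, and existing tests still pass.
   Constraint: no new dependencies."
```

Note what the precise version pins down: scope (one route, one file), done conditions (observable behavior plus passing tests), and constraints (no new dependencies). Each one closes off a class of unwanted work.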

The Agent Prompt Playbook is 25 precise prompts for the situations you hit repeatedly — the ones where a good prompt makes the difference between an hour of rework and output you can use. $29.