Claude Code · Workflow

What Claude Code can't do (and where people waste time finding out)

There are things Claude Code is genuinely bad at. Knowing them upfront saves you from expensive detours.

Knowing when it's wrong

This is the big one. Claude will write code that compiles, passes lint, and does the wrong thing — with full confidence. It does not have a reliable internal signal for "I'm not sure about this." It has hedging phrases, but those don't correlate well with actual uncertainty.

You cannot trust Claude's confidence level. You have to verify the output independently, especially for anything touching security, data integrity, or external APIs.
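One concrete form of independent verification is a handful of edge-case assertions you write yourself, without looking at the implementation, and run against the generated code. A minimal sketch — `slugify` here is a hypothetical stand-in for a Claude-written helper, not a real API:

```python
import re

def slugify(title: str) -> str:
    # Hypothetical Claude-written helper: lowercase the input,
    # keep alphanumeric runs, join them with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Your own edge cases, chosen before reading the implementation.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaced   out  ") == "spaced-out"
assert slugify("") == ""            # empty input should not crash
assert slugify("123 go") == "123-go"
```

The point is not the specific function; it's that the checks come from your understanding of the requirement, not from Claude's description of what the code does.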

Understanding your business logic

Claude can read your code and infer what it does. It cannot understand why you made the choices you did, what constraints exist outside the codebase, or what the right behavior is for a case that hasn't been written down anywhere.

If the answer to "what should this do?" requires knowledge that exists only in your head or your team's history, Claude will guess. Sometimes the guess is fine. Sometimes it's wrong in a way that takes days to find.

Staying in scope over long sessions

At the start of a session, Claude follows your scope definition well. Three hours in, with a long context and multiple task pivots, it starts doing things adjacent to what you asked. Not because it forgot — because the scope signal gets diluted by all the other content in context.

The fix is explicit scope reminders and a minimal footprint rule in CLAUDE.md, but their effect degrades over a long session and requires active maintenance. It is not a set-and-forget property.
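A scope section in CLAUDE.md can be short. The wording below is illustrative, not a canonical format:

```markdown
## Scope rules

- Only modify files explicitly named in the current task.
- Do not refactor, rename, or "improve" adjacent code.
- If a change outside the stated scope seems necessary, stop and ask.
```

Even with this in place, expect to restate the scope in-session after a few pivots.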

Debugging its own previous output

When Claude wrote code two sessions ago and it's now broken, asking Claude to debug it often produces new code that looks different but has the same underlying problem. It pattern-matches to the error and tries fixes rather than tracing the actual logic.

The isolation prompt helps: give it only the failing code and the exact error, cut out everything else. But it's genuinely harder for Claude to debug its own previous work than fresh code with a clear problem statement.
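In practice the isolation prompt can be a fresh message containing nothing but the failing code and the error. The exact wording is illustrative:

```
Here is a function and the error it produces. Ignore any earlier
versions of this code. Trace the logic line by line before
proposing a fix.

[paste only the failing function]

[paste the exact error message and stack trace]
```

Starting a new session for this, rather than continuing the old one, removes the context that pulls Claude back toward its previous reasoning.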

Estimating complexity

Claude will tell you a task is straightforward when it isn't. Not always, but often enough that "this should be simple" is not useful information. The only way to know if something is simple is to try it.

Related: Claude will say "this will take a few steps" and then take 15 steps. Its pre-task estimates are not reliable. Plan for more than it says.

Things it doesn't know it doesn't know

If your codebase uses an internal library, a recent API version, or an unusual pattern, Claude might use the wrong version or pattern without flagging it. It defaults to the most common version of something in its training data, which may not match your environment.

Always tell Claude the versions of key dependencies. Better: have it read your package.json or equivalent before touching code that depends on them.
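One way to make this routine is a standing instruction in CLAUDE.md rather than a per-task reminder. The phrasing is illustrative:

```markdown
## Dependencies

- Before using any third-party API, read package.json (or the
  project's equivalent) and match the installed major version.
- If a dependency's version is not listed, ask rather than assume
  the latest.
```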

The Agent Prompt Playbook has prompts specifically for the situations where Claude is most likely to go wrong. $29.