I've added "Do not over-engineer the solution" to my Claude project instructions. It doesn't work.

Not completely, anyway. The AI still gives me solutions that can be simplified. Every time. Without fail. The first answer is never the simplest answer.

So now I have a habit: whatever the AI proposes, I ask if it can be simpler. And it almost always can. Which raises an uncomfortable question — why didn't it just give me the simpler version first?

The pattern

Here's what happens, over and over:

I ask for a feature. The AI builds something that works. It's well-structured, handles edge cases, follows best practices. It's also more complex than it needs to be.

I ask: "Can this be simpler?"

The AI says yes, and cuts it down. Sometimes significantly. A queue management UI becomes a simple peek bar. A multi-step validation flow becomes a single check. A scattered set of utilities gets consolidated into one file.

The simpler version works just as well. Often better — fewer moving parts means fewer things to break.
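To make the pattern concrete, here's a hypothetical before/after (all names invented, not from my actual project): the first pass wraps a trivial lookup in a configurable class with pluggable transforms; the simplified version does the same job in one function.

```python
# Hypothetical illustration of the pattern -- not code from the project.

# First pass: configurable, layered, ready for futures that may never come.
class DisplayNameResolver:
    def __init__(self, fallback="Anonymous", transforms=None):
        self.fallback = fallback
        self.transforms = transforms or [str.strip, str.title]

    def resolve(self, user: dict) -> str:
        name = user.get("display_name") or user.get("name") or self.fallback
        for transform in self.transforms:
            name = transform(name)
        return name

# After "can this be simpler?": one function, same behavior for real inputs.
def display_name(user: dict) -> str:
    return (user.get("display_name") or user.get("name") or "Anonymous").strip().title()

print(display_name({"name": "  ada lovelace "}))  # -> Ada Lovelace
```

Nothing the class does is wrong. It's just paying for flexibility nobody asked for.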

Looking at my commit history, this pattern shows up constantly:

  • refactor(callprocessing): replace queue section with bottom peek bar

  • refactor(ai): simplify deadline suggestion by moving logic to AI prompts

  • refactor: improve code quality with centralized utilities and error handling

  • refactor(specs): eliminate requirement duplicates and consolidate overlapping functionality

Each of those represents a moment where I looked at what the AI built and said "this is too much." The refactor commits aren't improvements to working code — they're corrections to over-built code.

Why does this keep happening?

I don't fully know yet. But I have some theories I'm testing:

Theory 1: The AI optimizes for completeness, not simplicity.

When you ask for a feature, the AI tries to give you everything you might need. Edge case handling. Configuration options. Flexibility for future changes. These aren't bad things — but they add complexity that you pay for immediately and may never use.

Theory 2: Training data bias toward enterprise patterns.

The AI learned from millions of codebases, many of them large enterprise projects where abstraction layers and design patterns are appropriate. For a solo project or early prototype, those patterns are overkill. The AI doesn't automatically adjust for project scale.

Theory 3: The AI is trying to impress you.

This sounds anthropomorphic, but there's something to it. A more sophisticated-looking solution might seem more valuable. The AI may have learned that complex-looking code gets better feedback than simple code, even when simple code is better.

Theory 4: It doesn't know what you'll need next.

Without full context of your project's future, the AI builds defensively — adding flexibility "just in case." You know you only need the simple version. The AI doesn't.

I'm still investigating which of these (or what combination) is actually driving the behavior. But knowing why matters less than knowing what to do about it.

What I actually do

The habit is simple: ask for simplification before accepting any solution.

My go-to prompts:

  • "Can this be simpler?"

  • "What's the minimum version that solves the problem?"

  • "Is there anything here we don't need yet?"

  • "Can you cut this in half?"

That last one is surprisingly effective. "Cut this in half" forces a real reduction, not just minor cleanup.

I also added "Do not over-engineer the solution" to my CLAUDE.md file. It helps — the initial solutions are somewhat simpler than they'd otherwise be. But it doesn't eliminate the problem. I still need to ask.
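For concreteness, here's roughly how that instruction sits in the file. Only the quoted line is the actual instruction; the surrounding structure is an illustrative sketch of a typical CLAUDE.md:

```markdown
# CLAUDE.md (illustrative sketch -- only the first bullet is quoted above)

## Code style

- Do not over-engineer the solution.
- Prefer the minimum version that solves the problem.
```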

The key insight: simplification is a separate step, not a default.

The AI won't give you the simplest solution automatically. You have to ask for it. Once I accepted that, I stopped being frustrated by it and just built the extra question into my workflow.

When simplification goes wrong

It's not always the right call. I've been burned a few times:

Over-simplified solutions that don't handle real conditions. I asked to simplify some file handling logic. The AI cut out what looked like unnecessary checks. Turns out those checks handled edge cases that appeared in production — files with unusual names, network interruptions mid-process, that sort of thing. I had to add them back.
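A hypothetical sketch of the kind of thing that bit me (function and checks invented for illustration): guards that read as removable until you hit the inputs they exist for.

```python
# Hypothetical illustration -- checks that look like dead weight but guard
# real failure modes. All names here are invented.
import os

def safe_read(path: str) -> bytes:
    # Looks redundant (open() would fail anyway), but files with unusual
    # names often arrive via user input, and failing early with a clear
    # message beats a confusing traceback deep in the pipeline.
    if not os.path.isfile(path):
        raise FileNotFoundError(f"not a regular file: {path!r}")

    with open(path, "rb") as f:
        data = f.read()

    # Looks paranoid, but an interrupted transfer can leave a truncated
    # file behind; comparing against the size the OS reports catches it.
    if len(data) != os.path.getsize(path):
        raise IOError(f"short read on {path!r}")
    return data
```

Each check reads as clutter in review. Each one exists because of a real failure mode, which is exactly why simplifying code you don't fully understand is risky.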

Cutting things that seemed redundant but weren't. In one case, I had what looked like duplicate validation — once in the UI, once in the data layer. I asked to consolidate. The AI removed the data layer validation. But that validation was the actual safety check; the UI one was just for user feedback. Subtle difference, real bug.

Simplifying too early. Sometimes the "complex" solution is complex because the problem is complex. Asking to simplify before you understand why it's built that way can remove important functionality.

The pattern in my failures: I asked to simplify things I didn't fully understand. When I understood the code well enough to evaluate it, simplification worked. When I was just pattern-matching on "this looks like too much," I got burned.

The meta-lesson

This whole pattern — AI builds complex, I ask for simple, AI delivers simpler — reveals something about working with AI that I keep re-learning:

The AI's first answer is a starting point, not a solution.

It's not that the AI is bad at its job. It's that "generate a good solution" and "generate the right solution for your specific situation" are different tasks. The AI is good at the first one. The second one requires your judgment.

Asking for simplification is one form of that judgment. You're saying: "I know my context better than you do. In my context, less is more."

The skill isn't prompting the AI to be simpler upfront (though that helps). The skill is recognizing that you're the one who has to advocate for simplicity, every single time, because the AI won't do it for you.

The habit, summarized

  1. Get the AI's initial solution

  2. Before accepting it, ask: "Can this be simpler?"

  3. Evaluate the simpler version — does it still solve the problem?

  4. If you don't understand why something is complex, ask about that before cutting it

  5. Repeat until you hit the minimum viable solution

It takes an extra 30 seconds per feature. It saves hours of debugging over-built code later.

Tags: building-with-ai, ai-workflow

Category: Principles

Meta description: AI-generated code is almost always more complex than it needs to be. The fix is simple: ask for simpler. Every single time.
