Chapter 07: Build Your Blueprint

Audit your current setup

A structured checklist to evaluate your codebase, tools, habits, and workflow.

30 min · Lesson 1 of 5

Before you build anything new, understand what you have.

This is the capstone chapter. You have learned the concepts — the agentic loop, context management, feedback loops, hooks, skills, scaling. Now you put it all together into a system. But first, you need to know your starting point.

This lesson is a structured audit of your current agentic coding setup. Not a vague self-assessment. A concrete, dimension-by-dimension evaluation with a scoring rubric.

The four dimensions

Your agent setup has four dimensions. Each one contributes to overall performance. A weakness in any one dimension creates a bottleneck that limits the others.

Codebase

Structure, conventions, boundaries, documentation. How agent-ready is your repository? Can Claude understand your project by reading the file tree and CLAUDE.md, or does it need you to explain everything every session?

Configuration

CLAUDE.md, settings.json, skills, hooks, MCPs. The actual files that shape agent behavior. What exists? What is missing? What is there but not working?

Workflow

How you use the agent. Permission mode, session patterns, task scoping. Do you start sessions with clear specs? Do you run parallel sessions? Do you review output systematically?

Feedback

How friction is captured, how rules evolve, how verification works. Is your setup static or does it improve over time? Do mistakes repeat, or do they get captured and eliminated?

The scoring rubric

For each dimension, score yourself 0-3. Be honest. The goal is not a high score — it is an accurate baseline.

Codebase (0-3)

0 — Not started. No consistent naming conventions. No clear directory structure. No documentation. Claude guesses where things go and gets it wrong often.

1 — Basic. Reasonable project structure. Some conventions exist but they are implicit — in your head, not in any file. Claude gets things right sometimes but needs frequent corrections.

2 — Structured. Clear directory structure with documented conventions. Import rules are defined. Server/client boundaries are explicit. Claude gets the basics right consistently.

3 — Optimized. Conventions are enforced by tooling (lint rules, TypeScript strict mode). The codebase is self-documenting — a new developer (or agent) can navigate it without guidance. File placement is unambiguous.
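"Enforced by tooling" often starts with compiler strictness. As an illustrative sketch — the exact options and path aliases will differ per project — a tsconfig.json fragment that makes conventions non-negotiable:

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noUnusedLocals": true,
    "paths": {
      "@server/*": ["src/server/*"],
      "@client/*": ["src/client/*"]
    }
  }
}
```

With rules like these in the toolchain rather than in your head, the agent gets corrected by the compiler, not by you.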

Configuration (0-3)

0 — Not started. No CLAUDE.md. No custom skills. No hooks. No MCPs. You are running Claude with default settings and hoping for the best.

1 — Basic. CLAUDE.md exists but is generic — maybe from /init output. No custom skills. No hooks. Maybe one MCP connection.

2 — Structured. CLAUDE.md is tailored to your project with specific rules. A few skills exist for common workflows. Basic hooks for quality checks. MCPs connected for tools you actually use.

3 — Optimized. Full setup with layered CLAUDE.md, skill library covering major workflows, hooks enforcing non-negotiable quality, MCPs configured and tested. Everything is actively maintained.
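One concrete shape a quality-enforcing hook can take is a settings.json entry that runs a check after every file edit. This is an illustrative fragment — verify the hook event names and schema against your tool's current documentation, and substitute your own check command:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx tsc --noEmit" }
        ]
      }
    ]
  }
}
```

The point is not this specific check; it is that "non-negotiable quality" lives in configuration the agent cannot skip.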

Workflow (0-3)

0 — Not started. You open Claude and start typing. No plan, no spec, no structure. Single session for everything. Accept all mode because reviewing feels slow.

1 — Basic. You sometimes write specs before starting. You use a consistent permission mode. Sessions have a rough focus, even if scope drifts.

2 — Structured. Specs before every non-trivial task. Plan mode for complex features. One task per session. You review output before accepting.

3 — Optimized. Parallel sessions with worktrees. Structured task scoping. Writer-reviewer patterns for critical code. Systematic review of every output.
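Parallel sessions with worktrees look like this in practice. A minimal sketch — the repo and branch names are hypothetical, and the demo creates a throwaway repo so it is self-contained:

```shell
# Demo: one git worktree per parallel agent session, so sessions
# never share a working copy. Repo/branch names are hypothetical.
cd "$(mktemp -d)"
git init -q myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree + branch per session:
git worktree add ../myapp-auth -b feature/auth
git worktree add ../myapp-billing -b fix/billing
git worktree list    # three checkouts, each on its own branch

# Each agent session then runs in its own directory, e.g.:
#   (cd ../myapp-auth && claude)

# Clean up when a branch lands:
git worktree remove ../myapp-auth
```

Because each session has its own checkout, two agents can edit the "same" file on different branches without stepping on each other.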

Feedback (0-3)

0 — Not started. Mistakes repeat. You fix errors manually and move on. No memory, no rule updates, no post-session review.

1 — Basic. You occasionally add to CLAUDE.md or memory after a bad session. But it is ad hoc — most friction goes uncaptured.

2 — Structured. You capture friction as it happens. Memory contains project-specific context. CLAUDE.md gets updated when patterns emerge. Verification runs before commits.

3 — Optimized. Active feedback loop. Every session makes the system smarter. Automated verification catches regressions. Rules evolve based on real usage. Unused rules get pruned.
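Captured friction usually ends up as short, specific rules. A hypothetical CLAUDE.md fragment showing the shape — the paths, commands, and dates here are invented examples, not recommendations:

```markdown
## Learned rules (from session friction)

- Never edit files in `src/generated/`; regenerate with `npm run codegen`.
  (Added 2024-03-12 after hand-edits to generated types broke a build.)
- Run `npm run typecheck` after any change to shared types.
- Use the existing `db` helper in `src/lib/db.ts`; do not create new clients.
```

Each entry exists because a mistake happened once. The loop is working when a rule like this prevents the second occurrence.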

The ECC harness audit

The ECC framework takes this further with a 0-70 point scoring system. It breaks your agent setup into seven categories, each scored 0-10.

ECC Harness Audit Categories
1. CLAUDE.md quality (0-10)
   Structure, specificity, boundary definitions, anti-patterns

2. Agent definitions (0-10)
   Custom agents, role definitions, specialization

3. Hook coverage (0-10)
   Lifecycle events covered, reliability, error handling

4. Skill inventory (0-10)
   Number, quality, coverage of major workflows

5. MCP configuration (0-10)
   Connected tools, security, relevance to workflow

6. Security posture (0-10)
   Permission settings, env protection, secret handling

7. Cost optimization (0-10)
   Context budget management, skill lazy-loading, session scoping

You do not need to use this exact rubric. The four-dimension audit is enough to identify your gaps. But the ECC categories show you the granularity that is possible — and give you concrete areas to improve in each dimension.
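If you do track scores, a few lines of code make the bottleneck obvious. A minimal sketch using the ECC category names from the list above — the scores themselves are made up for illustration:

```python
# Tally an audit and flag the weakest categories, which is where
# the next round of investment goes. Scores are illustrative.
scores = {
    "CLAUDE.md quality": 6,
    "Agent definitions": 2,
    "Hook coverage": 3,
    "Skill inventory": 4,
    "MCP configuration": 5,
    "Security posture": 7,
    "Cost optimization": 4,
}

total = sum(scores.values())
print(f"Total: {total} / {len(scores) * 10}")

# The lowest-scoring categories are the bottleneck.
for name, score in sorted(scores.items(), key=lambda kv: kv[1])[:2]:
    print(f"Invest next: {name} ({score}/10)")
```

The same tally works for the four-dimension rubric; only the category names and the 0-3 scale change.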

My audit results: honest numbers

Here is what I scored when I first ran this audit on my own setup. Not my current setup — my setup six months ago, when I started getting serious about the method.

Codebase: 3. This was my strongest area. Monorepo with clear package boundaries. TypeScript strict mode. Documented import rules. The codebase was already agent-friendly because it was engineer-friendly.

Configuration: 1. I had a CLAUDE.md, but it was the /init output plus a few lines I had added manually. No skills. No hooks. One MCP (Supabase). Everything else was default.

Workflow: 2. I was writing specs for bigger tasks and using plan mode sometimes. But I was running single sessions, not using worktrees, and my task scoping was inconsistent.

Feedback: 1. I was using memory occasionally, but most friction went uncaptured. I would fix Claude's mistakes, sigh, and move on. The same mistakes showed up the next day.

Total: 7 out of 12. That gap between codebase (3) and configuration (1) told me exactly where to invest. My codebase was ready for an agent, but my agent setup was not ready for the codebase.

Turning the audit into a build plan

Your audit results map directly to the remaining lessons in this chapter.

Low codebase score? Focus on conventions and structure before configuring Claude. No amount of CLAUDE.md rules will fix a chaotic codebase. This is pre-work — not covered in this chapter, but covered in Chapter 2.

Low configuration score? Lessons 2 and 3 are for you. You will build your CLAUDE.md from scratch and create a skill library.

Low workflow score? Chapter 6 already covered parallel sessions and scaling. Review those lessons and apply them to your daily work.

Low feedback score? Chapter 5 covered the feedback loop. Review it. Then in Lesson 5 of this chapter, you will build the maintenance loop that keeps everything alive.

The audit is not the destination. It is the map. Now you know where you are. The remaining lessons show you where to go.