Agentic coding means giving an AI agent the autonomy and the tools to read your codebase, make changes, run commands, and verify its own work. Not chatting with AI. Not code suggestions. Full autonomous execution.
To understand why this works and how to make it work well, you need to understand the architecture. Not the marketing version — the mechanical reality.
The model and the harness
There are two pieces to every agentic coding tool. Most people think of them as one thing. They are not.
The model runs on Anthropic's servers. Claude Opus, Claude Sonnet — these are language models. They receive text, they reason about code, they produce text. They are extraordinarily good at deciding what to do. But on their own, they cannot read your files, run your build, or edit your project. They are brains without hands.
The harness runs on your machine. Claude Code is a harness. It is a program in your terminal that sits between you and the model. When you type a prompt, Claude Code does not just send your message to Anthropic. It adds something critical: a list of tools the model can use.
The model (remote)
Claude Opus or Sonnet running on Anthropic's servers. Receives your prompt plus a list of tool definitions. Returns text and tool call instructions. Cannot touch your files directly.
The harness (local)
Claude Code running in your terminal. Attaches tool definitions to every request. Executes tool calls locally on your machine. Sends results back to the model. Manages the entire conversation loop.
This separation is fundamental. The model decides what to do. The harness does it. The model never touches your files — it sends structured instructions, and Claude Code executes them.
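To make the separation concrete, here is a minimal sketch of the conversation loop a harness runs. Everything here is illustrative — the "model" is a local stub, and the names and message shapes are not Claude Code's actual internals:

```python
# Minimal sketch of the model/harness split. The "model" here is a local stub;
# in reality it runs on Anthropic's servers and returns either plain text or a
# structured tool call.
import subprocess

def read_tool(file_path: str) -> str:
    with open(file_path) as f:
        return f.read()

def bash_tool(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"Read": read_tool, "Bash": bash_tool}

def stub_model(conversation: list) -> dict:
    """Stand-in for the remote model: asks for one tool call, then finishes."""
    if len(conversation) == 1:
        return {"type": "tool_call", "name": "Bash", "args": {"command": "echo hi"}}
    return {"type": "text", "text": "done"}

def harness_loop(prompt: str) -> str:
    conversation = [{"role": "user", "content": prompt}]
    while True:
        reply = stub_model(conversation)       # model decides what to do
        if reply["type"] == "text":            # no more tool calls: finished
            return reply["text"]
        tool = TOOLS[reply["name"]]            # harness executes it locally
        result = tool(**reply["args"])
        conversation.append({"role": "tool_result", "content": result})

print(harness_loop("say hi"))  # → done
```

Notice where the boundary sits: the model only ever produces data (text or a tool call), and only the harness touches the filesystem or the shell.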
Tools: how the agent gets hands
When you type a prompt in Claude Code, here is what actually happens:
1. You type a prompt. Claude Code wraps it with the list of tool definitions and sends the whole package to the model.
2. The model reasons. It reads your request and returns either plain text or a structured tool call.
3. Claude Code executes. The tool call runs locally on your machine, and the result goes back to the model.
4. The loop continues. The model keeps requesting tools until the task is done, then answers in plain text.
That is the entire mechanism. Here is what the model actually sees:
Your prompt: "Add a contact form with Zod validation"
Available tools:
Read(file_path) → returns file contents
Edit(file_path, old, new) → surgically replaces text in a file
Write(file_path, content) → creates or overwrites a file
Bash(command) → runs a terminal command
Grep(pattern, path) → searches file contents
Glob(pattern) → finds files by name pattern
Model response:
→ tool_call: Read("src/components/ui/index.ts")
→ tool_call: Read("packages/db/src/database.types.ts")
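What does a "tool definition" actually contain? Roughly: a name, a natural-language description the model reads, and a schema for the arguments. Here is a hedged sketch modeled on Anthropic's public tool-use API — Claude Code's real definitions are internal and more detailed:

```python
# Roughly what a tool definition looks like on the wire. Field names follow
# Anthropic's public tool-use API; the exact payload Claude Code sends is
# internal and may differ.
read_tool_definition = {
    "name": "Read",
    "description": "Read a file from the local filesystem and return its contents.",
    "input_schema": {
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "Path to the file to read.",
            },
        },
        "required": ["file_path"],
    },
}

# The model's reply references the tool by name, with arguments matching the schema:
example_tool_call = {"name": "Read", "input": {"file_path": "src/components/ui/index.ts"}}
```

The description field is doing real work here: it is the only documentation the model has for deciding when and how to use the tool.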
What this looks like in practice
You want to add a contact form to your Next.js application. You type one message:
Add a contact form. Use the existing design system. Validate with Zod. Send submissions to the leads table in Supabase. Show a success toast after submission.
Here is what happens — each line is a tool call the model returns, executed by Claude Code on your machine:
Read("packages/ui/src/index.ts") → finds Button, Input, Card
Read("packages/db/src/database.types.ts") → finds leads table schema
Write("packages/shared/src/schemas/contact.ts") → creates Zod schema
Write("apps/web/components/contact-form.tsx") → builds the form
Edit("packages/api/src/routers/leads.ts") → adds create mutation
Edit("packages/api/src/index.ts") → registers new route
Edit("apps/web/app/contact/page.tsx") → adds form to page
Bash("pnpm build") → type error: missing import
Edit("apps/web/components/contact-form.tsx") → fixes the import
Bash("pnpm build") → success
Ten tool calls. Five files created or modified. One build failure caught and fixed. No copy-pasting. No switching between tabs. No manual wiring. You described the outcome, the model decided the tool calls, and Claude Code executed them.
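The last three steps are worth dwelling on: the harness pipes the build output back into the conversation, and the model decides whether to edit again. A toy Python sketch of that verify-and-fix cycle, with both the build and the model stubbed out:

```python
# Toy sketch of the verify-and-fix cycle: run a check, hand the output back,
# let the "model" (a stub here) decide whether another edit is needed.
def run_build(source: str) -> tuple[bool, str]:
    """Pretend build: fails when the Zod import is missing."""
    if "import { z }" not in source:
        return False, "type error: missing import"
    return True, "success"

def stub_model_fix(source: str, error: str) -> str:
    """Stand-in for the model: patches the reported error."""
    return 'import { z } from "zod";\n' + source

def agent_verify_loop(source: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        ok, output = run_build(source)           # the Bash("pnpm build") step
        if ok:
            return source                        # verification passed
        source = stub_model_fix(source, output)  # model edits, loop repeats
    raise RuntimeError("could not fix the build")

fixed = agent_verify_loop("export const schema = z.object({});")
print(fixed.splitlines()[0])  # → import { z } from "zod";
```

The pattern — act, verify, feed the error back, act again — is the same whether the check is a build, a test suite, or a linter.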
Now imagine that was automatic
That ten-step process? It works. But notice something: you still had to tell Claude to "use the existing design system" and "validate with Zod." You were the one carrying the knowledge of how your project works.
Now imagine a system where Claude already knows those things — which design system to use, that validation happens with Zod, where your schemas and routes live — without being told.
The ten-step contact form is impressive. But it is manual orchestration — you directing an autonomous agent. The next level is the agent that already knows how you work, enforces your rules automatically, and catches problems before you even see them.
That is the level this course is building toward.
Why Claude Code
There are many agentic coding tools. Cursor, Windsurf, Cline, Copilot Workspace. Why does this course focus on Claude Code?
Because Claude Code runs in your terminal. And that changes everything.
You control the tool system. The model gets whatever tools Claude Code gives it. You can add new tools via MCP servers — give Claude access to your database, your browser, your documentation. You can restrict tools — a review agent that can read but not edit. You can automate around tools with hooks — a script that runs every time Claude edits a file. IDE-based agents give you their toolset. Claude Code lets you build yours.
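As one illustration of hooks, here is roughly what a formatting hook looks like in .claude/settings.json — a command that runs after every Edit or Write. The field names follow Claude Code's documented hooks configuration, but treat the exact schema as version-dependent and check the docs for your install:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npx prettier --write ."
          }
        ]
      }
    ]
  }
}
```

With this in place, formatting is no longer something you remind the agent about — it happens mechanically, after every file change.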
It is composable. A CLI program can be scripted, looped, piped, and automated. claude -p "review this file" runs as a one-shot command. You can put it in a bash script. Chain three Claude calls into a pipeline. Run it in CI/CD. Build a PR review bot. None of this is possible when your agent lives inside an IDE.
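As a sketch of that composability, here is a hypothetical Python wrapper around claude -p — the kind of building block you could drop into a script, a pipeline, or CI. It assumes the claude CLI is installed on PATH, and the actual invocations are left commented out since they require a live binary:

```python
# Hypothetical wrapper: because claude is a CLI, it composes like any other
# process. Assumes the `claude` binary is installed; flags may vary by version.
import subprocess

def claude_print(prompt: str, stdin_text: str = "") -> str:
    """Run a one-shot `claude -p PROMPT`, feeding stdin_text, return the reply."""
    result = subprocess.run(
        ["claude", "-p", prompt],
        input=stdin_text, capture_output=True, text=True, check=True,
    )
    return result.stdout

# A two-stage pipeline: review a file, then summarize the review.
# review = claude_print("review this file", open("src/app.ts").read())
# summary = claude_print("summarize this review in one paragraph", review)
```

Once the agent is just a function that takes text and returns text, everything around it — loops, chains, cron jobs, CI steps — is ordinary scripting.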
The economics are better. Claude Code plans give you direct access to the same models — Opus, Sonnet — with usage that refreshes weekly, not monthly. You are paying for the model and the harness. With IDE tools, you are paying for the IDE, the model access, and the markup — and you typically hit usage limits faster when using top-tier models.
What the ceiling looks like
How far does this go? There are open-source projects that demonstrate what a fully optimized system looks like — complete plugin systems built by developers who have pushed Claude Code to its limits.
The gap between getting started and a fully-optimized agentic coding system
Twenty-five specialized agents, each with restricted tool access and the right model for the job. One hundred and twenty-seven skills covering everything from TDD workflows to security reviews. Twenty hooks that automate formatting, block dangerous commands, and manage context compaction — all without you typing a single reminder.
That is not where you start. But it is where the path leads. And this course is the map.
The tradeoff you cannot ignore
More power means more risk. The agent can read, edit, and execute across your entire project. A misunderstanding is not a wrong line suggestion you can backspace — it is a wrong architectural decision executed across multiple files.
That is why the rest of this course exists.
Every chapter after this one teaches you how to direct, constrain, and verify an autonomous coding agent. How to structure your codebase so the agent understands it. How to write specs that leave no room for misinterpretation. How to set up verification pipelines that catch mistakes before they reach production.
The power is real. The risk is real. The methodology is what makes the difference.
Where we go from here
How do you go from "Claude can build a contact form" to "Claude follows my team's patterns, runs verification automatically, and learns from its mistakes"?
That is Chapters 2 through 7.
But first, you need to understand the mechanics deeper. In this chapter, I am going to show you the loop the agent runs (gather, plan, act, verify), the specific tools it has access to, and the one constraint that governs everything: the context window.
Once you understand these fundamentals, every technique in the rest of the course will make intuitive sense. You will know why we structure CLAUDE.md a certain way, why skills exist, why subagents matter. The system starts making sense when you see the machine.
Let's start with the loop.