Chapter 06: Scaling Up

Non-interactive mode

Claude in your scripts. Output formats, fan-out patterns, CI/CD integration.

Lesson 2 of 4 · 30 min

Up until now, every interaction with Claude has been a conversation. You type, Claude responds, you type again. Interactive mode. But Claude Code is not just a chat tool. It is a command-line program. And command-line programs can be scripted.

claude -p "your prompt" runs a single non-interactive command and returns the result. No conversation. No follow-up. Claude reads the prompt, does the work, prints the output, and exits. This unlocks an entirely different category of usage: automation, CI/CD, scripting, and multi-agent pipelines.

Basic usage

The simplest non-interactive call looks like this:

Basic non-interactive call
# Ask Claude a question, get an answer, done
claude -p "explain what the handleSubmit function in src/components/ContactForm.tsx does"
 
# Pipe the output to a file
claude -p "summarize the changes in the last 5 commits" > changelog.txt
 
# Use it inline in a script
SUMMARY=$(claude -p "one-line summary of this repo's purpose")
echo "Repo: $SUMMARY"

Each call is a fresh context window. Claude reads your prompt, gathers whatever context it needs from the codebase, executes, and returns. No memory of previous calls. No accumulated context. Every run is clean.

Output formats

By default, claude -p returns plain text. But you can get structured output for scripting.

text

Default. Human-readable output. Good for summaries, explanations, changelogs.

json

Structured JSON response. Parseable by jq, scripts, and other tools. Good for automation pipelines.

stream-json

Streaming JSON. Each chunk arrives as it is generated. Good for real-time processing and long-running tasks.

Output format examples
# Plain text (default)
claude -p "list the exported functions in src/lib/utils.ts"
 
# JSON — parseable output
claude -p "list the exported functions in src/lib/utils.ts" --output-format json
 
# Stream JSON — for real-time processing
claude -p "review this file for bugs" --output-format stream-json
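
With stream-json, each output line is a self-contained JSON event, so a script can react to chunks as they arrive rather than waiting for the full response. A minimal consumption sketch, in which a heredoc stands in for the live stream and the event shapes are illustrative, not the real schema:

```shell
# Process line-delimited JSON events as they arrive. The heredoc below is a
# hypothetical stand-in for live `claude ... --output-format stream-json`
# output; the real event schema differs.
while IFS= read -r line; do
  printf 'got event: %s\n' "$(printf '%s' "$line" | jq -r '.type')"
done <<'EOF'
{"type":"start"}
{"type":"chunk","text":"partial output"}
{"type":"done"}
EOF
# Prints:
#   got event: start
#   got event: chunk
#   got event: done
```

The same loop works on live output by replacing the heredoc with a pipe from claude.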

JSON format is what makes Claude scriptable. You can pipe the output to jq, parse it in Node.js, or feed it to another command. This is the foundation for everything that follows.
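
For example, the result field of the JSON envelope (the same field the CI snippet at the end of this lesson extracts with jq) can be pulled out in one line. A sketch with a hardcoded response standing in for live output; the real envelope carries additional metadata fields alongside result:

```shell
# Hypothetical envelope; real `claude -p ... --output-format json` output
# includes more fields than just "result".
RESPONSE='{"result":"The repo is a personal portfolio site."}'

# Extract just the answer text
printf '%s\n' "$RESPONSE" | jq -r '.result'
# Prints: The repo is a personal portfolio site.
```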

Sequential pipeline pattern

The most powerful non-interactive pattern is chaining. Each step is an isolated context window. No accumulated baggage. No context pollution.

Sequential pipeline
# Step 1: Analyze the PR
ANALYSIS=$(claude -p "analyze the changes in this PR. Focus on breaking changes,
  new dependencies, and migration requirements. Output as structured text.")
 
# Step 2: Generate release notes from the analysis
echo "$ANALYSIS" | claude -p "based on this analysis, write release notes
  for end users. Keep it under 10 bullet points."
 
# Step 3: Check for security issues
claude -p "review the changes in the last commit for security vulnerabilities.
  Only report issues you are confident about. Output as JSON." --output-format json

This is an ECC principle: separate concerns into different context windows. The analysis agent and the writing agent have different jobs. Mixing them in one session muddies both. Isolating them produces better output from each.

Fan-out pattern

When you have N independent subtasks, run them in parallel. This is the fan-out pattern — split, execute concurrently, collect results.

Fan-out: parallel file reviews
#!/bin/bash
# Review 5 files in parallel, collect results
 
FILES=("src/auth.ts" "src/api.ts" "src/db.ts" "src/utils.ts" "src/config.ts")
RESULTS_DIR=$(mktemp -d)
 
for file in "${FILES[@]}"; do
  claude -p "review $file for bugs, security issues, and missing error handling.
    Only report issues you're >80% confident about." \
    > "$RESULTS_DIR/$(basename "$file").review" &
done
 
# Wait for all parallel reviews to finish
wait
 
# Combine results
echo "=== Code Review Summary ==="
for file in "$RESULTS_DIR"/*.review; do
  echo "--- $(basename "$file" .review) ---"
  cat "$file"
  echo ""
done

Five files, five parallel Claude calls, five separate context windows. Each review gets full attention. No context shared between them. The results come back in whatever order they finish, and the script collects them all.
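
One caveat: the & and wait combination launches every call at once, which is fine for five files but not for fifty. xargs -P adds a concurrency cap. A sketch of the same fan-out with a limit of three, using an echo stub in place of the claude call so the plumbing can be dry-run without spending tokens:

```shell
#!/bin/bash
# Bounded fan-out: xargs -P 3 runs at most three reviews concurrently.
# The echo is a stub; swap it for the real `claude -p "review $1 ..."` call
# once the plumbing works.
RESULTS_DIR=$(mktemp -d)

printf '%s\n' src/auth.ts src/api.ts src/db.ts src/utils.ts src/config.ts |
  xargs -P 3 -I{} bash -c \
    'echo "stub review of $1" > "$2/$(basename "$1").review"' _ {} "$RESULTS_DIR"

ls "$RESULTS_DIR"   # one .review file per input path
```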

Continuous PR loop

This is the most ambitious non-interactive pattern I use. A fully automated loop: create branch, implement, review, fix, commit, push, create PR, wait for CI, fix failures, merge.

Automated PR loop
#!/bin/bash
TASK="Add rate limiting to the /api/contact endpoint"
BRANCH="feature/rate-limit-contact"
NOTES_FILE="SHARED_TASK_NOTES.md"
 
# Initialize shared notes
echo "# Task: $TASK" > "$NOTES_FILE"
echo "## Progress" >> "$NOTES_FILE"
 
# Step 1: Implement
git checkout -b "$BRANCH"
claude -p "Implement: $TASK. Read SHARED_TASK_NOTES.md for context.
  When done, append your implementation notes to SHARED_TASK_NOTES.md."
 
# Step 2: Review
REVIEW=$(claude -p "Review the changes on this branch. Check for security issues,
  missing error handling, and edge cases. Only report high-confidence issues.")
echo "## Review findings" >> "$NOTES_FILE"
echo "$REVIEW" >> "$NOTES_FILE"
 
# Step 3: Fix review findings
claude -p "Read SHARED_TASK_NOTES.md. Fix all review findings. Update the notes."
 
# Step 4: Commit and push
# Commit everything except the scratch notes file
git add -A -- ":!$NOTES_FILE" && git commit -m "feat: add rate limiting to contact endpoint"
git push -u origin "$BRANCH"
 
# Step 5: Create PR
gh pr create --title "Add rate limiting to contact endpoint" \
  --body "$(claude -p 'write a PR description based on SHARED_TASK_NOTES.md')"
 
rm "$NOTES_FILE"

The key detail: SHARED_TASK_NOTES.md. Each non-interactive call is a fresh context window, so how do they coordinate? Through a shared file. The implementer writes notes. The reviewer reads them and appends findings. The fixer reads everything and acts. Cross-iteration context lives in the file, not in any single session.
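
The loop described at the top of this section also waits for CI and fixes failures, which the script above stops short of. That last stretch is a retry loop, and its skeleton can be factored into a helper. A sketch: the check and fixer are passed in as command strings, and the gh and claude invocations shown in the comment are assumptions, not tested here:

```shell
# Run a check command; when it fails, run a fixer and re-check, giving up
# after a few attempts. Returns 0 once the check passes, 1 otherwise.
fix_until_green() {
  check=$1; fixer=$2; max=${3:-3}; i=0
  while ! eval "$check"; do
    i=$((i + 1))
    if [ "$i" -ge "$max" ]; then
      return 1
    fi
    eval "$fixer"
  done
  return 0
}

# Assumed use in the PR loop (not run here):
# fix_until_green 'gh pr checks "$BRANCH"' \
#   'claude -p "CI failed on this branch. Diagnose and fix it." && git push'
```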

Pipeline with quality gates

Chain non-interactive calls into a pipeline where each step is a quality gate. If a step fails, the next call gets the failure context and fixes it.

Quality gate pipeline
#!/bin/bash
# Implement → Lint → Test → Security scan
 
# Step 1: Implement
claude -p "Add input validation to the contact form using Zod"
 
# Step 2: Lint check
LINT_OUTPUT=$(pnpm lint 2>&1)
if [ $? -ne 0 ]; then
  claude -p "The linter found issues. Fix them: $LINT_OUTPUT"
fi
 
# Step 3: Type check
TSC_OUTPUT=$(pnpm tsc --noEmit 2>&1)
if [ $? -ne 0 ]; then
  claude -p "Type checking failed. Fix the type errors: $TSC_OUTPUT"
fi
 
# Step 4: Test
TEST_OUTPUT=$(pnpm test 2>&1)
if [ $? -ne 0 ]; then
  claude -p "Tests failed. Fix them without changing the test expectations: $TEST_OUTPUT"
fi
 
echo "All quality gates passed."

Each quality gate runs in its own context. The implement step does not carry over into the lint fix step. This means the lint fixer sees only the lint errors — not the entire implementation history. Focused context produces focused fixes.
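
The three near-identical if blocks invite a helper. A sketch: gate runs a named check, and only hands the combined output to claude when the check fails. The pnpm commands in the comment are the same ones the script above runs:

```shell
# Run a named check command; on failure, ask claude to fix the reported issues.
gate() {
  name=$1; shift
  if ! OUTPUT=$("$@" 2>&1); then
    claude -p "$name failed. Fix the reported issues: $OUTPUT"
  fi
}

# Same gates as above (not run here):
# gate "Lint"       pnpm lint
# gate "Type check" pnpm tsc --noEmit
# gate "Tests"      pnpm test
```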

This is where the techniques from the entire course converge. Advanced setups have slash commands like /orchestrate that define a full six-phase structured workflow (research, plan, implement, review, fix, commit), with each phase running as a separate non-interactive call with its own context window and quality gates between them. Another command, /multi-workflow, orchestrates multiple models: one for planning, another for implementation, a third for review. These commands are the culmination of the patterns you have learned: skills, hooks, subagents, and non-interactive mode combined into a single automated pipeline.

You are not there yet. But every piece you have built — the skills, the hooks, the verification pipeline — is a building block for this level of automation.

CI/CD integration

Non-interactive mode fits directly into your CI pipeline. Run Claude as a step in GitHub Actions, GitLab CI, or whatever you use.

GitHub Actions step
# .github/workflows/pr-review.yml
- name: AI Code Review
  run: |
    claude -p "Review the changes in this PR. Focus on:
      1. Breaking changes that need migration
      2. Security vulnerabilities
      3. Missing error handling
      Only report issues you're confident about.
      Output as JSON." --output-format json > review.json
 
- name: Post Review Comment
  run: |
    BODY=$(jq -r '.result' review.json)
    gh pr comment "$PR_NUMBER" --body "$BODY"
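
Those two steps need the surrounding workflow boilerplate before they will run. A minimal skeleton, assuming the runner already has the claude CLI installed and an API key configured; the trigger, names, and permissions here are starting points, not requirements:

```yaml
# .github/workflows/pr-review.yml (skeleton; sketch only)
name: PR Review
on: pull_request

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write        # lets gh pr comment post the review
    env:
      PR_NUMBER: ${{ github.event.pull_request.number }}
      GH_TOKEN: ${{ github.token }}
    steps:
      - uses: actions/checkout@v4
      # ... install and authenticate the claude CLI here ...
      # ... then the two steps shown above ...
```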

Automated PR reviews. Changelog generation. Migration verification. Release note drafting. Any task that can be expressed as a prompt and does not need interactive follow-up.

Safety in automation

Non-interactive mode respects the same permission system as interactive mode. But in automation, you need tighter control. No human is watching.

Restricting permissions in scripts
# Only allow read and analysis — no file edits
claude -p "analyze this codebase for security issues" \
  --allowedTools "Read,Glob,Grep"
 
# Allow edits but not bash commands
claude -p "fix all lint errors in src/" \
  --allowedTools "Read,Glob,Grep,Edit"

The --allowedTools flag is your guardrail. In CI, Claude should almost never have Bash access — that is too broad. Limit it to Read, Grep, and Edit for most automation tasks. Add Bash only when you need it and you have verified the prompt cannot produce destructive commands.

Start simple. One claude -p call in a script. See the output. Build from there. The full pipeline patterns come naturally once you are comfortable with the basic call.