TL;DR — Building AI agent teams with Claude means defining agents as markdown files, running them via CLI, and coordinating through file-based memory. Start with a 3-agent pipeline (researcher, writer, editor), get it working reliably, then expand to multiple domain-aligned squads.
## Why Build Agent Teams?
Single AI agents are powerful. Claude Code can read your entire codebase, run commands, and complete complex tasks autonomously. So why build teams?
Because real work requires specialization.
A single agent handling research, writing, editing, and publishing makes the same tradeoffs a single human would: context switching, conflicting priorities, and generalist-level output across all tasks.
Agent teams solve this by mirroring how effective human teams work:
- Clear roles and responsibilities
- Shared context and memory
- Parallel execution when possible
- Quality gates between stages
This guide walks through building your first agent team with Claude, from architecture decisions to production deployment.
## Prerequisites

Before starting, you need:

- Claude Code CLI installed (`npm install -g @anthropic-ai/claude-code`)
- Basic familiarity with Claude Code commands
- A project where you want to deploy agents

If you’re new to Claude Code, start with our Claude Code Tutorial first.
## Part 1: Architecture Decisions

### Single Agent vs Team: When to Switch
Keep using a single agent when:
- Tasks complete in one session
- Context fits comfortably in one window
- You’re the only person running agents
- Speed matters more than thoroughness
Switch to teams when:
- Tasks span multiple domains (code + docs + tests)
- Context overflows single agent capacity
- Different tasks need different expertise levels
- You need audit trails for who did what
### Team Structures

Three patterns dominate production agent teams:

**1. Pipeline (Sequential)**

```
Research Agent → Writing Agent → Review Agent → Publish Agent
```

Best for: Content production, code review, multi-stage processing.

**2. Hub-and-Spoke (Delegated)**

```
         ┌── Research Agent
Lead ────┼── Writing Agent
         └── Data Agent
```

Best for: Complex tasks with a coordinating intelligence.

**3. Squad (Domain-Aligned)**

```
Marketing Squad          Engineering Squad
├── SEO Writer           ├── Code Reviewer
├── Social Manager       ├── Test Writer
└── Analytics            └── Debugger
```

Best for: Ongoing operations across multiple domains.

For this tutorial, we’ll build a content production pipeline, then expand to a full squad.
## Part 2: Building Your First Pipeline

Let’s build a 3-agent pipeline for content production:

- **Researcher**: Gathers information on a topic
- **Writer**: Creates the content
- **Editor**: Reviews and improves quality

### Step 1: Project Structure

Create the agent directory structure:

```bash
mkdir -p .agents/squads/content
mkdir -p .agents/memory/content
mkdir -p .agents/outputs
```
### Step 2: Define the Researcher Agent

Create `.agents/squads/content/researcher.md`:

````markdown
---
name: researcher
squad: content
model: claude-sonnet-4
tools:
  - web_search
  - web_fetch
  - write_file
---

# Research Agent

## Purpose
Gather comprehensive information on a given topic to support content creation.

## Instructions
When given a topic:

1. **Understand the intent**
   - What type of content will be created? (blog, guide, comparison)
   - Who is the target audience?
   - What search intent should it satisfy?
2. **Research systematically**
   - Search for the topic + "guide" / "tutorial" / "best practices"
   - Find authoritative sources (official docs, respected publications)
   - Identify what top-ranking content covers
   - Note gaps in existing content
3. **Compile findings**
   - Key facts and statistics (with sources)
   - Common questions people ask (People Also Ask)
   - Competitor content structure analysis
   - Expert quotes or perspectives
4. **Output format**
   Save research to `.agents/outputs/research-{topic-slug}.md`:

```markdown
# Research: {Topic}

## Key Facts
- Fact 1 (Source: URL)
- Fact 2 (Source: URL)

## Common Questions
- Question 1
- Question 2

## Content Gap Analysis
What existing content misses...

## Recommended Structure
Based on top performers, suggest H2s...

## Sources
- [Source 1](url)
- [Source 2](url)
```

## Quality Standards
- Minimum 5 unique sources
- All facts must have citations
- Identify at least 3 content gaps
- Complete in under 10 minutes
````
### Step 3: Define the Writer Agent

Create `.agents/squads/content/writer.md`:

````markdown
---
name: writer
squad: content
model: claude-sonnet-4
tools:
  - read_file
  - write_file
  - web_search
---

# Content Writer Agent

## Purpose
Create high-quality, SEO-optimized content based on research briefs.

## Instructions
When assigned a topic:

1. **Read the research brief**
   - Load `.agents/outputs/research-{topic-slug}.md`
   - Understand key facts, questions, and recommended structure
   - Note the target audience and search intent
2. **Plan the content**
   - Outline H2 sections based on research recommendations
   - Identify where to place key facts and statistics
   - Plan internal links to related content
3. **Write the draft**
   - Hook: Start with a compelling first paragraph
   - Structure: Use H2s and H3s for scannability
   - Depth: Each section should provide real value
   - Voice: Professional but approachable
   - Length: Minimum 1500 words for pillar content
4. **SEO optimization**
   - Include target keyword in title, first paragraph, H2s
   - Add FAQ section based on "Common Questions" from research
   - Use semantic keywords naturally throughout
   - Write meta description (155 characters max)
5. **Output format**
   Save draft to `.agents/outputs/draft-{topic-slug}.md`:

```markdown
---
title: "Article Title"
description: "Meta description here"
keywords: ["keyword1", "keyword2"]
---

# Article Title

[Content here...]
```

## Quality Standards
- Minimum 1500 words
- At least 5 H2 sections
- Include 3+ internal links
- All statistics cited with sources
- FAQ section with 3-5 questions
````
### Step 4: Define the Editor Agent

Create `.agents/squads/content/editor.md`:

````markdown
---
name: editor
squad: content
model: claude-opus-4
tools:
  - read_file
  - write_file
---

# Editor Agent

## Purpose
Review and improve content drafts for quality, accuracy, and impact.

## Instructions
When assigned a draft:

1. **Load the draft and research**
   - Read `.agents/outputs/draft-{topic-slug}.md`
   - Read `.agents/outputs/research-{topic-slug}.md`
2. **Structural review**
   - Does the opening hook capture attention?
   - Is the structure logical and scannable?
   - Does each section deliver on its promise?
   - Is the conclusion actionable?
3. **Quality review**
   - Check all facts against research sources
   - Identify unsupported claims
   - Flag any content that feels thin
   - Ensure consistent voice throughout
4. **SEO review**
   - Is the target keyword used naturally?
   - Are H2s optimized for featured snippets?
   - Does the FAQ section match search questions?
   - Is the meta description compelling?
5. **Improvement passes**
   - Strengthen weak sections with more depth
   - Add examples where concepts are abstract
   - Tighten verbose passages
   - Ensure smooth transitions between sections
6. **Output**
   - Save final version to `.agents/outputs/final-{topic-slug}.md`
   - Save edit notes to `.agents/outputs/edit-notes-{topic-slug}.md`

## Edit Notes Format

```markdown
# Edit Notes: {Topic}

## Changes Made
- [Change description]

## Quality Assessment
- Readability: X/10
- SEO Optimization: X/10
- Depth: X/10
- Accuracy: X/10

## Recommendations
- [Any follow-up needed]
```

## Quality Standards
- No unverified claims in final output
- Readability score must be 7+
- All sections must be substantive (100+ words)
````
### Step 5: Create the Squad Definition

Create `.agents/squads/content/SQUAD.md`:

```markdown
---
name: content
description: Content production pipeline
lead: editor
---

# Content Squad

## Mission
Produce high-quality, SEO-optimized content that drives organic traffic and demonstrates expertise.

## Agents
- **researcher**: Gathers information and identifies content opportunities
- **writer**: Creates initial drafts from research briefs
- **editor**: Reviews and improves content quality

## Workflow
1. Researcher receives topic assignment
2. Researcher outputs research brief
3. Writer creates draft from research
4. Editor reviews and finalizes

## Memory
- `.agents/memory/content/state.md`: Current priorities and active work
- `.agents/memory/content/learnings.md`: What works for SEO/engagement

## Outputs
All outputs go to `.agents/outputs/` and are moved to final destination after human approval.
```
### Step 6: Initialize Squad Memory

Create `.agents/memory/content/state.md`:

```markdown
# Content Squad State

## Current Priority
Building initial content library targeting comparison and tutorial keywords.

## Active Work
None currently assigned.

## Completed
- (none yet)

## Performance Notes
- (accumulate learnings here)
```
### Step 7: Run the Pipeline

Execute the agents in sequence:

```bash
# Step 1: Research
claude -p "You are the researcher agent. Read .agents/squads/content/researcher.md for your instructions. Research the topic: 'AI agent orchestration patterns'"

# Step 2: Write
claude -p "You are the writer agent. Read .agents/squads/content/writer.md for your instructions. Write content for topic: 'AI agent orchestration patterns' using the research brief."

# Step 3: Edit
claude -p "You are the editor agent. Read .agents/squads/content/editor.md for your instructions. Edit the draft for: 'AI agent orchestration patterns'"
```
Or create a simple orchestration script:

```bash
#!/bin/bash
# run-content-pipeline.sh
set -e  # stop the pipeline if any stage fails

TOPIC=$1
SLUG=$(echo "$TOPIC" | tr '[:upper:]' '[:lower:]' | tr ' ' '-')

echo "Starting content pipeline for: $TOPIC"

echo "Step 1: Research..."
claude -p "You are the researcher agent. Read .agents/squads/content/researcher.md. Research: '$TOPIC'"

echo "Step 2: Writing..."
claude -p "You are the writer agent. Read .agents/squads/content/writer.md. Write about: '$TOPIC'"

echo "Step 3: Editing..."
claude -p "You are the editor agent. Read .agents/squads/content/editor.md. Edit draft for: '$TOPIC'"

echo "Pipeline complete. Check .agents/outputs/"
```
## Part 3: Adding Parallel Execution

The pipeline runs sequentially. For independent tasks, we can parallelize.

### Parallel Research Pattern
When you need multiple research threads:
```bash
#!/bin/bash
# parallel-research.sh
# Run three research tasks simultaneously as background jobs
claude -p "Research: CrewAI framework" &
claude -p "Research: LangGraph framework" &
claude -p "Research: AutoGen framework" &
wait  # block until all three background jobs finish
echo "All research complete"
```
### Sub-Agent Pattern in Claude Code

Claude Code natively supports spawning sub-agents. In your project's `CLAUDE.md`, add guidance like:
```markdown
# Project Configuration

## Agent Spawning
When facing complex tasks, spawn specialized sub-agents:
- For research tasks: spawn a research-focused agent
- For code tasks: spawn a code-focused agent
- For review tasks: spawn a review-focused agent
```
Sub-agents inherit project context but have isolated execution.
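As a sketch, a single top-level invocation can then lean on that guidance (the comparison topic and output path here are illustrative assumptions):

```bash
# Ask the top-level agent to delegate, per the CLAUDE.md guidance above.
claude -p "Compare the CrewAI, LangGraph, and AutoGen frameworks. Spawn one research sub-agent per framework, then synthesize the findings into .agents/outputs/framework-comparison.md"
```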
## Part 4: Adding Memory and Learning

### Persistent Memory

Update squad memory after each run. In your editor agent or orchestration script:
```markdown
## After Completing Work
Update `.agents/memory/content/state.md`:
- Move completed items from "Active Work" to "Completed"
- Note any learnings in "Performance Notes"
```
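The same bookkeeping can be automated at the end of the orchestration script. A minimal sketch, assuming the `state.md` layout from Step 6 (`update-memory.sh` is a hypothetical helper name):

```bash
#!/bin/bash
# update-memory.sh -- record a finished topic under "## Completed".
# Usage: ./update-memory.sh "AI agent orchestration patterns"
STATE=".agents/memory/content/state.md"
TOPIC=$1
DATE=$(date +%Y-%m-%d)

# Print every line of state.md; right after the "## Completed"
# heading, add the new entry.
awk -v entry="- $TOPIC ($DATE)" '
  { print }
  $0 == "## Completed" { print entry }
' "$STATE" > "$STATE.tmp" && mv "$STATE.tmp" "$STATE"
```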
### Learning Accumulation

Create `.agents/memory/content/learnings.md`:
```markdown
# Content Squad Learnings

## What Works
- Comparison posts ("X vs Y") get 3x more organic traffic
- FAQ sections improve featured snippet chances
- Code examples increase time on page

## What Doesn't
- Generic introductions hurt engagement
- Walls of text without headers get skipped
- Missing internal links reduce session depth

## Process Improvements
- Research phase should include "People Also Ask" analysis
- Writer should draft FAQ before main content
- Editor should verify all code examples run

## Updated: 2026-01-27
```
Agents reference learnings before starting work:
```markdown
## Before Starting
1. Read `.agents/memory/content/learnings.md`
2. Apply relevant learnings to your approach
3. After completing, add any new learnings discovered
```
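If you drive the agents from a script rather than trusting each agent to read the file, one option is to inline the learnings directly into the prompt (a sketch, reusing `$TOPIC` from the pipeline script):

```bash
# Inline current learnings so the writer sees them even if it skips read_file.
LEARNINGS=$(cat .agents/memory/content/learnings.md)
claude -p "You are the writer agent. Read .agents/squads/content/writer.md.
Current squad learnings:
$LEARNINGS

Write about: '$TOPIC'"
```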
## Part 5: Scheduled Execution

For ongoing content operations, schedule agents to run automatically.

### Using squads-cli Scheduler

If you're using squads-cli, add a `schedule` to the agent frontmatter:
```yaml
---
name: content-analyst
squad: content
trigger: scheduled
schedule: "0 9 * * 1" # Every Monday at 9am
---
```
### Using Cron

To schedule the pipeline with plain cron:

```bash
# Run content pipeline every Monday at 9am
0 9 * * 1 /path/to/run-content-pipeline.sh "weekly topic"
```
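To install that entry (the script path is the placeholder from the line above):

```bash
# Open your user crontab in an editor and paste the line in:
crontab -e

# Or append it non-interactively:
(crontab -l 2>/dev/null; echo '0 9 * * 1 /path/to/run-content-pipeline.sh "weekly topic"') | crontab -
```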
### Using GitHub Actions

```yaml
# .github/workflows/content-pipeline.yml
name: Content Pipeline

on:
  schedule:
    - cron: '0 9 * * 1' # Mondays at 9am UTC
  workflow_dispatch:
    inputs:
      topic:
        description: 'Content topic'
        required: true

jobs:
  research:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code
      - name: Run researcher
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} # repository secret
        run: |
          claude -p "Research: ${{ inputs.topic }}"
      - uses: actions/upload-artifact@v4
        with:
          name: research
          path: .agents/outputs/research-*.md

  write:
    needs: research
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: research
          path: .agents/outputs/
      - name: Install Claude Code
        run: npm install -g @anthropic-ai/claude-code
      - name: Run writer
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: |
          claude -p "Write about: ${{ inputs.topic }}"
      - uses: actions/upload-artifact@v4
        with:
          name: draft
          path: .agents/outputs/draft-*.md
```
## Part 6: Expanding to Multiple Squads

Once your content squad works, add more squads:
```
.agents/
├── squads/
│   ├── content/          # Content production
│   │   ├── SQUAD.md
│   │   ├── researcher.md
│   │   ├── writer.md
│   │   └── editor.md
│   ├── engineering/      # Code quality
│   │   ├── SQUAD.md
│   │   ├── code-reviewer.md
│   │   ├── test-writer.md
│   │   └── debugger.md
│   └── intelligence/     # Research & monitoring
│       ├── SQUAD.md
│       ├── competitor-tracker.md
│       └── market-analyst.md
└── memory/
    ├── content/
    ├── engineering/
    └── intelligence/
```
### Cross-Squad Coordination

Squads can reference each other’s outputs:
```markdown
# In content/writer.md

## Before Writing
Check if the intelligence squad has relevant research:
- `.agents/memory/intelligence/competitor-briefs/`
- `.agents/memory/intelligence/market-reports/`
Incorporate relevant insights into content.
```
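A scripted version of the same check (a sketch, assuming the brief directories above exist and are populated by the intelligence squad):

```bash
# Surface fresh competitor briefs to the writer before drafting.
BRIEFS=.agents/memory/intelligence/competitor-briefs
if compgen -G "$BRIEFS/*.md" > /dev/null; then
  echo "Intelligence briefs found; have the writer read these first:"
  ls "$BRIEFS"/*.md
fi
```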
## Part 7: Production Checklist

Before running agent teams in production:

### Configuration
- All agents have clear, testable instructions
- Model selection matches task complexity (Haiku for simple, Opus for complex)
- Tool access is scoped appropriately (no unnecessary permissions)
- Memory paths are defined and initialized
### Quality
- Each agent has explicit quality standards
- Pipeline has quality gates between stages
- Error handling defined (what happens if an agent fails?)
- Human review step before final outputs go live
### Operations
- Logging captures agent runs and outputs
- Costs are monitored (track token usage; see the sketch after this list)
- Schedule tested with dry runs
- Rollback plan if agents produce bad output
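One lightweight way to capture run logs and costs (a sketch: the `.agents/logs/` directory and the `total_cost_usd` field are assumptions; inspect the JSON that `claude -p --output-format json` returns in your own setup):

```bash
mkdir -p .agents/logs

# Run a stage in JSON output mode, keep the full result as a log,
# and pull out the reported cost (field name is an assumption).
claude -p --output-format json "Research: $TOPIC" \
  | tee ".agents/logs/research-${SLUG}.json" \
  | jq '.total_cost_usd'
```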
### Security
- Agents don’t have access to secrets unnecessarily
- Outputs are reviewed before publishing
- Rate limits respected
- No PII in agent memory unless necessary
## Common Patterns

### Pattern: Approval Gates
Add human approval between stages:
```bash
# After writer, before editor
echo "Draft ready for review: .agents/outputs/draft-${SLUG}.md"
read -p "Approve to continue? (y/n) " approval
if [ "$approval" != "y" ]; then
  echo "Pipeline stopped for revision"
  exit 1
fi
```
### Pattern: Conditional Routing
Route to different agents based on content type:
```bash
# Detect content type and route
if [[ "$TOPIC" == *"vs"* ]]; then
  claude -p "Use comparison-writer agent..."
elif [[ "$TOPIC" == *"tutorial"* ]]; then
  claude -p "Use tutorial-writer agent..."
else
  claude -p "Use general-writer agent..."
fi
```
### Pattern: Feedback Loops
Editor can request writer revision:
```markdown
# In editor.md

## If Quality Below Threshold
If any quality score is below 6:
1. Document specific issues in edit notes
2. Output to `.agents/outputs/revision-request-{topic-slug}.md`
3. Writer should address issues before editor re-reviews
```
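On the orchestration side, that contract can drive a bounded revision loop (a sketch reusing `$TOPIC` and `$SLUG` from the pipeline script):

```bash
# Re-run editor and writer until the editor stops requesting revisions,
# capped at three rounds so a disagreement cannot loop forever.
for round in 1 2 3; do
  claude -p "You are the editor agent. Read .agents/squads/content/editor.md. Edit draft for: '$TOPIC'"
  REQUEST=".agents/outputs/revision-request-${SLUG}.md"
  [ -f "$REQUEST" ] || break  # no revision requested: done
  claude -p "You are the writer agent. Read .agents/squads/content/writer.md. Address the revision request in $REQUEST for: '$TOPIC'"
  rm "$REQUEST"
done
```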
## Next Steps

You’ve built your first AI agent team with Claude. From here:

1. **Iterate on prompts**: The agent instructions are the product. Refine them based on output quality.
2. **Add monitoring**: Track success rates, token costs, and time-to-completion.
3. **Expand carefully**: Add agents only when you have clear specialization needs.
4. **Document learnings**: Your memory files become institutional knowledge.
Building production agent teams? Contact us for architecture reviews or check our consulting services for hands-on implementation support.