TL;DR — CrewAI is code-first Python orchestration with maximum flexibility; squads-cli is file-based declarative configuration with Git-native management and built-in scheduling. Choose CrewAI for Python apps and LLM diversity, squads-cli for non-developer accessibility and Claude Code integration.
## The Multi-Agent Framework Decision
Building AI agent teams requires choosing between fundamentally different approaches. CrewAI and squads-cli represent two philosophies: Python-native code orchestration versus file-based declarative configuration.
This comparison reflects production experience with both tools. We built systems with CrewAI before developing squads-cli, so we understand the tradeoffs firsthand.
Quick Summary:
| Aspect | squads-cli | CrewAI |
|---|---|---|
| Configuration | Markdown files | Python code |
| Execution | CLI + scheduler | Python runtime |
| Learning curve | Lower (no code required) | Higher (Python knowledge needed) |
| Flexibility | Convention-based | Code-based (unlimited) |
| Best for | DevOps teams, Claude users | Python developers, complex flows |
## Architecture Comparison
### CrewAI: Code-First Multi-Agent

CrewAI treats agents as Python classes. You define agents, tasks, and processes in code:

```python
from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI  # provides the ChatOpenAI LLM wrapper

researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI",
    backstory="Expert at finding and synthesizing information",
    verbose=True,
    allow_delegation=False,
    llm=ChatOpenAI(model="gpt-4")
)

writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content about AI discoveries",
    backstory="Experienced writer with deep technical knowledge",
    verbose=True,
    allow_delegation=True,
    llm=ChatOpenAI(model="gpt-4")
)

research_task = Task(
    description="Research the latest AI agent frameworks",
    expected_output="Detailed report with sources",
    agent=researcher
)

writing_task = Task(
    description="Write a blog post based on the research",
    expected_output="Publication-ready article",
    agent=writer,
    context=[research_task]  # receives the research task's output
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential
)

result = crew.kickoff()
```
Strengths:
- Full Python ecosystem access
- Fine-grained control over agent behavior
- Built-in memory and delegation
- Active community and documentation
Weaknesses:
- Requires Python knowledge
- Agent definitions scattered across code
- Harder to audit and version control
- Defaults to OpenAI; other providers need explicit per-agent configuration
Key Takeaway — CrewAI gives you full Python ecosystem access at the cost of requiring Python developers to maintain agent definitions scattered across code files.
### squads-cli: File-Based Declarative

squads-cli defines agents as markdown files with YAML frontmatter. No code required:

```markdown
<!-- .agents/squads/marketing/seo-writer.md -->
---
name: seo-writer
squad: marketing
model: claude-sonnet-4
trigger: manual
tools:
  - web_search
  - read_file
  - write_file
---

# SEO Content Writer

## Purpose
Create search-optimized content based on keyword research and competitor analysis.

## Instructions
1. Research the target keyword using web search
2. Analyze top-ranking content for structure
3. Write comprehensive content that addresses search intent
4. Include relevant internal links and calls to action

## Output
Markdown file saved to /content/publications/
```
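An agent file like this is trivially machine-readable. As a sketch (plain Python, not squads-cli's actual loader), splitting the frontmatter from the markdown body takes a few lines; only flat `key: value` pairs and simple lists are handled here:

```python
# Sketch of splitting an agent file into frontmatter and body
# (illustrative only, not the actual squads-cli parser). Handles
# flat "key: value" pairs and "- item" lists; no nesting.
def parse_agent(text: str) -> tuple[dict, str]:
    _, raw_meta, body = text.split("---", 2)
    meta: dict = {}
    current_list = None
    for line in raw_meta.strip().splitlines():
        if line.strip().startswith("- ") and current_list is not None:
            meta[current_list].append(line.strip()[2:])
        else:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if value:
                meta[key] = value
                current_list = None
            else:
                meta[key] = []          # "tools:" opens a list
                current_list = key
    return meta, body.strip()

sample = """---
name: seo-writer
model: claude-sonnet-4
tools:
  - web_search
  - write_file
---
# SEO Content Writer
"""
meta, body = parse_agent(sample)
print(meta["name"], meta["tools"])
# → seo-writer ['web_search', 'write_file']
```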
Run agents via the CLI:

```shell
# Run a specific agent
squads run marketing/seo-writer

# Run an entire squad
squads run marketing

# Check status
squads status
```
Strengths:
- No code required (accessible to non-developers)
- Git-native (agents are just files)
- Clear audit trail (markdown diffs)
- Built for Claude Code ecosystem
- Scheduled execution built-in
Weaknesses:
- Less flexible than code
- Smaller community (newer project)
- Claude-centric (other LLMs require config)
- Convention-driven (opinionated structure)
## Configuration Philosophy
### CrewAI: Explicit Code

Every behavior is coded explicitly. This provides maximum control but requires understanding Python patterns:

```python
# Memory configuration
from crewai.memory import LongTermMemory, ShortTermMemory

crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    long_term_memory=LongTermMemory(storage="sqlite:///memory.db"),
    short_term_memory=ShortTermMemory()
)

# Process types
Process.sequential    # One task at a time
Process.hierarchical  # Manager delegates to workers
Process.consensual    # Agents vote on decisions
```
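The sequential process can be illustrated without any framework at all. In this minimal sketch (plain Python, not CrewAI internals), each task's output becomes the next task's context, which is the behavior `Process.sequential` combined with `context=[...]` gives you:

```python
# Minimal illustration of sequential task chaining: each task's output
# feeds the next task as context, mirroring CrewAI's Process.sequential.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MiniTask:
    name: str
    run: Callable[[str], str]  # takes context, returns output

def run_sequential(tasks: list[MiniTask]) -> str:
    context = ""
    for task in tasks:
        context = task.run(context)  # output becomes the next context
    return context

pipeline = [
    MiniTask("research", lambda ctx: "findings: 3 frameworks compared"),
    MiniTask("write", lambda ctx: f"article based on [{ctx}]"),
]
print(run_sequential(pipeline))
# → article based on [findings: 3 frameworks compared]
```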
### squads-cli: Convention Over Configuration

squads-cli uses file structure and naming conventions. Standard behaviors are automatic:

```text
.agents/
├── squads/
│   ├── marketing/
│   │   ├── SQUAD.md           # Squad definition
│   │   ├── seo-writer.md      # Agent
│   │   ├── social-manager.md  # Agent
│   │   └── analytics.md       # Agent
│   └── engineering/
│       ├── SQUAD.md
│       ├── code-reviewer.md
│       └── test-writer.md
├── memory/
│   ├── marketing/
│   │   ├── state.md           # Current context
│   │   └── learnings.md       # Accumulated insights
│   └── engineering/
│       ├── state.md
│       └── learnings.md
└── skills/
    ├── web-research.md
    └── code-analysis.md
```
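Convention over configuration means agents can be discovered rather than registered. This sketch (illustrative, not the actual squads-cli loader) treats every `.md` file under a squad directory, except `SQUAD.md`, as an agent:

```python
# Sketch of convention-based agent discovery: every .md file under
# .agents/squads/<squad>/ except SQUAD.md is treated as an agent.
# Illustrative only — not the actual squads-cli loader.
from pathlib import Path
import tempfile

def discover_agents(root: Path) -> dict[str, list[str]]:
    squads: dict[str, list[str]] = {}
    for squad_dir in sorted(Path(root).iterdir()):
        if not squad_dir.is_dir():
            continue
        squads[squad_dir.name] = sorted(
            p.stem for p in squad_dir.glob("*.md") if p.name != "SQUAD.md"
        )
    return squads

# Demo against a throwaway tree:
tmp = Path(tempfile.mkdtemp()) / "squads"
(tmp / "marketing").mkdir(parents=True)
for name in ("SQUAD.md", "seo-writer.md", "analytics.md"):
    (tmp / "marketing" / name).write_text("")
print(discover_agents(tmp))
# → {'marketing': ['analytics', 'seo-writer']}
```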
Squads are defined in SQUAD.md:

```markdown
---
name: marketing
description: Content creation and distribution
lead: seo-writer
---

# Marketing Squad

## Mission
Drive organic traffic through high-quality content.

## Agents
- seo-writer: Creates search-optimized articles
- social-manager: Distributes content across channels
- analytics: Tracks performance and identifies opportunities

## Goals
1. Publish 4 blog posts per week
2. Maintain organic traffic growth >10% MoM
3. Achieve featured snippets for target keywords
```
## Execution Models
### CrewAI: Python Runtime

CrewAI runs as a Python process. You integrate it into applications, scripts, or services:

```python
# Run synchronously
result = crew.kickoff()

# Run with inputs (interpolated into task descriptions)
result = crew.kickoff(inputs={
    "topic": "AI agent frameworks",
    "tone": "technical"
})

# Async execution (await must run inside an async function)
import asyncio

async def main():
    return await crew.kickoff_async()

result = asyncio.run(main())
```
Deployment patterns:
- Python scripts (cron jobs)
- FastAPI endpoints
- Celery workers
- AWS Lambda functions
### squads-cli: Native Scheduling

squads-cli includes built-in scheduling via Procrastinate (a Postgres-based job queue):

```yaml
# In agent frontmatter
trigger: scheduled
schedule: "0 9 * * 1-5"  # Weekdays at 9am
executor: local          # or cloud
```

```shell
# Manual execution
squads run marketing/seo-writer

# Daemon mode (scheduler runs continuously)
squads scheduler start

# Check scheduled jobs
squads scheduler status
```
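The schedule string follows standard five-field cron syntax (minute, hour, day of month, month, day of week). A simplified matcher, plain Python and not tied to Procrastinate, shows how an expression like `0 9 * * 1-5` is evaluated; it handles only `*`, plain numbers, comma lists, and `a-b` ranges:

```python
# Simplified five-field cron matcher (minute hour day month weekday).
# Supports "*", numbers, comma lists, and a-b ranges — enough to
# evaluate "0 9 * * 1-5" (weekdays at 9am). No step values (*/n).
def field_matches(field: str, value: int) -> bool:
    for part in field.split(","):
        if part == "*":
            return True
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def cron_matches(expr: str, minute: int, hour: int,
                 day: int, month: int, weekday: int) -> bool:
    fields = expr.split()
    values = (minute, hour, day, month, weekday)
    return all(field_matches(f, v) for f, v in zip(fields, values))

# Monday (weekday=1) at 09:00 matches; Saturday (weekday=6) does not:
print(cron_matches("0 9 * * 1-5", 0, 9, 27, 1, 1))  # → True
print(cron_matches("0 9 * * 1-5", 0, 9, 31, 1, 6))  # → False
```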
Deployment patterns:
- VM with scheduler daemon
- Kubernetes cron jobs
- GitHub Actions workflows
- Direct CLI invocation
## Memory Systems
### CrewAI Memory

CrewAI provides multiple memory types:

```python
from crewai.memory import EntityMemory, RAGMemory
from langchain_openai import OpenAIEmbeddings

# Entity memory (tracks people, places, concepts)
crew = Crew(
    agents=[...],
    memory=True,
    entity_memory=EntityMemory()
)

# RAG memory (vector similarity search)
crew = Crew(
    memory=True,
    rag_memory=RAGMemory(
        embeddings=OpenAIEmbeddings(),
        storage="chroma"
    )
)
```
Memory persists across crew runs, enabling agents to recall past interactions and build knowledge over time.
### squads-cli Memory

squads-cli uses file-based memory that agents read and write directly:

```markdown
<!-- .agents/memory/marketing/state.md -->
# Marketing Squad State

## Current Focus
Q1 2026 SEO content push targeting comparison keywords.

## Active Campaigns
- squads-cli vs CrewAI (in progress)
- Claude Code tutorials (drafting)
- AI agent teams guide (scheduled)

## Recent Performance
- Last week: 3 articles published, 12K organic sessions
- Conversion rate: 2.3% (up from 1.8%)

## Blockers
- Need product team input on pricing page update
```

Agents reference memory in their instructions:

```markdown
## Context
Before writing, review:
- `.agents/memory/marketing/state.md` for current priorities
- `.agents/memory/marketing/learnings.md` for past successes
```
The file-based approach is less sophisticated than vector memory but offers complete transparency. Every piece of context is visible, auditable, and version-controlled.
Key Takeaway — CrewAI offers vector/RAG memory for sophisticated recall. squads-cli uses plain markdown files — less powerful, but every piece of context is visible in `git diff`.
## Tool Integration
### CrewAI Tools

CrewAI integrates with LangChain tools and custom tool definitions:

```python
from crewai import Tool
from langchain_community.tools import DuckDuckGoSearchRun

search_tool = DuckDuckGoSearchRun()

custom_tool = Tool(
    name="analyze_code",
    description="Analyze code for security issues",
    func=lambda code: run_security_scan(code)  # user-defined scanner
)

agent = Agent(
    role="Security Analyst",
    tools=[search_tool, custom_tool]
)
```
### squads-cli Tools

squads-cli leverages Claude Code's native tool system. Agents specify tools in frontmatter:

```yaml
tools:
  - web_search  # WebSearch tool
  - read_file   # Read tool
  - write_file  # Write tool
  - bash        # Bash tool
  - glob        # Glob tool
  - grep        # Grep tool
```
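Because tool names are free-form strings in frontmatter, a typo like `web_serch` would otherwise surface only at run time. A small validation pass catches it at load time; the allowlist here simply mirrors the tools listed above (an assumption for illustration, not squads-cli's actual registry):

```python
# Validate frontmatter tool names against a known set — a sketch of the
# kind of check a loader can run before execution. The allowlist mirrors
# the tools listed above (an assumption, not an actual squads-cli API).
KNOWN_TOOLS = {"web_search", "read_file", "write_file", "bash", "glob", "grep"}

def validate_tools(requested: list[str]) -> list[str]:
    """Return the unknown tool names (an empty list means all are valid)."""
    return [t for t in requested if t not in KNOWN_TOOLS]

print(validate_tools(["web_search", "write_file"]))  # → []
print(validate_tools(["web_serch"]))                 # → ['web_serch']
```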
Custom tools are defined as MCP servers or skills:

```markdown
<!-- .claude/skills/competitor-analysis.md -->
# Competitor Analysis Skill

## Description
Analyze competitor websites for positioning, features, and pricing.

## Steps
1. Fetch competitor homepage with WebFetch
2. Extract key messaging and value props
3. Compare against our positioning
4. Generate differentiation report
```
## Cost Comparison
### CrewAI Costs
CrewAI costs depend on your LLM choice:
| Model | Cost per 1M tokens | Typical crew run |
|---|---|---|
| GPT-4 | $30 input, $60 output | $0.50-2.00 |
| GPT-4o | $5 input, $15 output | $0.10-0.50 |
| Claude Sonnet | $3 input, $15 output | $0.08-0.40 |
| Claude Haiku | $0.25 input, $1.25 output | $0.01-0.05 |
Multi-agent crews multiply costs: a 4-agent crew running sequential tasks costs roughly 4x a single-agent run.
### squads-cli Costs
squads-cli primarily uses Claude models via Anthropic API:
| Model | Use Case | Cost per run |
|---|---|---|
| Opus | Complex reasoning, architecture | $0.50-5.00 |
| Sonnet | Standard tasks, code review | $0.10-1.00 |
| Haiku | Data gathering, formatting | $0.01-0.10 |
squads-cli encourages model routing by task complexity. Simple agents use Haiku; complex agents use Opus.
The Numbers — A 4-agent CrewAI crew on GPT-4 costs $2-8 per run. squads-cli with model routing (Haiku for simple, Opus for complex) can achieve similar quality at $0.50-2.00 per run.
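The per-run figures above can be sanity-checked with a back-of-the-envelope estimator. The per-1M-token prices come from the CrewAI table above plus an assumed Opus price of $15 input / $75 output per 1M tokens; the token counts per agent run are illustrative assumptions:

```python
# Back-of-the-envelope run-cost estimator. Prices are per 1M tokens,
# taken from the tables above; the Opus price and per-agent token
# counts are assumptions for illustration.
PRICES = {  # model: (input $/1M tokens, output $/1M tokens)
    "gpt-4":  (30.00, 60.00),
    "sonnet": (3.00, 15.00),
    "haiku":  (0.25, 1.25),
    "opus":   (15.00, 75.00),  # assumed, not from the tables above
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# 4-agent crew, all on GPT-4, ~20K input / 5K output tokens per agent:
gpt4_crew = 4 * run_cost("gpt-4", 20_000, 5_000)
# Same workload routed: 3 simple agents on Haiku, 1 complex agent on Opus:
routed = 3 * run_cost("haiku", 20_000, 5_000) + run_cost("opus", 20_000, 5_000)
print(f"${gpt4_crew:.2f} vs ${routed:.2f}")  # → $3.60 vs $0.71
```

Both results land inside the ranges quoted above, which is all a sketch like this is meant to verify.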
## Use Case Recommendations
### Choose CrewAI When:

1. **You're building a Python application**

   ```python
   # CrewAI integrates naturally into Python codebases
   @app.post("/analyze")
   async def analyze(request: AnalysisRequest):
       crew = AnalysisCrew()
       result = await crew.kickoff_async(inputs=request.dict())
       return result
   ```

2. **You need complex agent interactions**
   - Hierarchical management (manager agents delegate to workers)
   - Consensus-based decisions (agents vote)
   - Dynamic task routing based on agent expertise

3. **You want LLM flexibility**
   - Mix OpenAI, Anthropic, and local models in one crew
   - Experiment with different models per agent role

4. **You have Python developers**
   - Team can maintain and extend code-based configurations
   - Existing Python infrastructure (monitoring, deployment)
### Choose squads-cli When:

1. **You want Git-native agent management**

   ```shell
   # Agents are files - PR workflow applies
   git diff .agents/squads/marketing/seo-writer.md
   # +## Additional Instructions
   # +Always include a FAQ section based on People Also Ask data.
   ```

2. **You're building a Claude-centric system**
   - Full Claude Code integration (computer use, tool use)
   - CLAUDE.md conventions for project context
   - Sub-agent spawning for parallel execution

3. **You need non-developers to manage agents**
   - Markdown is accessible to anyone
   - No deployment pipeline for agent changes
   - Visual audit trail via file diffs

4. **You want built-in scheduling**
   - Cron-based triggers without external infrastructure
   - Execution tracking and logging included
   - Human-in-loop approval workflows
## Migration Path
### CrewAI to squads-cli

If you're considering switching:

```python
# CrewAI agent
researcher = Agent(
    role="Research Analyst",
    goal="Find market intelligence",
    backstory="Expert researcher with 10 years experience",
    tools=[search_tool, scrape_tool]
)
```
Becomes:

```markdown
<!-- .agents/squads/intelligence/researcher.md -->
---
name: researcher
squad: intelligence
model: claude-sonnet-4
tools:
  - web_search
  - web_fetch
---

# Research Analyst

## Purpose
Find market intelligence through systematic research.

## Background
Expert researcher with deep experience in competitive analysis and market trends.

## Tools
- web_search: Find relevant sources
- web_fetch: Extract content from URLs

## Instructions
1. Identify key research questions
2. Search for authoritative sources
3. Extract and synthesize findings
4. Cite all sources with URLs
```
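The translation is mechanical enough to script. This sketch (a hypothetical helper, not a squads-cli command) emits frontmatter plus body from a few CrewAI-style fields:

```python
# Sketch of mechanically converting CrewAI-style agent fields into a
# squads-cli markdown file (hypothetical helper, not a built-in command).
def crew_agent_to_markdown(role: str, goal: str, backstory: str,
                           squad: str, model: str, tools: list[str]) -> str:
    name = role.lower().replace(" ", "-")  # "Research Analyst" -> "research-analyst"
    front = [f"name: {name}", f"squad: {squad}", f"model: {model}", "tools:"]
    front += [f"  - {t}" for t in tools]
    return "\n".join([
        "---", *front, "---", "",
        f"# {role}", "", "## Purpose", goal, "", "## Background", backstory,
    ])

md = crew_agent_to_markdown(
    role="Research Analyst",
    goal="Find market intelligence",
    backstory="Expert researcher with deep market experience",
    squad="intelligence",
    model="claude-sonnet-4",
    tools=["web_search", "web_fetch"],
)
print(md.splitlines()[1])  # → name: research-analyst
```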
### squads-cli to CrewAI

For maximum flexibility:

```markdown
<!-- squads-cli agent -->
---
name: code-reviewer
model: claude-sonnet-4
tools:
  - read_file
  - grep
---

# Code Reviewer
Review PRs for security and quality issues.
```
Becomes:

```python
from crewai import Agent
from langchain_anthropic import ChatAnthropic  # Anthropic LLM wrapper

code_reviewer = Agent(
    role="Code Reviewer",
    goal="Review PRs for security and quality issues",
    backstory="Senior engineer focused on code quality",
    tools=[read_file_tool, grep_tool],
    llm=ChatAnthropic(model="claude-sonnet-4")
)
```
## Production Considerations
### Observability
CrewAI:

```python
# Built-in verbose mode
crew = Crew(verbose=True)

# Integration with LangSmith
from langsmith import Client
client = Client()
# Traces appear in the LangSmith dashboard
```
squads-cli:

```shell
# Execution logs
cat .agents/logs/marketing/seo-writer-1234567890.log

# Memory audit trail
git log --oneline .agents/memory/marketing/

# Dashboard command
squads dashboard
```
### Error Handling
CrewAI:

```python
try:
    result = crew.kickoff()
except AgentExecutionError as e:
    logger.error(f"Agent {e.agent} failed: {e.message}")
    # Retry logic, fallback handling
```
squads-cli:

```shell
# Automatic retry on failure
squads run marketing/seo-writer --retry 3

# Execution status
squads status marketing/seo-writer
# Last run: FAILED (rate limit exceeded)
# Next retry: 2026-01-27 10:30:00
```
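Conceptually, a flag like `--retry 3` means retry with backoff on transient failures such as rate limits. A sketch of those semantics (illustrative only, not the actual squads-cli implementation):

```python
# Conceptual sketch of retry-with-backoff semantics behind a flag like
# "--retry 3" (illustrative — not the actual squads-cli implementation).
import time

def run_with_retry(fn, retries: int = 3, base_delay: float = 1.0):
    for attempt in range(retries + 1):
        try:
            return fn()
        except RuntimeError as exc:  # e.g. rate limit exceeded
            if attempt == retries:
                raise  # out of retries: surface the failure
            delay = base_delay * (2 ** attempt)  # 1s, 2s, 4s, ...
            print(f"attempt {attempt + 1} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

# Demo: fails twice, then succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limit exceeded")
    return "ok"

print(run_with_retry(flaky, retries=3, base_delay=0))  # → ok
```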
### Scaling
CrewAI: Scale via Python infrastructure (workers, queues, async).

squads-cli: Scale via more agents, parallel execution, and cloud workers.

Both frameworks handle moderate scale (tens of agents) comfortably.
Important — Neither framework scales to hundreds of agents out of the box. At that scale, you’ll need custom orchestration regardless of which you choose.
## The Verdict
CrewAI is the right choice if you’re building Python applications that need embedded multi-agent capabilities, want maximum flexibility, or require LLM provider diversity.
squads-cli is the right choice if you want Git-native agent management, non-developer accessibility, built-in scheduling, and deep Claude Code integration.
Neither is universally better. The best framework is the one that fits your team’s skills, infrastructure, and use case.
## Getting Started
### squads-cli

```shell
# Install
pip install squads-cli

# Initialize in your project
squads init

# Create your first agent
squads agent create marketing/seo-writer

# Run it
squads run marketing/seo-writer
```
Documentation: docs.agents-squads.com
### CrewAI

```shell
# Install
pip install crewai

# Create a project
crewai create my_project

# Run the example
cd my_project && python main.py
```
Documentation: docs.crewai.com
Building multi-agent systems? Check our AI Agent Squads Guide for architectural patterns or Agent Orchestration Patterns for advanced workflows.