Agents Squads and AWS Agent Squad are two multi-agent frameworks with very different approaches. This comparison helps you decide which one fits your use case.
| Feature | Agents Squads | AWS Agent Squad |
|---|---|---|
| Primary use case | CLI for dev teams & agent builders | Multi-agent orchestration & request routing |
| Architecture | Domain-organized squads | Supervisor routing pattern |
| Best for | Local dev, Claude Code, transparency | AWS customers, LLM routing, chatbots |
| Memory | Persistent file-based (Git-native) | Conversation context management |
| Deployment | Local-first, any environment | AWS Lambda, local, any cloud |
| Pricing | Open source (LLM API costs only) | Open source (AWS + LLM usage) |
| Target users | Developers, teams building AI workflows | AWS developers, enterprise chatbots |
| Language | TypeScript (CLI) + any LLM | Python & TypeScript |
Agents Squads organizes agents into squads based on business domains (marketing, engineering, operations). Each squad is a specialized team with persistent memory and domain expertise.
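Because agents are plain markdown files, a squad is just a directory of them. A purely illustrative sketch of what an agent definition might contain (the path, fields, and layout here are assumptions; the actual schema comes from `squads init`):

```markdown
<!-- .agents/marketing/content-writer.md (hypothetical example) -->
# content-writer

Role: Draft and revise marketing content for the team.
Memory: reads and writes .agents/memory/marketing/
```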
AWS Agent Squad uses a classifier that analyzes incoming requests and routes them to specialized agents. The supervisor pattern coordinates parallel agent execution while maintaining conversation context.
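As a loose illustration of the classifier pattern (all names here are hypothetical, not the real agent-squad API, and a naive keyword score stands in for the LLM classifier):

```python
# Minimal sketch of classifier-based routing. Illustrative only --
# NOT the actual agent-squad API; a keyword score replaces the LLM.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    keywords: list[str]

class Classifier:
    def __init__(self, agents):
        self.agents = agents

    def classify(self, user_input, history=()):
        # Score each agent against the request plus prior turns.
        text = " ".join([user_input, *history]).lower()
        scores = {a.name: sum(kw in text for kw in a.keywords)
                  for a in self.agents}
        return max(scores, key=scores.get)

router = Classifier([
    Agent("billing", ["invoice", "payment", "refund"]),
    Agent("tech-support", ["error", "crash", "install"]),
])
print(router.classify("My payment failed", ["I need a refund"]))  # -> billing
```

In the real framework the classifier is itself an LLM call, which is what makes the routing robust to phrasing the keyword version would miss.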
```bash
# Install
npm install -g squads-cli

# Initialize project
squads init

# Run an agent manually
squads run marketing/content-writer

# Start the scheduler for autonomous ops
squads autonomous start
```

Works anywhere you have Node.js. Agents execute via Claude Code, Cursor, or API calls. No cloud dependencies.
```bash
# Install (Python)
pip install agent-squad
```

```python
# Basic usage
from agent_squad import AgentOrchestrator

orchestrator = AgentOrchestrator()
response = await orchestrator.route_request(
    user_input,
    user_id
)
```

The framework embeds into applications. It integrates with AWS Bedrock but can run anywhere Python runs.
Memory is stored as markdown files in `.agents/memory/`. Agents read and write context directly, creating a transparent, auditable memory system.
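Because memory is plain markdown, any process can read or extend it. A minimal sketch, assuming an illustrative file path and content (the CLI normally manages these files itself):

```python
# Append an insight to a squad's learnings file (illustrative only;
# squads-cli manages these files, but they are ordinary markdown).
from pathlib import Path

memory = Path(".agents/memory/marketing")
memory.mkdir(parents=True, exist_ok=True)

learnings = memory / "learnings.md"
with learnings.open("a", encoding="utf-8") as f:
    f.write("\n## 2025-01-15\n- Short posts outperform long threads.\n")

print(learnings.read_text(encoding="utf-8"))
```

Because the files live in the repo, every memory update shows up as an ordinary Git diff.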
```
.agents/memory/
├── marketing/
│   ├── state.md       # Current context
│   └── learnings.md   # Accumulated insights
└── engineering/
    ├── state.md
    └── learnings.md
```

AWS Agent Squad maintains conversation history for routing decisions. The orchestrator saves conversations before returning responses, enabling context-aware multi-turn interactions.
```python
# Context flows through the orchestrator
orchestrator.save_conversation(
    user_id,
    conversation_history
)

# Classifier uses history for routing
best_agent = classifier.classify(
    input,
    conversation_history
)
```

| | Agents Squads | AWS Agent Squad |
|---|---|---|
| Setup | One command to initialize. Agents are markdown files; no code required. | Python or TypeScript installation. Configure the orchestrator and agents in code. |
| Workflow | Edit agent definitions, commit to Git, run via CLI. Changes are transparent diffs. | Define agents as classes, configure routing logic, integrate into applications. |
| Observability | Execution logs, memory files, and Git history show exactly what agents did and why. | Built-in logging, with optional integration with AWS CloudWatch or custom observability. |
| Learning curve | Low: if you know Git and markdown, you can build agents. | Moderate: requires Python/TypeScript knowledge and an understanding of the orchestration pattern. |
Choose Agents Squads if you need:

- Long-running agents that operate independently with persistent memory
- PR workflows, version control, and visual audit trails for agent changes
- Native integration with Claude's ecosystem and MCP servers
- Every decision, memory, and action visible in version-controlled files
- Markdown-based configuration accessible to anyone

Choose AWS Agent Squad if you need:

- Chatbots, customer support, or conversational AI applications
- A natural fit for Bedrock, Lambda, and the wider AWS ecosystem
- Classifier-based agent selection for complex multi-agent scenarios
- A framework available in both Python and TypeScript
- Multi-agent orchestration as part of a larger application
Both are open source. With Agents Squads, the CLI is completely free and you pay only for LLM API usage from your provider of choice. With AWS Agent Squad, the framework is free and your costs are AWS infrastructure plus LLM API usage.
Choose the framework that fits your use case and get started today.