Research

Agent Orchestration Patterns: Static vs Dynamic

By Agents Squads · 10 min

The Orchestration Question

Multi-agent systems need coordination. The fundamental question: should agent sequencing be predetermined (static) or decided at runtime (dynamic)?

Neither is universally better. The choice depends on task characteristics, cost constraints, and debugging requirements. This analysis covers how major frameworks handle orchestration and when to use each approach.

Definitions

Static Orchestration

Predetermined flows where agent sequencing is defined at design time.

User Request → Lead Agent → Worker A → Worker B → Output
                   ↓ (fixed)    ↓ (fixed)
              GitHub Issue    PR Creation

Characteristics:

- Agent sequencing fixed at design time
- Lower token cost and latency (no runtime routing decisions)
- Predictable, easily debugged execution traces
- Requires explicit handling for errors and edge cases

Dynamic Orchestration

LLM-driven routing where agent selection happens at runtime based on task state.

User Request → Orchestrator LLM → [Evaluates] → Agent X
                    ↓ (decides)
              [Evaluates result] → Agent Y or Z
                    ↓ (decides)
              [Continues until done]

Characteristics:

- Agent selection made at runtime by an LLM
- Adapts to unpredictable task scope
- Higher token cost and latency from routing decisions
- Harder to debug, but may self-correct on errors

Framework Comparison

How major frameworks handle orchestration:

| Framework | Static Options | Dynamic Options | Hybrid |
| --- | --- | --- | --- |
| Claude Agent SDK | Programmatic agents, MD files | Task tool auto-delegation | Yes |
| OpenAI Agents SDK | Code-driven chaining | LLM-driven handoffs | Yes |
| LangGraph | Sequential, Parallel nodes | Conditional routing | Yes |
| CrewAI | Sequential process | Hierarchical (manager) | Yes |
| AutoGen | RoundRobin, Swarm | SelectorGroupChat | Yes |
| Google ADK | SequentialAgent, ParallelAgent | LLM Transfer, AgentTool | Yes |

All major frameworks support both patterns. The question isn’t capability—it’s which to use when.

Claude Agent SDK (Anthropic)

Static Pattern (Programmatic/Filesystem Agents):

const options = {
  agents: {
    'code-reviewer': {
      description: 'Security-focused code reviewer',
      prompt: 'You are a security expert...',
      tools: ['Read', 'Grep', 'Glob'],
      model: 'sonnet'
    }
  }
};

Dynamic Pattern (Task Tool Auto-Delegation): The orchestrator decides when to spawn sub-agents. Sub-agents run with isolated contexts and return summaries.

Key differentiators:

Performance: An orchestrator-worker setup with Claude Opus 4 leading Sonnet 4 sub-agents shows a 90% improvement on complex research tasks versus a single-agent baseline.

OpenAI Agents SDK

Code-Driven (Static):

result = await Runner.run(research_agent, query)
outline = await Runner.run(outline_agent, result.final_output)
content = await Runner.run(writer_agent, outline.final_output)
final = await Runner.run(critic_agent, content.final_output)

LLM-Driven (Dynamic):

coordinator = Agent(
    name="coordinator",
    instructions="Route requests to specialists",
    handoffs=[billing_agent, support_agent, technical_agent],
)
# LLM decides: "This is a billing question" → hands off

LangGraph

Graph-based with explicit edges for static flows, conditional functions for dynamic routing:

graph.add_conditional_edges(
    "classifier",
    route_based_on_category,
    {"billing": "billing_agent", "technical": "tech_agent"}
)
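The routing function referenced above can be an ordinary Python function that inspects the graph state and returns the name of the next node. A minimal sketch, assuming the classifier node writes a `category` key into state (the key name and fallback are assumptions, not LangGraph requirements):

```python
def route_based_on_category(state: dict) -> str:
    """Return the next node name based on the classifier's output.

    Assumes the classifier wrote a 'category' key into state;
    unknown or missing categories fall back to the technical agent.
    """
    if state.get("category") == "billing":
        return "billing_agent"
    return "tech_agent"
```

The returned string is looked up in the mapping passed to `add_conditional_edges`, so the function never needs to know about graph wiring.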

CrewAI

Sequential (Static): Tasks execute linearly.
Hierarchical (Dynamic): A manager agent delegates and validates.

Trade-off: Hierarchical adds a manager LLM call for every delegation decision.

Cost/Benefit Analysis

| Aspect | Static | Dynamic |
| --- | --- | --- |
| Token cost | Lower | Higher (routing decisions) |
| Latency | Lower | Higher (routing overhead) |
| Predictability | High | Low |
| Adaptability | Low | High |
| Debug complexity | Simple | Complex |
| Error recovery | Explicit handling | May self-correct |

Evidence: Dynamic orchestration adds 15-25% token overhead for routing decisions. For simple tasks, this overhead exceeds the benefit.
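A back-of-the-envelope calculation makes the trade-off concrete. The token counts below are illustrative assumptions, not measurements:

```python
def routing_overhead_ratio(task_tokens: int,
                           routing_tokens_per_decision: int,
                           decisions: int) -> float:
    """Fraction of total tokens spent on routing decisions."""
    routing = routing_tokens_per_decision * decisions
    return routing / (task_tokens + routing)

# Short task, three routing decisions: overhead dominates (~31%).
short_ratio = routing_overhead_ratio(task_tokens=2_000,
                                     routing_tokens_per_decision=300,
                                     decisions=3)

# Long research task, same routing cost: overhead is marginal (~2%).
long_ratio = routing_overhead_ratio(task_tokens=50_000,
                                    routing_tokens_per_decision=300,
                                    decisions=3)
```

The same absolute routing cost is a large fraction of a small task and a rounding error on a large one, which is why dynamic orchestration pays off only when the task itself is substantial.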

When to Use Each

Use Static Orchestration When:

| Scenario | Why Static Works |
| --- | --- |
| Predictable workflows | Monitoring → Analysis → Report |
| Cost-sensitive operations | No routing overhead |
| Debugging/compliance needs | Clear execution trace |
| Writing-intensive tasks | Shared context crucial |
| Well-defined pipelines | Issue → Fix → PR → Review |

LangChain research confirms “writing-intensive tasks” and “domains requiring shared context” favor single/static agents.

Use Dynamic Orchestration When:

| Scenario | Why Dynamic Works |
| --- | --- |
| Unpredictable task scope | Don't know which agents are needed upfront |
| Complex research | May need multiple directions |
| User-facing routing | "What's my billing status?" |
| Multi-domain problems | Combine specialists as needed |
| Exploration tasks | Unknown number of steps |

Academic research (arXiv:2505.19591) shows dynamic orchestration “achieves superior performance with reduced computational costs” for complex problems where the path isn’t known upfront.

The Hybrid Approach

Most production systems benefit from hybrid orchestration:

Keep Static For:

| Workflow | Pattern | Rationale |
| --- | --- | --- |
| Monitoring | Scheduled → Report | Predictable, cost-sensitive |
| PR Review | Event → Review → Decision | Fixed checklist |
| Issue Solving | Issue → Solution → PR | Well-defined pipeline |
| Content Updates | Brief → Issues → Build | Sequential dependency |

Add Dynamic For:

| Workflow | Pattern | Rationale |
| --- | --- | --- |
| Research | Route to investigators | Unknown scope |
| User Requests | LLM routes to squad | Unpredictable intent |
| Complex Debugging | Route by error type | Need different specialists |
| Cross-Domain Work | Coordinator selects squads | Multi-domain |

Implementation

Phase 1: Enhanced Lead Pattern (Low Effort)

Enhance static leads with conditional sub-agent spawning:

## Enhanced Lead Pattern

1. Lead receives request
2. Lead analyzes: "What type of task is this?"
3. Lead spawns appropriate worker(s)
4. Lead evaluates results
5. Lead decides: done, or spawn more workers?
6. Lead synthesizes final output

This adds adaptability without full dynamic orchestration. The lead is still the single point of coordination, but it can adjust based on task needs.
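The six steps above can be sketched as a single coordination loop. Everything here is a stub: `analyze`, `spawn_worker`, and `is_done` stand in for real LLM calls, and the function names are assumptions for illustration:

```python
def enhanced_lead(request: str, spawn_worker, analyze, is_done,
                  max_rounds: int = 5) -> list:
    """Static lead with conditional sub-agent spawning.

    The lead stays the single point of coordination but decides,
    round by round, whether more workers are needed.
    """
    results = []
    for _ in range(max_rounds):                  # bound the loop (anti-pattern: no fallback)
        task_type = analyze(request, results)    # "What type of task is this?"
        results.append(spawn_worker(task_type))  # spawn the appropriate worker
        if is_done(results):                     # done, or spawn more workers?
            break
    return results                               # lead synthesizes output from these
```

The key property is that adaptability lives inside one bounded loop owned by the lead, rather than in free-form LLM routing.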

Phase 2: Router Agent (Medium Effort)

Central router for user requests:

## Router Pattern

User: "Help me understand our agent costs"

Router LLM evaluates:
- Involves costs → finance squad?
- Involves agents → engineering squad?
- Internal analysis → intel squad?

Router: Spawns intel-lead with request context

The router handles the initial classification. Once routed, the receiving squad uses its normal (often static) workflows.
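The router does not have to be an LLM on day one; a keyword baseline makes the routing contract concrete, with an LLM swapped in later behind the same function signature. The squad names and keywords below are assumptions for illustration:

```python
SQUAD_KEYWORDS = {
    "finance": ["cost", "budget", "spend"],
    "engineering": ["agent", "deploy", "bug"],
    "intel": ["analysis", "understand", "report"],
}

def route_request(request: str) -> str:
    """Pick the squad whose keywords best match the request.

    Falls back to 'intel' when nothing matches; once routed, the
    receiving squad runs its normal (often static) workflow.
    """
    text = request.lower()
    scores = {squad: sum(kw in text for kw in kws)
              for squad, kws in SQUAD_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "intel"
```

Requests like "Help me understand our agent costs" show why routing is genuinely ambiguous: they score in multiple squads, which is exactly the case where an LLM classifier earns its overhead.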

Phase 3: Full Dynamic (High Effort, Defer)

Full dynamic orchestration (RL-trained coordinators, learned routing) requires significant training data. Wait until you have enough execution traces to train on.

Handoff Protocol

For any orchestration pattern, handoffs need structure:

Handoff Message

{
  "from_agent": "research-lead",
  "to_agent": "technical-investigator",
  "task": {
    "objective": "Investigate context engineering patterns",
    "constraints": ["Max 2 hours", "Focus on practical"],
    "expected_output": "Deep-dive document with evidence",
    "context_summary": "Building multi-agent systems, need context optimization"
  },
  "callback": {
    "on_complete": "Update GitHub issue #2",
    "on_failure": "Escalate to human"
  }
}

Handoff Response

{
  "from_agent": "technical-investigator",
  "to_agent": "research-lead",
  "status": "complete",
  "result_summary": "Created deep-dive covering 4 techniques...",
  "artifacts": ["research/analyses/context-engineering/..."],
  "follow_up_needed": ["Update skill", "Add tracking"]
}

Structured handoffs work for both static and dynamic patterns. The difference is who decides the next step.
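Typing the handoff makes missing fields fail fast instead of silently degrading the downstream agent. A minimal sketch with dataclasses, using the field names from the JSON above (the validation rule is an assumption):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    objective: str
    expected_output: str
    constraints: list = field(default_factory=list)
    context_summary: str = ""

@dataclass
class Handoff:
    from_agent: str
    to_agent: str
    task: Task
    callback: dict = field(default_factory=dict)

    def __post_init__(self):
        # Reject underspecified handoffs early
        # (anti-pattern 3: delegation without context).
        if not self.task.objective or not self.task.context_summary:
            raise ValueError("handoff missing objective or context summary")
```

The same structure serializes to the JSON shown above, so static pipelines and dynamic routers can share one handoff schema.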

Anti-Patterns

1. Over-Dynamic

Pattern: Using LLM routing for every decision
Problem: High token cost, unpredictable behavior
Fix: Reserve dynamic for genuinely unpredictable tasks

2. Under-Dynamic

Pattern: Rigid static flows that can't adapt
Problem: Fails on edge cases
Fix: Add conditional branches for known variations

3. Delegation Without Context

Pattern: Routing to a specialist without enough context
Problem: Agent lacks the information to succeed
Fix: Use an explicit handoff protocol

4. No Fallback

Pattern: Dynamic routing without error handling
Problem: Infinite loops, stuck states
Fix: Max iterations, timeout, human escalation
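The fix can be made mechanical: wrap any dynamic routing loop with an iteration cap and an escalation path. `route` and `execute` are stand-ins for real routing and agent calls:

```python
def run_with_fallback(request, route, execute, max_iterations: int = 8) -> dict:
    """Dynamic routing loop with guard rails.

    Stops after max_iterations and escalates to a human instead of
    looping forever or sitting in a stuck state.
    """
    state = {"request": request, "done": False, "history": []}
    for _ in range(max_iterations):
        agent = route(state)           # dynamic decision: which agent next?
        state = execute(agent, state)  # run that agent, update state
        if state.get("done"):
            return {"status": "complete", "history": state["history"]}
    return {"status": "escalated",
            "reason": "max iterations reached",
            "history": state["history"]}
```

A wall-clock timeout would wrap the same loop; the structure is identical.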

5. Shared Context Pollution

Pattern: All agents see all context
Problem: Context rot, confusion, high cost
Fix: Fresh context per worker, summaries for handoffs

Metrics to Track

| Metric | What It Measures | Target |
| --- | --- | --- |
| Routing accuracy | % correct agent selection | > 90% |
| Handoff success | % handoffs completing | > 85% |
| Dynamic overhead | Additional tokens for routing | < 20% |
| Escalation rate | % requiring human intervention | < 10% |

Decision Framework

Is the workflow predictable?
├── Yes → Use static orchestration
└── No → Is cost critical?
    ├── Yes → Use enhanced lead (Phase 1)
    └── No → Is scope bounded?
        ├── Yes → Use router pattern (Phase 2)
        └── No → Consider full dynamic (Phase 3)

Most workflows are predictable. Start static, add dynamism only where needed.
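The decision tree reads directly as a function, with one boolean answered per workflow. Phase names follow the implementation section:

```python
def choose_orchestration(predictable: bool, cost_critical: bool,
                         scope_bounded: bool) -> str:
    """Encode the decision framework as a simple cascade of checks."""
    if predictable:
        return "static"
    if cost_critical:
        return "enhanced lead (Phase 1)"
    if scope_bounded:
        return "router pattern (Phase 2)"
    return "full dynamic (Phase 3)"
```

Evaluating it per workflow in a squad gives a quick orchestration audit: most entries come back "static", matching the observation above.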

Summary

| Approach | Best For | Token Overhead | Debugging |
| --- | --- | --- | --- |
| Static | Predictable workflows | Baseline | Simple |
| Enhanced Lead | Semi-predictable | +10-15% | Moderate |
| Router | User requests | +15-20% | Moderate |
| Full Dynamic | Complex exploration | +20-30% | Complex |

The hybrid approach works: static for predictable workflows, dynamic for unpredictable ones. Start with Phase 1 (enhanced leads) and add complexity only when measurements show it’s needed.


Sources: Claude Agent SDK (Anthropic), OpenAI Agents SDK, LangGraph, CrewAI, AutoGen, Google ADK documentation. Academic: arXiv:2505.19591 (Multi-Agent Collaboration via Evolving Orchestration).
