
Orchestration Patterns

Seven proven patterns for structuring agent teams. Choose based on your task’s coordination needs.

For a quick comparison table, see Pattern Quick Reference. For how each pattern works internally, see Concepts.


Parallel Specialists

Multiple specialists work simultaneously, each with a different focus.

When to use: Code reviews, audits, multi-perspective analysis. Tasks are independent and don’t share data.

Example prompt:

Create a team to review PR #42 with three specialists:
- Security reviewer for vulnerabilities
- Code reviewer for bugs and performance
- Architecture reviewer for design concerns
Have each send findings to team-lead, then synthesize.

Key detail: Spawn all specialists in a single burst. Each works independently, so no task dependencies needed.
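In tool-call form, the burst is three back-to-back Task calls. A sketch with illustrative team and agent names (only sdlc:security-reviewer appears elsewhere in this guide; substitute the review agents your setup provides):

```
// Spawn all three reviewers at once; no dependencies between them.
Task({
  team_name: "pr-42-review",
  name: "security-reviewer",
  subagent_type: "sdlc:security-reviewer",
  prompt: "Review PR #42 for vulnerabilities. Send findings to team-lead.",
  run_in_background: true
})
Task({
  team_name: "pr-42-review",
  name: "code-reviewer",
  subagent_type: "general-purpose",
  prompt: "Review PR #42 for bugs and performance issues. Send findings to team-lead.",
  run_in_background: true
})
Task({
  team_name: "pr-42-review",
  name: "architecture-reviewer",
  subagent_type: "general-purpose",
  prompt: "Review PR #42 for design concerns. Send findings to team-lead.",
  run_in_background: true
})
```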


Pipeline

Each stage depends on the previous. Work flows linearly through phases.

When to use: Feature development, multi-phase workflows where each step builds on the last.

Example prompt:

Create a pipeline team for OAuth2:
1. Research best practices (adr:adr-researcher)
2. Create implementation plan (Plan agent)
3. Implement (general-purpose)
4. Write tests (general-purpose)
5. Final security review (sdlc:security-reviewer)
Each stage should wait for the previous to complete.

Key detail: Use TaskUpdate with addBlockedBy to create the dependency chain. The system auto-unblocks tasks when dependencies complete.
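The chain from the prompt above can be wired explicitly with TaskUpdate, assuming the five stage tasks already exist with IDs 1 through 5:

```
// Each stage is blocked by the one before it; the system
// auto-unblocks a task once its dependency completes.
TaskUpdate({ taskId: "2", addBlockedBy: ["1"] })  // plan waits for research
TaskUpdate({ taskId: "3", addBlockedBy: ["2"] })  // implementation waits for plan
TaskUpdate({ taskId: "4", addBlockedBy: ["3"] })  // tests wait for implementation
TaskUpdate({ taskId: "5", addBlockedBy: ["4"] })  // security review waits for tests
```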


Swarm

Workers grab available tasks from a shared pool. The pattern is self-organizing and naturally load-balancing.

When to use: Many similar, independent tasks like file reviews, migrations, test writing.

Example prompt:

Create a swarm team to review these 10 files for security issues.
Spawn 3 workers that each grab the next available file, review it,
and move on until all files are done.

Tips:

  • 3 workers is a good starting point; add more for large task pools
  • Workers should check TaskList() after completing each task
  • Each task should be self-contained (one file, one module, one endpoint)
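Spawning the swarm is three identical Task calls; each worker's prompt tells it to keep pulling from the pool. A sketch with illustrative names and prompt wording:

```
// Three interchangeable workers sharing one task pool.
for (const n of ["worker-1", "worker-2", "worker-3"]) {
  Task({
    team_name: "security-swarm",
    name: n,
    subagent_type: "general-purpose",
    prompt: "Grab the next unclaimed file task via TaskList(), review the file for security issues, report findings, then check TaskList() again. Repeat until no tasks remain.",
    run_in_background: true
  })
}
```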

Research + Implementation

Research first, then implement using the findings. Clean phase separation.

When to use: When implementation benefits from prior research. No team needed — uses plain subagents.

Example prompt:

First, research caching best practices for our API using an adr:adr-researcher agent.
Then use the findings to implement caching in the user controller with a general-purpose agent.

Key detail: This pattern uses sequential subagents, not a full team. The research result flows directly into the implementation prompt.
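As tool calls, this is two sequential Task invocations. A sketch assuming a foreground Task call (run_in_background omitted) returns the agent's result; no team_name, since no team is involved:

```
// Phase 1: read-only research agent, run in the foreground
// so the result comes back to the caller.
Task({
  subagent_type: "adr:adr-researcher",
  prompt: "Research caching best practices for our API."
})

// Phase 2: feed the research findings into the implementation prompt.
Task({
  subagent_type: "general-purpose",
  prompt: "Using these research findings: <findings from phase 1>, implement caching in the user controller."
})
```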


Plan Approval

Require a teammate to plan before implementing. The lead reviews the plan and approves or rejects it.

When to use: Database migrations, security-sensitive changes, architectural decisions — any high-risk work.

Example prompt:

Spawn an architect teammate in plan mode to design the database migration.
Don't let them implement until I've approved the plan.
Only approve plans that include rollback procedures and data validation.

Key detail: Use the mode: "plan" parameter when spawning to enforce plan approval. The lead can set approval criteria in its prompt.
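A sketch of the spawn call, with mode: "plan" enforcing the approval gate (team and agent names and prompt wording are illustrative):

```
// mode: "plan" forces the teammate to submit a plan for approval
// before it can make any changes.
Task({
  team_name: "db-migration",
  name: "architect",
  subagent_type: "general-purpose",
  mode: "plan",
  prompt: "Design the database migration. Your plan must include rollback procedures and data validation steps.",
  run_in_background: true
})
```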


Multi-File Refactoring

Coordinated changes across multiple files with fan-in dependencies.

When to use: Refactoring that spans models, controllers, and tests. Each file can be changed independently, but integration testing must wait for all changes.

Example prompt:

Create a team to refactor the auth module:
- Worker 1: Refactor User model (task #1)
- Worker 2: Refactor Session controller (task #2)
- Worker 3: Update all specs (task #3, blocked by #1 and #2)
Workers 1 and 2 can work in parallel. Worker 3 waits for both to finish.

Key detail: Fan-in dependencies ensure the test worker doesn’t start until all code changes are complete. Use TaskUpdate({ taskId: "3", addBlockedBy: ["1", "2"] }).


RLM

Divide large files into partitions, analyze each with parallel analyst agents, then synthesize. Supports content-aware chunking (code, CSV, JSON, logs, prose) and multi-file directory analysis.

When to use: Large log analysis, data exports, full-codebase review, CSV processing — any content > ~1500 lines. Also: directory analysis with mixed content types needing cross-file insights.

Example prompt (single file):

Analyze this 8000-line production log for error patterns.
Partition it into 8 chunks. Spawn analyst agents to review
each partition in parallel. Each analyst reports: error types,
frequency counts, temporal patterns, and outliers.
Synthesize all reports into a consolidated analysis.

Example prompt (multi-file directory):

Use the multi-file RLM pattern to analyze the src/ directory.
Detect content types per file, partition by type-specific strategies,
spawn mixed analyst types, and produce a cross-file synthesis.

Key details:

  • Automatic content-type detection (extension mapping + content sniffing)
  • Type-specific partitioning preserves semantic boundaries (functions, CSV headers, valid JSON)
  • Content-type-specific analysts: code, data, JSON, general-purpose
  • For multi-file directories: small files batched by type, two-phase synthesis (per-type then cross-type), findings written to task descriptions to protect Team Lead context
  • See swarm:rlm-pattern for full documentation

Do NOT override analyst models. Leave model unset — Haiku is correct for structured analysis.


Agent Type Selection

When spawning teammates, pick the agent type that matches the task:

// Research phase — read-only agent is sufficient
Task({
  team_name: "my-team",
  name: "researcher",
  subagent_type: "adr:adr-researcher",
  prompt: "Research OAuth2 best practices...",
  run_in_background: true
})

// Implementation phase — needs full tool access
Task({
  team_name: "my-team",
  name: "implementer",
  subagent_type: "general-purpose",
  prompt: "Implement OAuth2 authentication...",
  run_in_background: true
})

Tips:

  • Use model: "haiku" for fast, cheap Explore agents
  • Use general-purpose when the agent needs to edit files
  • Use specialized review agents for focused audits
  • Never assign implementation work to read-only agents (Explore, Plan)
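For example, a minimal sketch of spawning a cheap exploration agent. The "Explore" subagent_type value is an assumption; check the Agent Types guide for the exact name your setup uses:

```
// Fast, cheap, read-only exploration; model: "haiku" keeps cost down.
// The subagent_type value here is illustrative.
Task({
  team_name: "my-team",
  name: "explorer",
  subagent_type: "Explore",
  model: "haiku",
  prompt: "Map the modules under src/ and summarize their responsibilities.",
  run_in_background: true
})
```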

For the full agent selection guide, see Agent Types.


Choosing a Pattern

  • Independent tasks, same type -> Swarm
  • Independent tasks, different focus -> Parallel Specialists
  • Sequential phases -> Pipeline
  • Learn then build -> Research + Implementation
  • Risky changes -> Plan Approval
  • Cross-file changes with integration step -> Multi-File Refactoring
  • Large document analysis -> RLM

Common Pitfalls

When two teammates edit the same file, one overwrites the other's changes. Break the work up so each teammate owns a different set of files.

Task sizing matters:

  • Too small: coordination overhead exceeds the benefit
  • Too large: teammates work too long without check-ins
  • Just right: self-contained units that produce a clear deliverable

Shut down all teammates before calling TeamDelete(). Orphaned tmux sessions can be cleaned up with tmux kill-session -t <name>.