Orchestration Patterns
Seven proven patterns for structuring agent teams. Choose based on your task’s coordination needs.
For a quick comparison table, see Pattern Quick Reference. For how each pattern works internally, see Concepts.
Parallel Specialists
Multiple specialists work simultaneously, each with a different focus.
When to use: Code reviews, audits, multi-perspective analysis. Tasks are independent and don’t share data.
Example prompt:
Create a team to review PR #42 with three specialists:
- Security reviewer for vulnerabilities
- Code reviewer for bugs and performance
- Architecture reviewer for design concerns
Have each send findings to team-lead, then synthesize.

Key detail: Spawn all specialists in a single burst. Each works independently, so no task dependencies are needed.
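The coordination shape can be sketched in plain Python. The reviewer functions below are hypothetical stand-ins for spawned teammates, not real agent APIs; the point is the single parallel burst followed by one synthesis step:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the three specialist teammates. In a real team,
# each would be a spawned agent reviewing the same PR independently.
def security_review(pr):
    return f"security findings for {pr}"

def code_review(pr):
    return f"code findings for {pr}"

def architecture_review(pr):
    return f"architecture findings for {pr}"

def review_pr(pr):
    specialists = [security_review, code_review, architecture_review]
    # Spawn all specialists in a single burst: no dependencies between them.
    with ThreadPoolExecutor(max_workers=len(specialists)) as pool:
        futures = [pool.submit(s, pr) for s in specialists]
        findings = [f.result() for f in futures]
    # The lead synthesizes once every specialist has reported.
    return "\n".join(findings)
```

The synthesis step only runs after every future has resolved, which mirrors "have each send findings to team-lead, then synthesize."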
Pipeline
Each stage depends on the previous. Work flows linearly through phases.
When to use: Feature development, multi-phase workflows where each step builds on the last.
Example prompt:
Create a pipeline team for OAuth2:
1. Research best practices (adr:adr-researcher)
2. Create implementation plan (Plan agent)
3. Implement (general-purpose)
4. Write tests (general-purpose)
5. Final security review (sdlc:security-reviewer)
Each stage should wait for the previous to complete.

Key detail: Use TaskUpdate with addBlockedBy to create the dependency chain. The system auto-unblocks tasks when dependencies complete.
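A minimal sketch of the blocked-by bookkeeping this creates. The task names and the runnable/complete helpers are illustrative only; the real system tracks this state itself:

```python
# Each task records what it is blocked by. Completing a task implicitly
# unblocks its dependents on the next availability check.
tasks = {
    "research":        {"blocked_by": set(),          "done": False},
    "plan":            {"blocked_by": {"research"},   "done": False},
    "implement":       {"blocked_by": {"plan"},       "done": False},
    "test":            {"blocked_by": {"implement"},  "done": False},
    "security-review": {"blocked_by": {"test"},       "done": False},
}

def runnable(tasks):
    """A task is runnable once everything it is blocked by has completed."""
    return [name for name, t in tasks.items()
            if not t["done"] and all(tasks[b]["done"] for b in t["blocked_by"])]

def complete(tasks, name):
    tasks[name]["done"] = True  # dependents auto-unblock on the next check
```

At any moment exactly one stage is runnable, which is what makes this a pipeline rather than a swarm.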
Swarm
Workers grab available tasks from a shared pool. Self-organizing, naturally load-balancing.
When to use: Many similar, independent tasks like file reviews, migrations, test writing.
Example prompt:
Create a swarm team to review these 10 files for security issues.
Spawn 3 workers that each grab the next available file, review it,
and move on until all files are done.

Tips:
- 3 workers is a good starting point; add more for large task pools
- Workers should check TaskList() after completing each task
- Each task should be self-contained (one file, one module, one endpoint)
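The grab-next-available loop can be sketched with a shared queue. The worker body below is a stand-in for an actual file review; the shape to notice is that no one assigns files to workers:

```python
import queue
import threading

def run_swarm(files, n_workers=3):
    """Self-organizing swarm sketch: workers pull the next available file
    from a shared pool until the pool is empty, then shut down."""
    pool = queue.Queue()
    for f in files:
        pool.put(f)

    reviewed = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                f = pool.get_nowait()  # grab the next available task
            except queue.Empty:
                return                 # nothing left: this worker is done
            with lock:
                reviewed.append(f)     # stands in for "review file f"

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return reviewed
```

Because each worker pulls work instead of being assigned it, the pool is naturally load-balanced: a worker that finishes a quick file simply grabs another.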
Research + Implementation
Research first, then implement using findings. Clean phase separation.
When to use: Implementation that benefits from prior research. No team needed — uses plain subagents.
Example prompt:
First, research caching best practices for our API using an adr:adr-researcher agent.
Then use the findings to implement caching in the user controller with a general-purpose agent.

Key detail: This pattern uses sequential subagents, not a full team. The research result flows directly into the implementation prompt.
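Stripped to its essence, this pattern is sequential function composition, with phase 1's output embedded in phase 2's prompt. Both functions below are hypothetical stand-ins for the subagents:

```python
def research(topic):
    # Stand-in for the read-only adr:adr-researcher subagent.
    return f"findings about {topic}"

def implement(prompt):
    # Stand-in for the general-purpose subagent.
    return f"implemented per: {prompt}"

def research_then_implement(topic, target):
    findings = research(topic)  # phase 1 completes fully before phase 2 starts
    # The research result flows directly into the implementation prompt.
    prompt = f"Using these findings: {findings}. Add caching to {target}."
    return implement(prompt)
```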
Plan Approval
Require a teammate to plan before implementing. The lead reviews and approves or rejects.
When to use: Database migrations, security-sensitive changes, architectural decisions — any high-risk work.
Example prompt:
Spawn an architect teammate in plan mode to design the database migration.
Don't let them implement until I've approved the plan.
Only approve plans that include rollback procedures and data validation.

Key detail: Use the mode: "plan" parameter when spawning to enforce plan approval. The lead can set approval criteria in its prompt.
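The lead's approval gate amounts to a predicate over the submitted plan. A minimal sketch, assuming the criteria from the prompt above (the required section names are illustrative, and a real lead evaluates the plan's substance, not just keywords):

```python
# The lead's approval criteria, as stated in its prompt.
REQUIRED_SECTIONS = ("rollback", "data validation")

def review_plan(plan_text):
    """Reject any plan missing a required section; otherwise approve,
    which is what lets the teammate proceed to implementation."""
    missing = [s for s in REQUIRED_SECTIONS if s not in plan_text.lower()]
    if missing:
        return ("rejected", missing)  # teammate must revise and resubmit
    return ("approved", [])
```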
Multi-File Refactoring
Coordinated changes across multiple files with fan-in dependencies.
When to use: Refactoring that spans models, controllers, and tests. Each file can be changed independently, but integration testing must wait for all changes.
Example prompt:
Create a team to refactor the auth module:
- Worker 1: Refactor User model (task #1)
- Worker 2: Refactor Session controller (task #2)
- Worker 3: Update all specs (task #3, blocked by #1 and #2)
Workers 1 and 2 can work in parallel. Worker 3 waits for both to finish.

Key detail: Fan-in dependencies ensure the test worker doesn't start until all code changes are complete. Use TaskUpdate({ taskId: "3", addBlockedBy: ["1", "2"] }).
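Fan-in reduces to "all blockers completed." A one-function sketch (the task IDs mirror the example above; the real system performs this check for you):

```python
def can_start(task_id, completed, blocked_by):
    """A task may start only when every task it is blocked by is in the
    completed set. Tasks with no blockers can start immediately."""
    return blocked_by.get(task_id, set()) <= set(completed)

# Fan-in from the example: task 3 is blocked by both 1 and 2.
blocked_by = {"3": {"1", "2"}}
```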
RLM (Recursive Language Model)
Divide large files into partitions, analyze each with parallel analyst agents, then synthesize. Supports content-aware chunking (code, CSV, JSON, logs, prose) and multi-file directory analysis.
When to use: Large log analysis, data exports, full-codebase review, CSV processing — any content > ~1500 lines. Also: directory analysis with mixed content types needing cross-file insights.
Example prompt (single file):
Analyze this 8000-line production log for error patterns.
Partition it into 8 chunks. Spawn analyst agents to review
each partition in parallel. Each analyst reports: error types,
frequency counts, temporal patterns, and outliers.
Synthesize all reports into a consolidated analysis.

Example prompt (multi-file directory):
Use the multi-file RLM pattern to analyze the src/ directory.
Detect content types per file, partition by type-specific strategies,
spawn mixed analyst types, and produce a cross-file synthesis.

Key details:
- Automatic content-type detection (extension mapping + content sniffing)
- Type-specific partitioning preserves semantic boundaries (functions, CSV headers, valid JSON)
- Content-type-specific analysts: code, data, JSON, general-purpose
- For multi-file directories: small files batched by type, two-phase synthesis (per-type then cross-type), findings written to task descriptions to protect Team Lead context
- See swarm:rlm-pattern for full documentation
Do NOT override analyst models. Leave model unset — Haiku is correct for structured analysis.
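For intuition, here is a naive line-based partitioner. The real pattern uses content-aware strategies that preserve semantic boundaries (functions, CSV headers, valid JSON), so treat this purely as a sketch of the divide step:

```python
def partition_lines(text, n_chunks):
    """Split a large document into n roughly equal chunks at line
    boundaries. Each chunk would go to one analyst agent."""
    lines = text.splitlines()
    size = -(-len(lines) // n_chunks)  # ceiling division: lines per chunk
    return ["\n".join(lines[i:i + size]) for i in range(0, len(lines), size)]
```

An 8000-line log partitioned into 8 chunks yields 1000 lines per analyst, small enough for each to analyze its slice in full detail before the synthesis pass.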
Choosing Agents for Teams
When spawning teammates, pick the agent type that matches the task:
// Research phase — read-only agent is sufficient
Task({
  team_name: "my-team",
  name: "researcher",
  subagent_type: "adr:adr-researcher",
  prompt: "Research OAuth2 best practices...",
  run_in_background: true
})

// Implementation phase — needs full tool access
Task({
  team_name: "my-team",
  name: "implementer",
  subagent_type: "general-purpose",
  prompt: "Implement OAuth2 authentication...",
  run_in_background: true
})

Tips:
- Use model: "haiku" for fast, cheap Explore agents
- Use general-purpose when the agent needs to edit files
- Use specialized review agents for focused audits
- Never assign implementation work to read-only agents (Explore, Plan)
For the full agent selection guide, see Agent Types.
Best Practices
Pick the right pattern
- Independent tasks, same type -> Swarm
- Independent tasks, different focus -> Parallel Specialists
- Sequential phases -> Pipeline
- Learn then build -> Research + Implementation
- Risky changes -> Plan Approval
- Cross-file changes with integration step -> Multi-File Refactoring
- Large document analysis -> RLM
Avoid file conflicts
Two teammates editing the same file leads to overwrites. Break work so each teammate owns a different set of files.
Size tasks well
Section titled “Size tasks well”- Too small: coordination overhead exceeds benefit
- Too large: teammates work too long without check-ins
- Just right: self-contained units producing a clear deliverable
Always clean up
Shut down all teammates before calling TeamDelete(). Orphaned tmux sessions can be cleaned with tmux kill-session -t <name>.