
# RLM Pattern Examples

Practical example prompts for every RLM mode. Copy, adapt the file path, and paste into Claude Code.

Prerequisite: The swarm plugin must be installed and agent teams enabled. See Getting Started.


| Mode | When to Use | What Happens |
|---|---|---|
| Basic RLM | One large log or text file | Line-range chunks, general analysts |
| Content-Aware: Source Code | One large source file | Function-boundary chunks, code analysts |
| Content-Aware: CSV/TSV | One large data file | Header-preserving chunks, data analysts |
| Content-Aware: JSON/JSONL | One large JSON file | Schema-aware chunks, JSON analysts |
| JSONL Log Analysis | JSONL log files | Schema discovery, tailored jq recipes, JSON analysts |
| Directory Analysis | Multiple files, same type | Per-file partitioning, single analyst type |
| Multi-Type Directory | Mixed file types in a directory | Mixed analysts, two-phase synthesis |

## Basic RLM

Analyze a large log file or text document that exceeds context limits. The simplest RLM mode — line-range chunks with the general-purpose analyzer.

```
Analyze the application log at /var/log/app/production.log for error patterns
and recurring failures. Use the RLM pattern to process it in parallel.
```

```
Analyze the compliance document at docs/soc2-audit-report.txt for gaps,
inconsistencies, and areas needing remediation. Use the RLM pattern.
```

See How RLM processes logs and prose for what happens internally.
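As a rough mental model, line-range chunking can be sketched in a few lines of Python. This is an illustration only — the chunk size here is an assumption for the example, not the plugin's actual parameter:

```python
def line_range_chunks(path, lines_per_chunk=2000):
    """Split a text file into (start_line, end_line, text) chunks."""
    with open(path, encoding="utf-8", errors="replace") as f:
        lines = f.readlines()
    chunks = []
    for start in range(0, len(lines), lines_per_chunk):
        end = min(start + lines_per_chunk, len(lines))
        # 1-based inclusive ranges, so analysts can cite "lines 2001-4000"
        chunks.append((start + 1, end, "".join(lines[start:end])))
    return chunks
```

Each analyst then receives one chunk plus its line range, so findings in the final synthesis can point back to exact locations in the original file.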


## Content-Aware: Source Code

Analyze a large source file with function/class boundary awareness. Chunks respect code structure — no splitting mid-function.

```
Perform a security audit of src/services/payment_processor.py using the
RLM pattern. Focus on injection vulnerabilities, authentication bypass,
and unsafe operations.
```

```
Review the architecture of src/core/engine.ts using the RLM pattern.
Focus on coupling, SOLID principles, and dependency patterns.
```

```
Analyze lib/data_pipeline.rb for code quality issues using the RLM pattern.
Look for complexity, dead code, and anti-patterns.
```

See How RLM processes source code for what happens internally. See Analysis Focus Options for steering analyst priorities.
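For intuition, boundary-aware chunking of Python source can be sketched with the standard `ast` module. The `max_lines` budget is an illustrative assumption; the plugin's real chunker is not shown here:

```python
import ast

def function_boundary_chunks(source: str, max_lines: int = 400):
    """Group top-level defs/classes into chunks, never splitting one mid-body."""
    tree = ast.parse(source)
    lines = source.splitlines()
    # Line spans of top-level units (end_lineno requires Python 3.8+)
    spans = [(node.lineno, node.end_lineno) for node in tree.body]
    chunks, current, current_len = [], [], 0
    for start, end in spans:
        unit = "\n".join(lines[start - 1:end])
        # Flush the chunk when the next whole unit would exceed the budget
        if current and current_len + (end - start + 1) > max_lines:
            chunks.append("\n".join(current))
            current, current_len = [], 0
        current.append(unit)
        current_len += end - start + 1
    if current:
        chunks.append("\n".join(current))
    return chunks
```

The key property is that an oversized unit gets its own chunk rather than being cut in half, so every analyst sees complete functions and classes.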


## Content-Aware: CSV/TSV

Analyze a large data file with header-preserving chunks. Every chunk includes the original header row so analysts understand column semantics.

```
Analyze the customer export at data/customers-2025.csv using the RLM pattern.
Report distributions by region and plan type, identify outliers in MRR,
and flag data quality issues.
```

```
Analyze the Jira export at exports/support-tickets.csv using the RLM pattern.
Identify top issue categories, recurring error patterns, and resolution
time statistics.
```

See How RLM processes CSV/TSV for what happens internally. See What Data Analysts Report for the standard output format.
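Header preservation can be illustrated with a short Python sketch — the chunk size is assumed for the example, not taken from the plugin:

```python
import csv
import io

def header_preserving_chunks(path, rows_per_chunk=1000):
    """Yield CSV text chunks that each repeat the original header row."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    for i in range(0, len(rows), rows_per_chunk):
        buf = io.StringIO()
        writer = csv.writer(buf, lineterminator="\n")
        writer.writerow(header)  # every chunk carries the column names
        writer.writerows(rows[i:i + rows_per_chunk])
        yield buf.getvalue()
```

Without the repeated header, an analyst reading rows 50,001-51,000 would have no idea which column is `region` and which is `mrr`.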


## Content-Aware: JSON/JSONL

Analyze large JSON documents or JSONL streams with schema awareness. JSON arrays are split into valid sub-arrays; JSONL is split by line count.

```
Analyze the event log at data/events.jsonl using the RLM pattern.
Report event type distributions, identify schema inconsistencies,
and flag any anomalous patterns.
```

```
Analyze the feature flags configuration at config/flags.json using the
RLM pattern. Check for stale flags, conflicting rules, and schema
consistency across entries.
```

See How RLM processes JSON/JSONL for what happens internally.
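The splitting rule can be sketched as follows. This is a simplified illustration — the real chunker's thresholds and streaming behavior are not shown:

```python
import json

def json_chunks(text: str, items_per_chunk: int = 500):
    """Split a JSON array into valid sub-arrays, or JSONL by line count."""
    if text.lstrip().startswith("["):
        # JSON array document: each chunk is itself a parseable array
        items = json.loads(text)
        return [json.dumps(items[i:i + items_per_chunk])
                for i in range(0, len(items), items_per_chunk)]
    # JSONL: one JSON object per line, so splitting on line
    # boundaries never produces a truncated record
    lines = [ln for ln in text.splitlines() if ln.strip()]
    return ["\n".join(lines[i:i + items_per_chunk])
            for i in range(0, len(lines), items_per_chunk)]
```

Either way, every chunk handed to an analyst is independently parseable, which is the property that makes parallel analysis safe.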


## JSONL Log Analysis

Analyze large JSONL log files with automated schema discovery and tailored jq recipes. This is a specialization of JSON/JSONL RLM — it auto-discovers the log schema, classifies fields (timestamp, level, error, etc.), and generates extraction recipes before spawning analysts.

Skill reference: See skills/jsonl-log-analyzer/SKILL.md for the full procedure.

```
Analyze the application logs at /var/log/app/events.jsonl for error patterns.
Use the JSONL log analyzer skill. I need to understand:
- What types of errors are most frequent?
- Are there temporal spikes?
- Which services are generating the most errors?
```

```
Use the JSONL log analyzer to analyze the API gateway log at
data/gateway-access.jsonl. Report on:
- Request volume by endpoint and status code
- P50/P95 latency patterns over time
- Any anomalous traffic patterns or suspicious request bursts
```

```
Investigate the production incident using logs at /tmp/incident-2026-02-25.jsonl.
Use the JSONL log analyzer skill to:
- Build a timeline of events leading to the outage
- Trace affected request IDs across services
- Identify the root cause service and error type
```

See How JSONL Log Analysis works for what happens internally. See Standard vs JSONL Log Analyzer for when to use each mode.
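Schema discovery of this kind can be approximated in a few lines of Python. The field-name heuristics below are assumptions for illustration, not the skill's actual classification rules:

```python
import json
from collections import Counter

# Heuristic field-name sets (assumed for this sketch)
TIME_KEYS = {"timestamp", "time", "ts", "@timestamp"}
LEVEL_KEYS = {"level", "severity", "lvl"}

def discover_schema(lines, sample=1000):
    """Sample JSONL records; report field frequency and a rough role per field."""
    field_counts, classified = Counter(), {}
    for ln in lines[:sample]:
        field_counts.update(json.loads(ln).keys())
    for field in field_counts:
        if field.lower() in TIME_KEYS:
            classified[field] = "timestamp"
        elif field.lower() in LEVEL_KEYS:
            classified[field] = "level"
        else:
            classified[field] = "other"
    return field_counts, classified
```

Once the timestamp and level fields are known, extraction recipes (for example, a jq filter selecting error-level records in a time window) can be generated up front and handed to every analyst, instead of each analyst rediscovering the schema.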


## Directory Analysis

Analyze a directory where all files are the same type. Simpler than multi-type — uses one analyst type with per-file partitioning.

```
Analyze all Python files in src/mypackage/ using the RLM pattern.
Review for code quality and security issues.
```

```
Analyze all CSV files in data/exports/ using the RLM pattern.
Report data quality issues, distributions, and cross-file inconsistencies.
```

See How RLM processes directories for what happens internally.
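Per-file partitioning can be sketched like this — the group size and the recursive glob are illustrative assumptions, not the plugin's actual policy:

```python
from pathlib import Path

def partition_directory(root, pattern="*.py", files_per_analyst=3):
    """Assign same-type files to analysts in contiguous groups."""
    files = sorted(Path(root).rglob(pattern))  # deterministic ordering
    return [files[i:i + files_per_analyst]
            for i in range(0, len(files), files_per_analyst)]
```

Because every file in the directory is the same type, a single analyst prompt works for all groups; only the file list differs per analyst.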


## Multi-Type Directory

The most powerful mode. Analyze a directory containing mixed file types — source code, data files, JSON configs, documentation — in a single session with type-specific analysts and two-phase synthesis.

```
Analyze the project directory at ./src/ using the RLM pattern.
The directory contains Python source, JSON configs, and CSV test fixtures.
Review for code quality, configuration issues, and data integrity.
Correlate findings across file types.
```

```
Use the RLM pattern to analyze the microservice at services/user-service/.
It contains Java source files, application.yml configs, and SQL migration files.
Review for security vulnerabilities, configuration drift, and architectural concerns.
```

```
Analyze the data pipeline directory at etl/ using the RLM pattern.
It contains Python ETL scripts, CSV source data, JSONL event streams,
and shell scripts. Focus on data quality issues, transformation correctness,
and error handling.
```

See How RLM processes directories for what happens internally. See Cross-File Analysis for what multi-type analysis catches that single-file cannot.


## Tips

The more specific your analysis request, the more targeted the findings:

| Vague | Specific |
|---|---|
| "Analyze this file" | "Find security vulnerabilities in authentication flows" |
| "Review this directory" | "Check for N+1 queries, missing error handling, and SQL injection" |
| "Look at the data" | "Report customer churn patterns by region and identify MRR anomalies" |

| File Size | Recommendation |
|---|---|
| < 1500 lines | No RLM needed — Claude handles it directly |
| 1500-5000 lines | RLM useful — partitions based on content-type chunk targets |
| 5000-50000 lines | RLM recommended — partitions scale with file size |
| 50000+ lines | RLM essential — partitions scale with file size |

| Directory Profile | Recommendation |
|---|---|
| 1-3 small files | No RLM needed |
| 3-10 mixed files | Multi-file RLM useful |
| 10-20 files with large ones | Multi-file RLM recommended |
| 20+ files | Filter with include/exclude globs to focus on key files |

RLM auto-detects content types. You don’t need to tell Claude what kind of file it is — but you can if auto-detection might be ambiguous (e.g., a .txt file that’s actually CSV data):

```
Analyze data/export.txt using the RLM pattern. Note: this file contains
CSV data with a header row despite the .txt extension.
```

Use natural language to filter what gets analyzed:

```
Analyze src/ using the RLM pattern, but skip test files and migrations.
Focus on the core business logic.
```