Large Result Offloading

Demand-Driven Context Management for Tool-Augmented Language Models

This is an open-access academic paper exploring Large Result Offloading (LRO), a demand-driven approach to context management in tool-augmented language models. When tool calls return datasets too large for the context window, LRO offloads the results to external storage and provides structured summaries with extraction recipes, letting models retrieve exactly what they need on demand.
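The offload-summarize-retrieve loop described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's normative protocol: the storage dict, the size threshold, and the recipe fields are all hypothetical names chosen for the example.

```python
import json
import uuid

OFFLOAD_THRESHOLD = 4_000  # bytes; hypothetical per-result context budget
offload_store: dict[str, list[dict]] = {}  # stand-in for external storage


def handle_tool_result(rows: list[dict]) -> dict:
    """Return the raw result if it fits; otherwise offload it and
    return a structured summary plus an extraction recipe."""
    payload = json.dumps(rows)
    if len(payload) <= OFFLOAD_THRESHOLD:
        return {"type": "inline", "rows": rows}
    handle = str(uuid.uuid4())
    offload_store[handle] = rows
    return {
        "type": "offloaded",
        "handle": handle,
        "summary": {
            "row_count": len(rows),
            "fields": sorted(rows[0].keys()) if rows else [],
        },
        # Recipe: tells the model how to pull back just what it needs.
        "recipe": {"op": "slice", "params": ["offset", "limit"]},
    }


def extract(handle: str, offset: int, limit: int) -> list[dict]:
    """Demand-driven retrieval: fetch only the requested slice."""
    return offload_store[handle][offset : offset + limit]
```

In this sketch the model never sees the full dataset; it sees the summary and recipe, then issues targeted `extract` calls for exactly the slices it needs.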

This paper is published as a living document. We welcome peer review, feedback, and academic collaboration.

Read the Paper

The full paper covering motivation, system design, protocol specification, and evaluation. Read now

Specification

The formal specification covering system model, protocol, data structures, and conformance requirements. View specification

Contributing

How to provide feedback, report issues, and collaborate on this research. How to contribute

Source Code

View the source and revision history on GitHub. View on GitHub

This paper is actively seeking peer review. If you have expertise in language model architectures, context management, tool-augmented AI systems, or related fields, we welcome your feedback through GitHub Issues or GitHub Discussions.