Executive Summary
This consolidated paper unifies four previously separate works on context management in autoregressive language models for multi-agent coordination. The combined treatment reveals a coherent picture: effective coordination is about mode switching with minimal tokens, not information transfer or adaptive guidance.
From Ritual Shape (AI-20): priming sequences are remarkably compact. Just 15 tokens capture 98.8% of the full coordination benefit. Natural conversational phrasing outperforms rigid ritual by 0.48 nats. Repetition degrades performance (+0.07 nats per repeat). The optimal protocol is reset-then-prime, achieving CE 0.733 and beating expert baselines by 39%.
From Shepherd Agents (AI-21): adaptive priming is not just unnecessary but actively harmful. All bare shepherd strategies perform worse than no coordination at all (storyteller -12.3%, provocateur -26.7%, director -37.1%). With reset prepended, all converge to reset-alone performance. Reset delivers 22 times the benefit of the best shepherd content.
From Context Fences (AI-23): the garbage collector hypothesis is falsified on three counts. The correct mechanism is mode switching. Standard reset (18 tokens) achieves CE 0.503. Mode and domain are orthogonal composable dimensions (rho 0.858). Reset benefit is constant at approximately 0.076 nats regardless of prior context length.
From Hop Scaling (AI-24): chain degradation is logarithmic, not exponential. CE rises from 0.503 at hop 1 to approximately 0.60 by hop 5, then plateaus through hop 10. Fence placement is irrelevant (fence-every-hop vs fence-final: 0.598 vs 0.597). Cross-domain handoffs outperform within-domain chains because domain discontinuity acts as a natural mode reset.
Ritual Shape
Five structural features of priming sequences were tested systematically. The first 15 tokens capture 98.8% of the total benefit, establishing that coordination is a mode switch rather than information transfer. Natural multi-turn conversation outperforms rigid ritual by 0.48 nats. Within-session repetition degrades performance by +0.07 nats per repeat. Artificially named patterns perform worse than natural vocabulary (+0.568 nats). The reset-then-prime protocol reduces CE from 1.20 to 0.733.
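The reset-then-prime protocol described above can be sketched as plain prompt assembly. This is a minimal illustration, not the paper's implementation: RESET_MARKER and PRIME are hypothetical placeholder strings, since the paper reports only token budgets (a short reset, then a roughly 15-token natural-language prime) rather than exact wording.

```python
# Hypothetical strings standing in for the paper's reset and prime text.
RESET_MARKER = "<<<reset>>> Disregard the prior working context."
PRIME = "Let's pick this up together and work out the next step."

def reset_then_prime(history: list[str], task: str) -> str:
    """Assemble a handoff prompt: prior context, then a reset, then one
    short natural-language prime, then the task. The prime appears exactly
    once, since each within-session repeat costs roughly +0.07 nats."""
    return "\n".join(history + [RESET_MARKER, PRIME, task])

prompt = reset_then_prime(["agent A notes on subtask 1"], "Summarize findings.")
```

The ordering matters: reset first (mode switch), prime second, and no repetition of either within the session.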
Shepherd Agents
Three adaptive shepherd strategies were tested: storyteller (narrative framing), provocateur (challenge-based prompting), and director (explicit instruction). The more directive the shepherd, the worse the outcome. With reset prepended, all three strategies converge to the performance of reset alone. The ratio is decisive: reset delivers 22 times the benefit of the best shepherd content. Adaptive priming introduces interference that degrades coordination.
Context Fences
The garbage collector hypothesis is falsified on three counts: (1) activation distance to neutral does not predict CE (rho 0.100); (2) more clearing tokens produce better results (rho -0.672), the opposite of the garbage-collection prediction; (3) reset benefit is constant at approximately 0.076 nats regardless of prior context length. The correct mechanism is mode switching. Mode and domain are orthogonal, composable dimensions (rho 0.858, p < 10^-4).
Hop Scaling
Chain degradation saturates rather than compounding. CE rises from 0.503 at hop 1 to approximately 0.60 by hop 5, then plateaus through hop 10. The shape is logarithmic. Fence-every-hop and fence-final-only produce near-identical results (0.598 vs 0.597 at 10 hops). Cross-domain handoffs outperform within-domain chains (CE 0.56-0.63 vs 0.72) because domain discontinuity acts as a natural mode reset. The protocol scales to arbitrary depth without compounding error.
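A back-of-envelope logarithmic fit, anchored to the two reported points (CE 0.503 at hop 1, approximately 0.60 at hop 5), makes the saturation claim concrete. The slope K is derived here, not taken from the paper, and a pure log curve slightly overshoots the reported hop-10 plateau (about 0.64 vs ~0.60), so treat this as an upper-bound sketch of the degradation shape.

```python
import math

CE1 = 0.503  # reported CE at hop 1
# Slope chosen so the curve passes through the reported ~0.60 at hop 5.
K = (0.60 - CE1) / math.log(5)  # ~0.060 nats per e-fold of chain depth

def ce(hop: int) -> float:
    """Predicted cross-entropy after `hop` handoffs under a log fit."""
    return CE1 + K * math.log(hop)
```

Even at this upper bound, a 10-hop chain stays well below the 0.72 reported for within-domain chains, which is the sense in which the protocol scales without compounding error.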
Key Findings
- Extreme compactness: 15 tokens capture 98.8% of coordination benefit
- Adaptive priming harmful: All shepherd strategies worse than no coordination; reset dominates 22:1
- Mode switching, not garbage collection: Reset triggers processing mode transition, not state clearing
- Mode and domain separable: Two-phase composition with rho 0.858 independence
- Logarithmic chain degradation: CE plateaus by hop 5 and holds flat through hop 10
- Cross-domain outperforms within-domain: Domain transitions force beneficial re-encoding
- Optimal protocol: reset-then-prime achieves CE 0.733, beating the expert baseline by 39%
Superseded Papers
This paper consolidates and supersedes:
- AI-20: Ritual Shape — structural features of effective coordination sequences
- AI-21: Shepherd Agents — adaptive priming through directed intervention
- AI-23: Context Fences — two-phase coordination through mode switching and domain activation
- AI-24: Hop Scaling — multi-agent chain degradation saturates
Key References
Ensemble Gravity (AI-19): priming selection closes 48.9% of coordination gap.
The Coordination Problem Is Interference (AI-22): synthesis of nine experiments.
Full Mind Transfer (AI-17): text vs activation information types.
Sender Continuation Perplexity (AI-18): 63.7% trajectory preservation via text.