Model parameters
Common ground · x: 30%
Shared context lowers the signal burden — higher x shifts the R(D,x) curve downward in Panel II and reduces rate in Panel III.
Stakes · σ (precision demand): 20%
Higher stakes reduce tolerated distortion D → operating point moves left on the R(D) curve → higher required rate and costlier repair events.
Conversation turn · t: 20
Position on the compression timeline. Amber marker moves in Panel III; Panel II now uses the effective common ground accumulated by this turn.
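The two parameter effects described above can be sketched with a toy rate–distortion function. Note that `rd_rate`, its logarithmic form, and the 0.5 weighting below are all invented for this illustration; they are not the model the visualization actually uses.

```python
import math

def rd_rate(distortion: float, common_ground: float) -> float:
    """Toy normalized rate-distortion curve.

    Rate rises as tolerated distortion D falls (the classic R(D) shape),
    and the whole curve shifts downward as common ground x grows, since
    shared context lowers the signal burden. The log form and the 0.5
    weighting are illustrative choices, not fitted values.
    """
    d = min(max(distortion, 1e-6), 1.0)  # clamp D to (0, 1]
    base = -math.log(d)                  # required rate grows as D -> 0
    return base * (1.0 - 0.5 * common_ground)

# Higher stakes -> lower tolerated D -> operating point moves left -> higher rate.
assert rd_rate(distortion=0.1, common_ground=0.3) > rd_rate(distortion=0.5, common_ground=0.3)

# More common ground shifts the whole curve downward at the same D.
assert rd_rate(0.3, common_ground=0.6) < rd_rate(0.3, common_ground=0.3)
```

The two assertions mirror the two slider effects: the stakes slider moves the operating point along the curve, while the common-ground slider moves the curve itself.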
Panel I (empirical): Cross-linguistic clustering around ~39 b/s. Legend: 39 b/s reference · language (n = 17, illustrative scatter) · other IR levels.
Panel II (interpretive): Normalized rate–distortion surface R(D, x). Legend: R(D, x = current) · other x levels · operating point (stakes → D).
Panel III (interpretive): Collaborative compression; normalized rate derived from Panel II, with repair emergent from overload. Legend: normalized rate (from rdRate(D, x) + variation) · common ground · repair (emergent when compression exceeds recoverability).
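The "repair emergent from overload" idea can be read as a threshold rule: a repair event fires whenever the rate demanded by the current operating point exceeds what the listener can recover, with common ground raising that recoverability budget. A minimal sketch, assuming a simple linear budget; `needs_repair` and its constants are invented for illustration:

```python
def needs_repair(rate: float, common_ground: float, budget: float = 1.0) -> bool:
    """Repair fires when compression exceeds recoverability.

    The listener's effective budget grows with common ground: more shared
    context lets more aggressive compression be recovered without repair.
    The linear form and the default budget of 1.0 are illustrative.
    """
    recoverable = budget * (1.0 + common_ground)
    return rate > recoverable

# Compression that stays within the recoverability budget: no repair.
assert not needs_repair(rate=0.9, common_ground=0.3)

# Over-compression relative to the shared context: repair event fires.
assert needs_repair(rate=1.5, common_ground=0.3)
```

This is why repair is "emergent" in Panel III: it is not scheduled, it simply appears wherever the compressed rate crosses the recoverability line.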
~39 bits per second
The cross-linguistic clustering value from Coupé et al. (2019). The published SD is ~5.1 b/s — this is a soft attractor, not a precise universal constant.
3× interacting constraints
Physical (timing, motor rhythm), parsing (memory, prediction), and grounding (common ground, repair, stakes) — mutually coupled, not a feedforward pipeline.
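One way to make "mutually coupled, not a feedforward pipeline" concrete is a fixed-point view: each constraint keeps adjusting the operating rate, and the equilibrium is the rate at which all three agree, not the output of applying them once in sequence. A minimal sketch; the cap values, the soft-attractor update, and `coupled_rate` itself are all invented for illustration:

```python
def coupled_rate(physical_cap: float = 45.0, parsing_cap: float = 42.0,
                 grounding_pull: float = 39.0, steps: int = 100,
                 lr: float = 0.2) -> float:
    """Iterate a rate toward mutual consistency among three constraints:
    hard ceilings from physical timing and parsing, and a soft pull
    toward the grounding attractor. The equilibrium emerges from the
    interaction, not from a one-way chain of corrections.
    """
    rate = 60.0  # deliberately start above every constraint
    for _ in range(steps):
        rate = min(rate, physical_cap, parsing_cap)  # hard ceilings
        rate += lr * (grounding_pull - rate)         # soft attractor
    return rate

# The equilibrium settles near the grounding attractor, under both caps.
r = coupled_rate()
assert r <= 42.0 and abs(r - 39.0) < 1.0
```

Reordering or re-weighting the constraints moves the equilibrium, which is the point of the "coupled, not pipeline" framing: no single stage owns the final rate.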
Regime, not rate: the central reframing
Whether the invariant is best modelled as bits/s, surprisal per unit time, or a chunking budget remains an open research question, as do the dynamics of F (context updating).