A complete graph Kₙ, where every agent connects to every other. Each node is an autonomous agent with its own generative model, accumulating evidence from sensory states until a precision-weighted threshold is reached and its beliefs update. Each edge is a Markov blanket: the statistical boundary separating one agent's internal states from another's, mediating the flow of prediction errors between them.
Agents run continuous internal cycles, updating beliefs about their environment. Blankets are bistable: they either pass or block information, flipping state when their own evidence threshold is reached. The blanket threshold is half the agent threshold, reflecting the blanket's role as a boundary rather than a model: boundaries need less evidence to reorganise because they carry more uncertainty (higher variance, lower precision).
The gold glow marks epistemic value: the moment just before a belief update when the system carries maximum information about the impending transition. For blankets this peaks at ~90% through the accumulation cycle; for agents, ~97.5%. This asymmetry reflects the blanket's greater flexibility (lower precision) compared to the agent's more rigid generative model.
In the free energy principle, a Markov blanket is the set of states that separates an agent's internal states from external states. In this network, each edge is a blanket: it connects exactly two agents and mediates the statistical dependencies between them.
The quadratic explosion: each new agent must form a blanket with every existing agent. The fourth agent doesn't add one blanket; it adds three. The tenth adds nine. The fiftieth adds forty-nine. Total blankets = n(n−1)/2. At K₃ blankets equal agents (3 = 3); after that, blankets dominate forever. This is the handshake problem: in a room of 50 people, there are 1,225 pairwise statistical boundaries.
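The counting above fits in a few lines. A minimal sketch, assuming nothing beyond the n(n−1)/2 formula; `blanket_count` is an illustrative helper, not part of the simulation:

```python
def blanket_count(n: int) -> int:
    """Edges of the complete graph K_n: one pairwise Markov blanket per pair."""
    return n * (n - 1) // 2

# Each new agent adds n - 1 blankets, one per existing agent.
for n in (3, 4, 10, 50):
    print(f"K_{n}: {blanket_count(n)} blankets ({n - 1} added by the newest agent)")
```

At n = 3 the counts coincide (3 agents, 3 blankets); every n beyond that tips the balance toward the blankets.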
Each blanket carries twice the variance (half the precision) of each agent. This follows from the blanket's role: it must be permeable to mediate exchange, while the agent must be precise to maintain a stable model. The blankets therefore don't just outnumber the agents; they carry more uncertainty per component.
Agent autonomy A_agent = 1/n: the fraction of total system variance carried by the agents' internal models. Collective autonomy A_coll = (n−1)/n: the fraction carried by the Markov blankets. These sum to 1.
At K₂: 50/50. At K₁₀: 90% collective. At K₂₀: 95% collective. Collective autonomy measures the degree to which the ensemble has become autonomous from its own agents. The blankets, not the agents, carry most of the system's uncertainty. Yet agent autonomy never reaches zero; every agent retains an irreducible internal model.
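The partition follows directly from the counts and the 2× variance ratio. A minimal sketch, assuming unit agent variance (function and variable names are illustrative):

```python
def autonomy_split(n: int, sigma2: float = 1.0) -> tuple[float, float]:
    """Fraction of total variance held by agents vs blankets in K_n."""
    agent_var = n * sigma2                          # n agents, sigma^2 each
    blanket_var = n * (n - 1) // 2 * (2 * sigma2)   # n(n-1)/2 blankets, 2*sigma^2 each
    total = agent_var + blanket_var
    return agent_var / total, blanket_var / total   # (A_agent, A_coll)

for n in (2, 10, 20):
    a_agent, a_coll = autonomy_split(n)
    print(f"K_{n}: agent {a_agent:.1%}, collective {a_coll:.1%}")
```

Working through n = 10: agents contribute 10σ², blankets 45 · 2σ² = 90σ², so the agents hold 10/100 = 1/n of the total, recovering the 90% collective figure.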
The Kuramoto order parameter Φ = |1/n · Σ exp(iθⱼ)|, where θⱼ is each agent's phase in its evidence-accumulation cycle. Φ = 1 when all agents update beliefs simultaneously. Φ ≈ 1/√n when they accumulate independently. Watch how Φ fluctuates at K₂ (phases drifting in and out of alignment), then settles near 1/√n at larger n.
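Φ can be computed directly from the definition. A short sketch in pure Python (phases in radians; the helper name is illustrative):

```python
import cmath
import math
import random

def order_parameter(phases: list[float]) -> float:
    """Kuramoto order parameter: Phi = |(1/n) * sum_j exp(i * theta_j)|."""
    return abs(sum(cmath.exp(1j * t) for t in phases)) / len(phases)

# All agents updating in lockstep: Phi = 1.
print(order_parameter([1.2] * 50))

# Independent accumulation: Phi fluctuates around 1/sqrt(n) (~0.14 for n = 50).
rng = random.Random(0)
print(order_parameter([rng.uniform(0, 2 * math.pi) for _ in range(50)]))
```

Two agents in exact antiphase give Φ = 0: their unit vectors cancel, which is why partial alignment, not mere activity, is what Φ measures.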
With ε = 0, agents accumulate evidence independently. Raise the coupling slider and each update — whether blanket or agent — delivers a prediction error of ε · σ² to connected agents. Crucially, agents near their update threshold are more susceptible: the kick is amplified by (1 + 3p²), where p is how far through the cycle the receiving agent is. An agent at 90% receives nearly 3.5× the kick of one at the start. This is critical slowing down: the closer a system is to transition, the more sensitive it becomes to perturbation.
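The susceptibility-weighted kick can be written out directly from the stated rule (a sketch; function names are illustrative, not the simulation's API):

```python
def susceptibility(p: float) -> float:
    """Gain (1 + 3p^2) for a receiver a fraction p (0..1) through its cycle."""
    return 1.0 + 3.0 * p * p

def prediction_error_kick(epsilon: float, sigma2: float, p: float) -> float:
    """Kick delivered to a connected agent: base eps * sigma^2, scaled by gain."""
    return epsilon * sigma2 * susceptibility(p)

print(susceptibility(0.0))   # 1.0: agent at the start of its cycle
print(susceptibility(0.9))   # ~3.43: nearly 3.5x, as in the text
```

The gain of 1 + 3 · 0.9² ≈ 3.43 at p = 0.9 is what makes agents poised at threshold the natural seeds of a cascade.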
The result is cascades. One agent updates, kicking its neighbours. Those nearest their own thresholds get pushed over, triggering their updates, which kick their neighbours. At high ε, watch for waves of red sweeping through the network. This is generalised synchrony: no conductor, no shared clock, just Markov blankets transmitting prediction errors with susceptibility-weighted gain.
The evidence thresholds, variance parameters, autonomy formulae, and epistemic value function are derived from information geometry: specifically, the Cramér-Rao bound on the statistical manifold of the agents' generative models. The agent's update threshold (2π) and the blanket's update threshold (π) follow from the rotational and reflective symmetry of their respective state spaces. The variance ratio (blanket = 2× agent) follows from the same symmetry classification. These are not free parameters.
ACTIVE INFERENCE INSTITUTE · NETWORK DYNAMICS · 2025
Beyond ~20 agents, the network crosses a perceptual threshold. Individual blankets become invisible; the canvas becomes a mesh. This is not a rendering limitation. It is what actually happens to the agents: at K₅₀ there are 1,225 Markov blankets and no agent can attend to each one. The system's free energy landscape is dominated by the collective, not by any particular pairwise boundary.
Blankets scale as n(n−1)/2. At K₂₀ there are 190. At K₅₀ there are 1,225. At K₁₅₀ there are 11,175. The ratio of blankets to agents is (n−1)/2, which means K₁₅₀ has 74.5 blankets per agent. Each agent is embedded in 149 simultaneous statistical boundaries. No generative model can track this. The ensemble manages itself.
This is why organisations develop norms, cultures, and institutions: they are compression mechanisms for a blanket space too complex for any agent to model. A shared language is a set of blanket states made portable across the network. A norm is a blanket pattern that updates and reconstitutes faster than any agent can register. An institution is a hierarchical decomposition of the blanket space into tractable sub-networks.
Robin Dunbar proposed ~150 as the cognitive limit for stable social relationships. In this framework, at K₁₅₀, agent autonomy is 1/150 ≈ 0.7%. The collective carries 99.3% of the system's variance. Beyond this, the blankets are so dominant that adding more agents barely changes the collective's dynamics; you need hierarchical structure (sub-networks, departments, clans) to maintain any meaningful agent contribution.
Dunbar's limit, on this account, is not about memory capacity. It is about the ratio of agent to blanket variance reaching a floor below which agents cannot sustain meaningful individuality within the ensemble. This interpretation is consistent with, but not derivable from, the free energy principle alone; it requires the specific variance assignments that follow from the symmetry classification of agent and blanket state spaces.
With ε > 0 at large n, watch for synchronisation cascades. Each update — blanket or agent — delivers a prediction error to connected agents, amplified by the receiver's susceptibility: agents near their own threshold receive up to 3.5× the base kick. At K₁₅₀, each agent has 149 blankets. When several blankets update in quick succession, the cumulative prediction error — amplified by susceptibility — pushes the agent past threshold. Its belief update propagates outward through all 149 of its blankets, perturbing other agents near their thresholds, and so on. The cascade appears as a wave of red sweeping across the network.
This is a phase transition in the Kuramoto sense: below a critical coupling, agents drift independently. Above it, the network locks into collective oscillation. The critical coupling decreases with n; larger ensembles synchronise more easily. This follows from the mean-field structure of the complete graph: each agent receives prediction errors from n−1 blankets, so the effective coupling scales linearly with network size.
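The qualitative picture, incoherence below a critical coupling and collective locking above it, can be reproduced with a textbook mean-field Kuramoto sketch. This is the standard model, not the simulation's threshold-and-kick dynamics; the Gaussian spread of natural frequencies and all parameter values here are assumptions for illustration:

```python
import math
import random

def kuramoto_phi(n: int, coupling: float, steps: int = 2000,
                 dt: float = 0.05, seed: int = 1) -> float:
    """Integrate mean-field Kuramoto dynamics; return the final order parameter."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(1.0, 0.3) for _ in range(n)]   # cycle-rate spread (assumed)
    for _ in range(steps):
        mx = sum(math.cos(t) for t in theta) / n
        my = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(mx, my), math.atan2(my, mx)
        # d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)  (mean-field form)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    mx = sum(math.cos(t) for t in theta) / n
    my = sum(math.sin(t) for t in theta) / n
    return math.hypot(mx, my)

print(kuramoto_phi(100, coupling=0.0))   # incoherent: Phi ~ 1/sqrt(n)
print(kuramoto_phi(100, coupling=2.0))   # above threshold: Phi approaches 1
```

The mean-field form is exactly why the complete graph matters: each oscillator couples to the population average, so adding agents strengthens the effective pull rather than diluting it.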
The simulation uses information-geometric methods to derive the update thresholds, variance parameters, and coupling strengths from the symmetry class of each component's state space. Agents have rotational (continuous) symmetry; blankets have reflective (bistable) symmetry. The Cramér-Rao bound on each symmetry class determines the evidence threshold for belief updating. The Kuramoto order parameter Φ is standard. The autonomy partition 1/n follows from the variance ratio and the quadratic scaling of blankets. These quantities are computed, not fitted.