This is a pedagogical demonstration applying Active Inference to morphogenesis — the process by which biological form develops. Friston, Levin and colleagues (2015) showed that morphogenesis can be understood through the Free Energy Principle: developing tissues maintain their organisation by minimising variational free energy, just as brains do. Here, every branch tip is an agent that holds a generative model — a prediction about where it should grow — and acts to minimise the discrepancy between that prediction and the world.
The tree doesn't just grow. It expects to grow, and then fulfils that expectation. This is what Hohwy (2016) calls self-evidencing: a system that gathers evidence for its own existence by acting on the world. All concepts and equations presented here are drawn from the Active Inference textbook by Parr, Pezzulo & Friston (2022).
Each branch tip maintains a generative model: a probabilistic description of how its observations are generated from hidden causes. For an above-ground tip, the model says: "I am a branch. I grow toward light. My parent constrains my angle. I will receive brightness proportional to my orientation."
For a root tip, the model says: "I am a root. I grow toward moisture. The soil offers nutrients in a downward gradient."
The model generates predictions about what sensory data the tip should receive. When actual observations differ from predictions, the tip experiences prediction error.
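As a concrete sketch (hypothetical code, not the demo's implementation), a tip's generative model might map a hidden growth angle to a predicted brightness, with the mismatch between predicted and sensed brightness giving the prediction error:

```python
import numpy as np

# Hypothetical branch-tip generative model. The angle convention and the
# cosine likelihood are assumptions for illustration only.
# Hidden state: growth angle theta (radians, 0 = straight up).
# Observation: brightness, highest when the tip points at the sun.

def predicted_brightness(theta, sun_angle=0.0):
    """Likelihood mean: brightness falls off as the tip turns from the sun."""
    return np.cos(theta - sun_angle)

prior_theta = 0.0              # the tip expects to point upward
observed = 0.6                 # brightness actually sensed at the tip
prediction = predicted_brightness(prior_theta)
error = observed - prediction  # prediction error drives perception and action
print(f"predicted {prediction:.2f}, observed {observed:.2f}, error {error:.2f}")
```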
The tip can resolve this error in two ways:
Perception — update beliefs about where it is and what's happening. Change your mind to fit the world.
Action — grow in a direction that makes observations match predictions. Change the world to fit your mind.
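The two routes can be sketched as two updates that shrink the same error: one moves the belief toward the observation, the other moves the world toward the prediction. The learning rate and starting values are illustrative, not the demo's actual parameters:

```python
# Two ways to shrink the same prediction error (illustrative sketch).
learning_rate = 0.1

def perceive(mu, y):
    """Perception: nudge the belief mu toward the observation y."""
    return mu + learning_rate * (y - mu)

def act(x, mu):
    """Action: nudge the world state x toward the prediction mu."""
    return x + learning_rate * (mu - x)

mu, x = 1.0, 0.2   # belief expects brightness ~1.0; world delivers 0.2
mu = perceive(mu, x)  # mind moves toward world
x = act(x, mu)        # world moves toward mind
print(mu, x)
```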
Precision (Π) is the inverse variance of prediction errors: how confident the system is in its predictions. Move the slider and watch:
High precision → the tree grows rigidly, exploiting its current model. Each branch is highly constrained by its parent's prediction. This is exploitation — the organism commits to what it already believes.
Low precision → the tree grows loosely, exploring widely. Branches splay outward. The model is uncertain, so alternatives are entertained. This is exploration.
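A minimal sketch of how precision shifts this balance: with a Gaussian prior and likelihood, the posterior mean is a precision-weighted average, so a high-precision prior barely yields to evidence while a low-precision prior lets the observation dominate. The numbers are illustrative:

```python
# Precision-weighted belief update (sketch). With precision pi_prior on the
# prior and pi_obs on the observation, the Gaussian posterior mean is a
# precision-weighted average of the two.

def posterior_mean(mu_prior, y, pi_prior, pi_obs):
    return (pi_prior * mu_prior + pi_obs * y) / (pi_prior + pi_obs)

# High prior precision: the tip barely deviates from its predicted angle
# (exploitation).
print(posterior_mean(0.0, 1.0, pi_prior=10.0, pi_obs=1.0))
# Low prior precision: the observation dominates and growth splays
# (exploration).
print(posterior_mean(0.0, 1.0, pi_prior=0.1, pi_obs=1.0))
```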
In the POMDP formulation (Chapter 4), the C-matrix encodes prior preferences over observations — what the agent wants to observe. For branches: light from above. For roots: moisture from below.
Move the slider: at high values, the tree grows strongly toward its preferred observations (phototropism above, hydrotropism below). At low values, preferences weaken and growth becomes isotropic.
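One way to sketch the slider's effect (the preference values and gain parameter here are made up for illustration): scaling log-preferences before a softmax makes growth sharply biased toward preferred observations at high gain and nearly uniform at low gain:

```python
import numpy as np

# Hypothetical prior preferences (C) over three coarse observations, in
# log-probability units. Positive = preferred, negative = avoided.
C = {"light_above": 2.0, "shade": -1.0, "dark": -3.0}

def preference_weights(C, gain):
    """Softmax over gain-scaled preferences: how strongly growth is biased."""
    v = np.array(list(C.values())) * gain
    p = np.exp(v - v.max())  # subtract max for numerical stability
    return dict(zip(C, p / p.sum()))

print(preference_weights(C, gain=2.0))  # strong phototropism
print(preference_weights(C, gain=0.1))  # nearly isotropic growth
```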
Above ground, you can see the sun — the source of preferred observations. Below ground, moisture gradients are rendered as blue diffusion. The tree's prior preferences align with these environmental features because evolution has shaped its generative model to expect these regularities.
The Epistemic ↔ Pragmatic slider controls two components of Expected Free Energy (G):
Epistemic value (information gain) — branches explore to reduce uncertainty about their environment. "Where might light be? Let me grow sideways to find out."
Pragmatic value (expected utility) — branches exploit known resources. "I know light is above. Let me grow straight up."
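These two components can be sketched for a tiny discrete model (the A matrix and C vector below are invented for illustration): expected free energy G trades expected information gain about hidden states against expected log-preference over observations, so minimising G balances exploration and exploitation:

```python
import numpy as np

# Expected free energy for one policy (sketch, discrete states/observations):
# G = -epistemic value - pragmatic value
#   = -E_Q(o)[ D_KL(Q(s|o) || Q(s)) ] - E_Q(o)[ C(o) ]

def expected_free_energy(Qs, A, C):
    """Qs: predicted states; A: P(o|s) likelihood; C: log-preferences over o."""
    Qo = A @ Qs                              # predicted observation distribution
    # Epistemic value = expected reduction in state entropy (mutual information).
    H_prior = -np.sum(Qs * np.log(Qs + 1e-16))
    post = (A * Qs) / (Qo[:, None] + 1e-16)  # Q(s|o) for each o, via Bayes
    H_post = -np.sum(post * np.log(post + 1e-16), axis=1)
    epistemic = H_prior - Qo @ H_post
    # Pragmatic value = expected log-preference over observations.
    pragmatic = Qo @ C
    return -epistemic - pragmatic

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])                   # two observations x two states
C = np.log(np.array([0.8, 0.2]))             # prefer observation 0 ("light")
print(expected_free_energy(np.array([0.5, 0.5]), A, C))
```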
Each branch tip has a Markov blanket — a statistical boundary separating internal states (its beliefs about growth direction) from external states (actual resource gradients). The blanket comprises sensory states (light, chemicals) and active states (growth direction, branching).
The ground line itself is a visible Markov blanket at a larger scale — it separates the above-ground system (branches responding to light) from the below-ground system (roots responding to moisture). Each domain has its own sensory world, its own preferred observations, and its own inference problem.
You may notice that the tree cannot grow infinitely deep. When you zoom out, distant branches simplify. When you zoom in, finer structure appears — but only where you look. The tree has limited computational resources. This is not a bug. It is a faithful demonstration of one of the deepest insights in Active Inference.
Exact Bayesian inference is intractable. The textbook is explicit: for any complex model, computing the exact posterior P(x|y) requires marginalising over all hidden states — a computation that grows exponentially with model complexity. No biological system can do this. No computer can do this for realistic models. Certainly this simulation cannot.
Active Inference's solution is variational inference: substitute the intractable exact posterior with an approximate posterior Q that is tractable to compute. The quality of this approximation is measured by the KL divergence between Q and the true posterior — and minimising variational free energy is precisely the process of making Q as good as possible given finite resources.
In our tree, you can see this approximation at work:
The 8,000 segment limit is the tree's finite computational budget — like a brain with a fixed number of neurons. It cannot represent every possible branch. It must allocate resources where they matter most.
Zoom-dependent depth is precision-weighted resource allocation. Just as the brain devotes more computational resources to attended stimuli (precision-weighting), the tree grows finer structure only where you observe it. Unattended regions remain coarse — like peripheral vision.
Off-screen pruning is complexity minimisation. Variational free energy decomposes as F = complexity − accuracy. Maintaining beliefs about regions that provide no observations is pure complexity cost with zero accuracy benefit. A good variational agent prunes them — exactly as the tree does.
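The F = complexity − accuracy trade-off can be checked numerically in a toy discrete model (all values here are illustrative): tightening Q onto the state the observation favours adds complexity relative to the prior, but gains more accuracy, so F falls:

```python
import numpy as np

# Variational free energy for a discrete model (sketch):
# F = D_KL(Q(s) || P(s)) - E_Q[ln P(o|s)] = complexity - accuracy.

def free_energy(Q, prior, lik_o):
    """Q: approximate posterior; prior: P(s); lik_o: P(o|s) for the observed o."""
    complexity = np.sum(Q * np.log(Q / prior))  # divergence from the prior
    accuracy = np.sum(Q * np.log(lik_o))        # expected log-likelihood
    return complexity - accuracy

prior = np.array([0.5, 0.5])
lik_o = np.array([0.9, 0.1])  # the observation strongly favours state 0
for q0 in (0.5, 0.9):         # tightening Q toward state 0 lowers F:
    Q = np.array([q0, 1 - q0])
    print(q0, free_energy(Q, prior, lik_o))
```

At Q = (0.9, 0.1), which is the exact posterior under this uniform prior, F bottoms out at −ln P(o) = ln 2, the negative log evidence.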
This connects to the textbook's treatment of bounded rationality (Chapter 10): a bounded agent must balance the costs of deliberation against the value of more accurate inference. The "free energy theory of bounded rationality" formalises this trade-off — accuracy improves with more computation, but computation has a complexity cost. The optimum is not infinite precision but the best approximation achievable with available resources.
So when you see the tree's limits — its finite depth, its attention-dependent detail, its pruning of the unseen — you are watching variational inference in action. The tree is not failing to be a perfect Bayesian. It is succeeding at being a bounded one.
The tree demonstrates hierarchical or deep Active Inference (Chapter 6). The trunk evolves slowly and provides context that constrains faster-timescale branches — exactly like sentences contextualising words, or goals contextualising subgoals.
When you zoom in, you increase the precision of observation. Finer structure resolves at deeper generations — just as attending more closely to a stimulus reveals hierarchical detail that was always latent in the generative model.
Notice: the deepest structure grows wherever you look first. Your act of zooming is itself Active Inference — you sample the world at higher precision to reduce uncertainty.
But each observation commits the system. The prediction error is resolved, the belief is updated, the growth is fixed. The branch architecture is the generative model — and it was shaped by your observation.
§ Active Inference Tutorial — The Self-Evidencing Tree

Concepts & equations: Parr, Pezzulo & Friston (2022) Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press.
Morphogenesis framing: Friston, Levin et al. (2015) "Knowing one's place: a free-energy approach to pattern regulation."
Self-evidencing: Hohwy (2016) "The self-evidencing brain."
Simulation: Alexander Sabine · Active Inference Institute Board of Directors
temporalgrammar.ai · Alexander@activeinference.institute