Using SPM for Dynamic Causal Modelling of COVID-19

Recently, Karl Friston et al. released the paper ‘Dynamic causal modelling of COVID-19’, which applies the variational Bayesian methods they normally use in neuroscience to the coronavirus pandemic, using the country-by-country coronavirus data collected by Johns Hopkins University. See also the video here.

This was reported on in the Guardian newspaper, and there was a subsequent UnHerd LockdownTV interview.

Friston is a member of the ‘Independent SAGE’ committee – an alternative to the UK government’s  ‘Scientific Advisory Group for Emergencies’ (SAGE) committee, making open recommendations to the government: report here (PDF) with a video of the press conference.

The modelling in the paper uses the SPM (Statistical Parametric Mapping) software originally created by Friston for brain imaging analysis (of fMRI, PET, SPECT, EEG and MEG), which was subsequently made freely available at https://www.fil.ion.ucl.ac.uk/spm (the Functional Imaging Laboratory at the Wellcome Centre for Human Neuroimaging, University College London). It is a toolbox for MATLAB.

The COVID19 modelling is now included within SPM as one of the many demos. Anyone can run this modelling, recreating the figures presented in the paper.

The demo is within the spm12\toolbox\DEM (Dynamic Expectation Maximization) directory. DEM is an extension of the Expectation-Maximization algorithm (referring back to previous posts, both the K-means and Gaussian Mixture Model machine learning methods are based around the Expectation-Maximization algorithm). See ‘DEM: A variational treatment of dynamic systems’ for more information on DEM algorithms. The directory contains many other example demos concerning Variational Bayesian methods and the Free Energy Principle.

Here I:

  • Describe how to re-create the results of the COVID-19 and other demos.
  • Provide summaries of many of the demos, just extracted from banner comments of the demo scripts.
  • Show many of the figures produced from the demos.

It provides a reference overview. It:

  • showcases SPM for dynamical modelling (i.e. beyond its original purpose of analysing brain imaging data).
  • includes the birdsong, mountain car problem and oculomotor examples described in so many of Friston’s Free Energy Principle papers.
  • shows the wealth of additional biological scenarios that the Free Energy Principle has been applied to.

But it doesn’t capture the animations and sounds produced when the scripts are actually run.

 

Installation Options

Since SPM is a toolbox for MATLAB, we need to be able to run MATLAB scripts (programs). There are three options here:

  1. Using Mathworks’ MATLAB with the SPM12 library. This is the preferred choice, but MATLAB requires a licence, which may preclude it.
  2. Octave with the SPM12 library. Octave is an open source equivalent to MATLAB. See SPM/Octave wikibook page for more info.
  3. Standalone SPM, which is built using the MATLAB Compiler; building the standalone executable needs a MATLAB licence (see limitations). See the SPM/Standalone wikibook page for more info.

Unfortunately:

  • Octave doesn’t completely work with SPM yet. But access to the Matlab scripts gives you a starting point to debug.
  • I haven’t got the Standalone SPM working yet (watch this space for when my Matlab licence expires). Additionally, the COVID19 demo is not yet included in the Standalone deliverable.

The latest version is SPM12. Access to older versions of SPM may still be useful, e.g. SPM8. I could only find the source of some fundamental scripts elsewhere; e.g. the core DEM script spm_ADEM is on neurodebian’s GitHub.
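
Once installed, a quick sanity check from the MATLAB prompt confirms that SPM12 and the DEM toolbox are on the path (the paths below are from my setup – adjust to wherever spm12 was unzipped):

>> addpath 'C:\Matlab_SPM\spm12'
>> addpath 'C:\Matlab_SPM\spm12\toolbox\DEM'
>> spm('Ver')        % should report SPM12
>> which DEM_COVID   % should resolve to ...\toolbox\DEM\DEM_COVID.m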

Installing Standalone SPM

Watch this space! (But I think you will need a Matlab licence to create the executable. Perhaps there is an executable to download from somewhere.)

Glossary

For the descriptions of the non-COVID demo examples, here is a glossary covering much of the biological terminology and some of the more advanced maths/statistics. Standard terminology within the Free Energy Principle (e.g. Markov blanket, Kullback-Leibler divergence) is not included.

  • active inference: use of an internal generative model to predict incoming sensory data and improving the fit between this model and data by acting on the world which then changes the sensory data such that it is more consistent with the model.
  • adiabatic: transfers energy to the surroundings only as work, not as heat.
  • affordance: action possibilities of the environment
  • allostatic: the process of maintaining homeostasis through the adaptive change of the organism’s internal environment
  • attractor: see Lorenz attractor
  • autovitiate: self-damage
  • eigenmodes, eigenvalues, etc.: in linear algebra and geometry, the characteristic directions and scaling factors of a linear transformation
  • EKF: Extended Kalman filter – a non-linear Kalman filter
  • empirical Bayes: statistical inference in which the prior distribution is estimated from the data (unlike standard Bayesian methods).
  • entrainment: the process of making something have the same pattern or rhythm as something else
  • epistemic foraging: searching for information
  • ergodic: having the same statistical behaviour averaged over time as averaged over the system’s entire possible state space
  • Event-related potential (ERP): a brain signal response to sensory input.
  • exogenous: outside of the body of an organism
  • ethological: concerning the study of animal behaviour
  • F statistics: statistics following the F-distribution, a family of continuous probability distributions arising as the ratio of two scaled chi-squared variates; used to compare variances (e.g. in ANOVA and GLM tests).
  • flexion: increased bending of a limb, in contrast to ‘extension’ being the straightening.
  • Fokker Planck equation: in statistical mechanics, a differential equation describing the time evolution of a probability density function.
  • Generalised Linear Model (GLM): in statistics, generalizing linear regression to cover non-Normal probability distributions.
  • Gabor patch: a sinusoidal grating of light and dark bars within a Gaussian envelope, presented at some orientation; a standard stimulus in visual experiments
  • haemodynamic: concerning blood flow / circulation.
  • heteroclinic orbit: a path in phase space which joins two different equilibrium points
  • infomax principle: optimization principle for artificial neural networks that maximizes the average Shannon mutual information between input and output
  • Jacobian: in vector calculus, a matrix of partial derivatives
  • Kalman filter: calculates Bayesian joint probability distribution estimates of the true values from a noisy series of measurements over time.
  • Lagrangian: in mathematical optimization, a function used to solve constrained minimization problems. Used in the calculus of variations.
  • Laplace assumption: assuming a Gaussian (posterior) distribution
  • Lorenz attractor: the chaotic (‘strange’) attractor of the Lorenz system of three coupled nonlinear differential equations; an attractor in general is a set of states in state space towards which a dynamical system tends to evolve
  • Lotka-Volterra dynamics: (also known as the predator–prey equations): involving nonlinear differential equations that are frequently used to describe the dynamics of biological systems in which two species interact
  • Lyapunov function: a function that may be used to prove the stability of an equilibrium of an ordinary differential equation.
  • Markov Decision Process (MDP): a discrete-time stochastic (partly-random) control process for optimizing actions of an agent in an environment.
  • Mismatch negativity (MMN): the component of a brain signal (ERP) elicited by something atypical in an input sequence
  • nosology: the study of the classification of diseases.
  • oculomotor (spelled ‘occulomotor’ in the demo script names): concerning movement of the eye
  • occlusion: hiding an object from view in order to study how people use vision to anticipate the best course of action
  • Ornstein-Uhlenbeck process: a stochastic Gaussian Markov process like a random walk in continuous time in which there is an attraction back towards a central location.
  • parametric empirical Bayes: empirical Bayes in which the likelihood and prior are defined in simple parametric forms e.g. Normal distribution (mean and variance).
  • parcellation scheme: defining distinct partitions in the brain.
  • perseveration: pathological persistent repetition of word, gesture, or act.
  • principle of minimum redundancy: Horace Barlow’s ‘efficient coding hypothesis’ that aims to minimize Information Theory ‘redundancy’ (‘wasted space’ in transmission).
  • proprioceptive: of the awareness of the position of the various parts of the agent’s body.
  • pursuit: (slower) smooth eye movement
  • P300: a brain signal response (ERP) reflecting a person’s reaction to (decision about) a stimulus, rather than the stimulus itself.
  • Restricted Maximum Likelihood (ReML): in statistics, a form of maximum likelihood estimation in which the likelihood function is calculated from a transformed set of data to produce unbiased estimates of variance.
  • saccades: rapid eye movements
  • salience: attentional mechanism, for items to stand out
  • signal detection theory: ability to differentiate between stimulus and noise.
  • SSE: sum of squared error
  • syrinx: the vocal organ of birds
  • Wiener process: Brownian motion

List of Demos

Below are descriptions of many of the DEM demos (‘DEM’ as in ‘Dynamic Expectation Maximization’, not as in ‘DEMo’) in SPM12 and a capture of much of the graphical output from them. The list below gives the order in which they are presented, with an attempt made to group them logically. To run a demo, just type its name at the MATLAB prompt (each demo is a .m script), e.g.:

>> addpath 'C:\Matlab_SPM\spm12\toolbox\DEM'

>> DEM_COVID
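
Each of the demo summaries below is taken from the script’s banner comment; you can read the same summary (or the full source) for any demo directly from the MATLAB prompt:

>> help DEM_COVID    % prints the banner comment for any demo
>> type DEM_COVID    % lists the full script source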

The demos are…

COVID-19 Simulations:

  • DEM_COVID

Vision: Occlusion, Pursuit and Saccades:

  • ADEM_occlusion (Slow pursuit and occlusion under active inference)
  • ADEM_occulomotor_delays (under active inference)
  • ADEM_pursuit (Slow pursuit under active inference)
  • ADEM_salience (Saccadic eye movements under active inference)
  • ADEM_visual
  • DEM_demo_Gabor (generative model predicts Gabor patch moving in visual field)

Motor:

  • ADEM_motor
  • ADEM_reaching
  • ADEM_writing (prediction errors to movement trajectories)
  • ADEM_cued_response (under active inference)
  • DEM_demo_Posner (Posner paradigm – speed-accuracy trade-offs)

The Mountain Car Problem:

  • ADEM_mountaincar_loss
  • ADEM_cost
  • ADEM_learning (Value learning using the mountain car problem)

Bird Song:

  • DEM_demo_song_inference (Perceptual categorisation of bird songs)
  • DEM_demo_song_omission (deconvolving dynamic changes in its control parameters)
  • DEM_demo_song_priors (simulating local field potential)
  • DEM_demo_SOC (self organised criticality)
  • DEM_demo_MMN (simulate chirps and mismatch negativity)

The Free Energy Principle:

  • FEP_fluctuations (the Bayesian perspective)
  • FEP_Manifold (emergence of life in terms of active inference)
  • FEP_MB_demo (hierarchical decomposition of Markov blankets)
  • FEP_physics (different perspectives: quantum, statistical and classical mechanics)
  • FEP_self_entropy (self organisation in terms of the entropy of blanket states)
  • DEM_morphogenesis (self-assembly under active inference)
  • DEMO_Lindley_paradox (Bayes factors and classical p-values)

Markov Decision Processes:

  • DEM_MDP_decision (agent moves from undecided state to definitive choice)
  • DEMO_MDP_questions (2-level simulation of reading with epistemic foraging)
  • DEMO_MDP_voice (simulating reading for deep temporal generative models)
  • DEMO_MDP_maze (continuous and discrete state space modelling)
  • MDP_DEM_Mixed_Models_Movement (examination of the biceps reflex)

Dynamic Causal Modelling:

  • DEMO_DCM_MB (Markov blankets)
  • DEMO_DCM_PEB (group DCM for electrophysiology)
  • DEMO_BMR_PEB (Empirical Bayes and Bayesian model reduction)

Prognosis and Prediction:

  • DEM_demo_ontology (generative model for computational nosology)

Miscellaneous / Basic:

  • KLDemo (Illustration of information gains with Bayesian fusion)
  • ADEM_lorenz (learning the parameters of a Lorenz attractor)
  • DEMO_BAYES_FACTORS (Bayes factors and classical p-values)
  • DEM_FEP_Least_Action
  • DEM_self_entropy (self organisation in terms of entropy of blanket states)
  • DEM_path_integrals (approximations to path integrals – Fokker Planck)
  • DEM_demo_GF_and_KF (generalised and Kalman filtering)
  • DEM_demo_OU (linear deconvolution – not actually Ornstein-Uhlenbeck process!)
  • DEM_demo_GLM (comparing DEM and ReML under a general linear model)
  • DFP_demo_double_well (comparing Variational filtering with particle filtering)

COVID-19 Simulations

DEM_COVID

The simulation requires COVID-19 data from Johns Hopkins University CSSE (Center for Systems Science and Engineering). Go to https://github.com/CSSEGISandData/COVID-19. If you have a GitHub account, you can use ‘Clone or download’ to download the whole repository as COVID-19-master.zip. But you only actually need three files within https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series:

  • the confirmed cases time series (.csv)
  • the deaths time series (.csv)
  • the recovered cases time series (.csv)

Set up an account if you don’t have one! Alternatively (or if you don’t want to download the whole repository), go to each of the specific files, right-click on ‘Raw’ and choose ‘Save as’ (keeping the .csv file extension). Simply (and dirtily) save the files in the spm12/toolbox/DEM directory.
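
With those files in place (and SPM12 on the MATLAB path), a minimal run is:

>> cd(fullfile(spm('Dir'), 'toolbox', 'DEM'))   % DEM_COVID expects the .csv files alongside the demo scripts
>> DEM_COVID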

DEM_COVID generates all of the figures in the COVID-19 paper (with minor differences).

DEM_COVID3456

Figures 3, 4, 5 and 6

DEM_COVID78910

Figures 7, 8, 9 and 10

DEM_COVID111213

Figures 11, 12, 13 and an additional figure

Vision: Occlusion, Pursuit and Saccades

ADEM_occlusion (Slow pursuit and occlusion under active inference)

This demo illustrates slow pursuit in the context of visual occlusion. We start with a simulation of canonical slow pursuit of a visual target with sine wave motion. Crucially, the generative model is equipped with a simple empirical prior encoding the hidden motion of the target (using a linear oscillator, whose frequency is determined by a hidden cause).  We then examine failures of tracking and anticipation during occlusion and when the target re-emerges from behind the occluder. We look at a simulation in which the precision of the oscillator dynamics modelling long-term behaviour of the target is reduced (cf., neuromodulatory deficits in cortical areas encoding biological motion). This has an effect of producing a failure of pursuit, resulting in a catch-up saccade on target reappearance. The suppression of prior precision can however have beneficial effects when motion is itself unpredicted (as shown with differential pursuit performance under a reversal of the trajectory towards the end of motion). Finally, we look at how prior beliefs are acquired during exposure to the target – in terms of cumulative inference on the hidden causes encoding the frequency of periodic motion. This can be regarded as a high order form of evidence accumulation. Importantly, this (experience-dependent) inference is markedly impaired by the simulated lesion to precision above. In other words, a single failure of inference in modelling the motion of hidden states can have secondary consequences – such as a failure to even register and remember regularities. All these simulations are based upon active inference; with the prior belief that the centre of gaze is attracted to the same point responsible for target motion.

ADEM_occlusion1

ADEM_occlusion2

ADEM_occulomotor_delays (under active inference)

This demo illustrates oculomotor following and slow pursuit. The focus here is on oculomotor delays and their compensation in generalised coordinates of motion. This is illustrated using a ‘sweep’ of motion to examine the effects of motor delays, sensory delays and their interaction under active inference. We then move on to oculomotor following of sine wave motion, in which the trajectory entrains following under compensated dynamics. This entrainment can be destroyed by rectifying the sine wave, which calls for a more realistic (hierarchical) model of motion that registers its phase and anticipates the next onset of motion (cf. movement behind an occluder). These simulations depend delicately on delays and precisions (gains) that are chosen to make oculomotor following under uncertainty relatively slow. The dependency on visual uncertainty (contrast) is illustrated by changing the precision of the generalised motion of the sensory target.

ADEM_occulomotor_delays1

ADEM_occulomotor_delays2

ADEM_pursuit (Slow pursuit under active inference)

This demo illustrates slow pursuit eye movements under active inference.  Its focus is on frames of references and the entrainment of gaze-direction by the motion of a visual target. The generative process (and model) is based upon the itinerant trajectory of a target (in Cartesian coordinates) produced by Lotka-Volterra dynamics. The agent expects its sampling (in polar coordinates) to be centred on the target. Here, the agent is equipped with a model of the trajectory and the oculomotor plant. This means it represents both the location of the target and the mapping from target location (in relation to a fixation point) to egocentric polar coordinates. We simulate behavioural (saccadic) and electrophysiological (ERP) responses to expected and unexpected changes in the direction of a target moving on the unit circle. The agent expects the target to reverse its direction during the trajectory but when this reversal is omitted (and the target persists in a clockwise direction) violation responses are emitted.

ADEM_pursuit

ADEM_salience (Saccadic eye movements under active inference)

This demo illustrates exploration or visual search in terms of optimality principles based on straightforward ergodic or allostatic principles. In other words, to maintain the constancy of our external milieu, it is sufficient to expose ourselves to predicted and predictable stimuli. Being able to predict what is currently seen also enables us to predict fictive sensations that we will experience from another viewpoint. This provides a principled way in which to explore and sample the world for example with visual searches using saccadic eye movements. These theoretical considerations are remarkably consistent with a number of compelling heuristics; most notably the Infomax principle or the principle of minimum redundancy, signal detection theory and recent formulations of salience in terms of Bayesian surprise. The example here uses saliency (the posterior precision associated with fictive sampling of sensory data) to simulate saccadic eye movements under active inference.

ADEM_salience

ADEM_visual

DEM demo for active inference (i.e. action-perception optimisation of free energy). This simulation calls on spm_ADEM to simulate visual sampling of the world and demonstrate retinal stabilisation or visual tracking. We simulate a simple 2-D plaid stimulus and subject it to exogenous perturbations. By employing tight and broad priors on the location of the stimulus, we can show that action does and does not (respectively) explain away the visual consequences of the perturbation (i.e., the movement is seen or not). This illustrates how one can reframe stabilisation or tracking in terms of sampling sensory input to ensure conditional expectations are met; and how these expectations are shaped by prior expectations.

ADEM_visual

DEM_demo_Gabor (generative model predicts Gabor patch moving in visual field)

State-space demo routine simulating position invariant representations in the visual system.  The generative model predicts a one-dimensional Gabor patch that moves in a (one-dimensional) visual field. The inversion of this dynamic model can be viewed as deconvolving spatial and category attributes from a moving stimulus (or selective re-sampling of the input) to recover the stimulus that can be represented. The prediction shown in the lower panels had position information removed.

DEM_demo_Gabor

Motor

ADEM_motor

This demo illustrates how action can fulfil prior expectations by explaining away sensory prediction errors prescribed by a desired movement trajectory. It is based on the same linear convolution model of the motor plant considered in the visual tracking example. Here, we induce prediction errors not through exogenous perturbation of sensory input but through tight priors encoding a desired or expected trajectory. We then show how the movement is robust to changes in the true motor dynamics and other exogenous perturbations, late in movement execution.

ADEM_motor

ADEM_reaching

This demo illustrates how action can fulfil prior expectations by explaining away sensory prediction errors prescribed by desired movement trajectories. In this example a two-joint arm is trained to touch a target so that spontaneous reaching occurs after training.

ADEM_reaching

ADEM_writing (prediction errors to movement trajectories)

This demo illustrates how action can fulfill prior expectations by explaining away sensory prediction errors prescribed by desired movement trajectories. In this example a two-joint arm follows a stable heteroclinic channel, prescribed by a set of fixed-point attractors. The locations of the successive (autovitiated) attractors are defined by parameters. The ensuing trajectories are illustrated here in terms of synthetic writing.

ADEM_writing

ADEM_cued_response (under active inference)

This demo illustrates cued sequential movements. It uses active inference under a hierarchical generative model of sequential cues and consequent movements. The agent has a (second level) model of (two) contingencies or contexts; these correspond to the sequential appearance of targets in a clockwise direction. The other context has no sequential aspect. The first level model is contextually modulated to produce the appropriate sequence of (location-specific) affordances, which predict both visual and proprioceptive consequences. This is sufficient to engender cued reaching movements, which are slightly anticipatory if the agent infers the correct probabilistic context. However, if we reverse the order of the stimuli there is an accuracy and reaction time cost, due to the fact that the sequence is unpredictable. Furthermore, there is a set switching cost as the hidden states at the second (contextual) level are inferred. This provides a simple but very rich model of cued reaching movements and set switching that is consistent with notions of salience and affordance. Furthermore, we can simulate Parkinsonism by reducing the precision of affordance-based cues. These are the visual attributes that confer saliency on the current target. Reducing this precision (cf. dopamine) delays and can even preclude set switching, with associated costs in pointing accuracy. By completely removing the precision of the salience or affordance cues, we obtain autonomous behaviour that is prescribed by the itinerant expectations of the agent. This can be regarded as perseveration in a pathological setting or the emission of autonomous behaviour in the absence of any precise sensory information.

ADEM_cued_response1

ADEM_cued_response2

DEM_demo_Posner (Posner paradigm – speed-accuracy trade-offs)

This demonstration routine simulates the Posner paradigm to show that some of the characteristic speed-accuracy trade-offs associated with valid and invalid cueing can be explained easily in terms of optimizing precisions during hierarchical inference. This demonstration uses generalised filtering and a state-space model that includes state-dependent noise. Here, this dependency is used to set the attentional gain or bias using a cue, which modulates the prediction errors induced by subsequent targets. The phenomena that emerge from this scheme include a competition for attentional resources; given that only one context can exist at any time and this probabilistic context is encoded by state-dependent precisions on the causes of sensory input. Attended stimuli have greater precision and greater penetration of their prediction errors in the hierarchy. We will also see characteristic differences in perceptual inference under valid and invalid cues. This is illustrated using simulated psychophysical and electrophysiological responses. Biased competition is simulated by presenting both valid and invalid targets simultaneously.

DEM_demo_Posner

The Mountain Car Problem

The Mountain Car Problem is a classic test example in reinforcement learning: an underpowered car must learn to reverse up the opposite slope to build up momentum (potential energy) before accelerating forwards in order to get up the steep slope.
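
For orientation, the classic reinforcement-learning formulation of the problem (a textbook sketch, not the continuous equations of motion used by the ADEM demos) is just a position/velocity pair updated under gravity and a weak engine:

% Classic mountain car dynamics - illustrative only, not SPM's generative model.
x = -0.5; v = 0;                          % start in the valley, at rest
for t = 1:1000
    a = sign(v);                          % 'energy pumping' policy: push in the direction of motion
    v = v + 0.001*a - 0.0025*cos(3*x);    % weak engine force minus gravity along the slope
    v = max(min(v, 0.07), -0.07);         % velocity limits of the classic task
    x = x + v;
    if x < -1.2, x = -1.2; v = 0; end     % inelastic left wall
    if x >= 0.5, break, end               % goal reached at the top of the right-hand hill
end
fprintf('reached the goal after %d steps\n', t);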

ADEM_learning (Value learning using the mountain car problem)

Value learning demo using the mountain car problem. This demo questions the need for reinforcement learning and related paradigms from machine-learning, when trying to optimise the behaviour of an agent.  We show that it is fairly simple to teach an agent complicated and adaptive behaviours under the free energy principle.  This principle suggests that agents adjust their internal states and sampling of the environment to minimize their free energy.  In this context, free energy represents a bound on the probability of being in a particular state, given the nature of the agent, or more specifically the model of the environment an agent entails.  We show that such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. The result is a behavioural policy that reproduces exactly the policies that are optimised by reinforcement learning and dynamic programming. Critically, at no point do we need to invoke the notion of reward, value or utility.

ADEM_learning

ADEM_mountaincar_loss

This demo re-visits the mountain car problem to show that adaptive (desired) behaviour can be prescribed in terms of loss-functions (i.e. reward functions of state-space). It exploits the fact that under the free energy formulation, loss is divergence. This means that priors can be used to make certain parts of state-space costly (i.e. with high divergence) and others rewarding (low divergence). Active inference under these priors will lead to sampling of low-cost states and (apparent) attractiveness of those states.

ADEM_mountaincar_loss

ADEM_cost

This demo re-visits the mountain car problem to show that adaptive (desired) behaviour can be prescribed in terms of loss-functions (i.e. reward functions of state-space). It exploits the fact that under the free energy formulation, loss is divergence. This means that priors can be used to make certain parts of state-space costly (i.e. with high divergence) and others rewarding (low divergence). Active inference under these priors will lead to sampling of low-cost states and (apparent) attractiveness of those states.

ADEM_cost

Bird Song

DEM_demo_song_inference (Perceptual categorisation of bird songs)

Perceptual categorisation of bird songs: The generative model of birdsong used in this simulation comprises a Lorenz attractor with two control parameters (or hidden causes), which, in turn, delivers two control parameters to a synthetic syrinx to produce ‘chirps’ that are modulated in amplitude and frequency. The chirps were then presented as a stimulus to a synthetic bird to see if it could infer the underlying causal states and thereby categorise the song. This entails minimising free energy by changing the internal representation of the control parameters. Each simulated song comprises a series of chirps whose frequency and number fall progressively from song a to song c, as a causal state (known as the Rayleigh number) is decreased. The simulations show that the causes are identified after about 600 milliseconds with high conditional precision (90% confidence intervals are shown in grey). These simulations illustrate the nature of perceptual categorisation under generalised predictive coding: here, recognition corresponds to mapping from a continuously changing and chaotic sensory input to a fixed point in perceptual space.

The various bird songs can be played by right clicking on the sonogram images, after the routine has completed.

DEM_demo_song_inference

DEM_demo_song_omission (deconvolving dynamic changes in its control parameters)

Demo for bird song: In this example, we show that DEM can not only estimate the hidden states of an autonomous system but can also deconvolve dynamic changes in its control parameters. We illustrate this using a slow Lorenz attractor to drive a fast one; both showing deterministic chaos. We endow the simulations with a little ethological validity by using the states of the fast Lorenz attractor as control variables in the syrinx of a song bird (usually these would control a van der Pol oscillator model). We will look at the true and inferred songs with and without the last chirps missing. The sonograms displayed can be played by a mouse click on the image. Subsequent plots show simulated event-related potentials, demonstrating that there is a marked response (prediction error) from the system when an expected ‘syllable’ is omitted. This demonstrates the implicit sequence-decoding of input streams, using generative models based upon attractors. Having simulated normal omission-related responses, we then reduce the precision at the second level (on both hidden causes and states) and repeat the simulation. The result is an attenuation of the omission-related response or mismatch negativity. If we try to compensate by reducing the sensory precision, then the autonomous dynamics predicting the sequence of chirps supervenes, producing false inference. This can be thought of as a – crude – model of hallucinosis.

DEM_demo_song_omission

DEM_demo_song_priors (simulating local field potential)

Demo for bird song: In this example, we simulate local field potentials using the prediction error from the song-bird example above. We look at these responses under natural stimuli and after removing the second level of the hierarchy to show it is necessary for veridical perception. We then repeat the simulation but omit dynamical priors by forsaking generalised coordinates.

DEM_demo_song_priors

DEM_demo_SOC (self organised criticality)

Demo for bird song: this routine illustrates self organised criticality in terms of stimulus induced bifurcations and weak synchronisation of recognition (neuronal) dynamics. It uses the birdsong example, where stimuli are generated using a Lorenz attractor and modelled with the same attractor, with state dependent parameters. These control parameters are categorised in terms of a softmax function of point attractors in a (two-dimensional) perceptual space. We examine the self organised criticality in terms of Lyapunov exponents and the free energy – as a function of precision of the motion of hidden states.

DEM_demo_SOC

DEM_demo_MMN (simulate chirps and mismatch negativity)

This Demo uses the linear convolution model of previous examples to simulate chirps (c.f., the bird song demos). By presenting a train of chirps and changing the stimulus after a couple of presentations, we can simulate a roving oddball paradigm used in ERP research. Critically, we hope to see a more exuberant response to the first presentation of a novel chirp (oddball) relative to the same stimulus after learning (standard).  The simulation shows that although veridical percepts obtain from variational de-convolution, the prediction error continues to fall with repetition (as the parameters are optimised). This repetition suppression subtends a mismatch response that has many of the characteristics of the well-known mismatch negativity (MMN).

The Free Energy Principle

FEP_fluctuations (the Bayesian perspective)

This demonstration uses an ensemble of particles with intrinsic (Lorenz attractor) dynamics and (Newtonian) short-range coupling.  The focus of this routine is to unpack the Bayesian perspective. We first simulate dynamics to nonequilibrium steady-state, identify the Markov blanket and then examine the encoding of external states by internal states; in terms of their expected values.

The crucial aspect of this implicit inference (and the basis of the free energy principle) is the existence of a conditional synchronisation manifold, when conditioning internal and external states on the Markov blanket. This provides the basis for a mapping between internal and external states that can be interpreted in terms of a probabilistic representation or inference.

This Bayesian perspective is illustrated in terms of a mapping between the canonical modes of internal and external states (as approximated with a polynomial expansion). The canonical modes here are evaluated using an estimate of the conditional expectations based upon the Euclidean proximity of Markov blanket states. The ensuing posterior over external states is then illustrated, in relation to the actual external states. We also simulate event related potentials by identifying several points in time when the Markov blankets revisit the same neighbourhood. Finally, to illustrate the underlying dynamics, the Jacobians or coupling among internal and external states are presented; using different orders of coupling (i.e., degrees of separation)

FEP_fluctuations

FEP_Manifold (emergence of life in terms of active inference)

This demonstration routine simulates the emergence of life – as defined in terms of active inference – using a synthetic primordial soup. The key aspect of these dynamics is that there is a separation between dynamical states and structural states; where the dynamical states of the microsystem are equipped with a Lorenz attractor and the structural states correspond to position and velocity. The flow of structural states conforms to classical Newtonian mechanics. Crucially, the physical motion of each microsystem is coupled to its internal dynamics and vice versa; where the coupling among dynamics rests upon short range (electrochemical) forces. This means that the dependencies among the dynamics of each microsystem depend on their positions. This induces a dependency of the system’s structural integrity on its internal dynamics – which leads to biological self-organisation. This biological self-organisation is illustrated in terms of the following:

  1. the existence of a Markov blanket that separates internal and external states, where the internal states are associated with a system that engages in active or embodied inference.
  2. emergent inference is demonstrated by showing that the internal states can predict the external states, despite their separation by the Markov blanket.
  3. this inference (encoded by the internal dynamics) is necessary to maintain structural integrity, as illustrated by simulated lesion experiments, in which the influence of various states is quenched.

FEP_Manifold

FEP_MB_demo (hierarchical decomposition of Markov blankets)

This routine illustrates a hierarchical decomposition of Markov blankets (of Markov blankets). It rests upon the dual operators of finding a partition (a Markov partition) and then using an adiabatic dimensional reduction (using the eigensolution of the Markov blanket). In brief, this means the states of particles at the next level become mixtures of the Markov blanket of particles at the level below.

The ensuing hierarchical decomposition is illustrated in terms of Jacobians and locations in a scaling space (evaluated using the graph Laplacian). This demonstration uses a fictive Jacobian that is created by hand – or the equivalent Jacobian of a synthetic soup (i.e., active matter).

FEP_MB_demo

FEP_physics (different perspectives: quantum, statistical and classical mechanics)

This demonstration uses an ensemble of particles with intrinsic (Lorenz attractor) dynamics and (Newtonian) short-range coupling. The setup is used to solve for the dynamics of an ensemble of particles, from which a Markov blanket emerges (forming a particle at the next hierarchical scale). These ensemble dynamics are then used to illustrate different perspectives; namely, those afforded by quantum, statistical and classical mechanics. A detailed description of each of these three treatments precedes each of the sections in the script. These descriptions are in the form of figure legends, where each section is summarised with a figure.

FEP_physics1

FEP_physics2

FEP_self_entropy (self organisation in terms of the entropy of blanket states)

This demonstration uses an ensemble of particles with intrinsic (Lorenz attractor) dynamics and (Newtonian) short-range coupling. This routine illustrates self-organisation in terms of the entropy of blanket states and concomitant changes in mutual information (i.e., complexity cost or risk). Here, the ensemble average of these entropy measures is taken over all (128) particles of macromolecules; where the Markov blanket of each particle comprises all but the third (electrochemical) hidden state. The graphics produced by this routine simply plot the decrease in blanket entropy (and complexity cost) as the system approaches its random dynamical attractor. Illustrative trajectories of the particles are provided at three points during the (stochastic) chaotic transient.

DEM_morphogenesis (self-assembly under active inference)

This routine illustrates self-assembly or morphogenesis under active inference (free energy minimisation). It exploits the fact that one can express a system’s (marginal) Lyapunov function in terms of a variational free energy. This means that one can prescribe an attracting set in terms of the generative model that defines variational free energy. In this example, the attracting set is a point attractor in the phase space of a multi-celled organism: where the states correspond to the location and (chemotactic) signal expression of 16 cells. The generative model and process are remarkably simple; however, the ensuing migration and differentiation of the 16 cells illustrates self-assembly – in the sense that each cell starts off in the same location releasing the same signals. In essence, the system’s dynamics rest upon each cell inferring its unique identity (in relation to all others) and behaving in accord with those inferences; in other words, inferring its place in the assembly and behaving accordingly. Note that in this example there are no hidden states and everything is expressed in terms of hidden causes (because the attracting set is a point attractor). Graphics are produced illustrating the morphogenesis, using colour codes to indicate the cell type – which is interpreted in terms of genetic and epigenetic processing.

DEM_morphogenesis

DEMO_Lindley_paradox (Bayes factors and classical p-values)

This demonstration routine uses a simple linear model to examine the relationship between free energy differences of log Bayes factors and classical F statistics.

[error in script]

Markov Decision Processes

DEM_MDP_decision (agent moves from undecided state to definitive choice)

This routine illustrates the treatment of signal detection paradigms in the context of active inference and Markov decision processes. This is probably one of the simplest paradigms to model; in which there are just two hidden states generating ambiguous stimuli – and the agent moves from an undecided (hidden) state to a definitive choice. The A tensor in this instance encodes ambiguity (perceptual noise), while the B matrix encodes the behaviour-dependent transitions among decision states. Finally, the C matrix encodes prior costs or preferences. In this instance, the agent does not want to be wrong – and prefers to be right.

In what follows, we simulate a single trial to illustrate the underlying Bayesian belief updating and associated behavioural and physiological responses. We then consider multiple trials under different levels of ambiguity and cost. The dependent measures in this instance include the behavioural accuracy, reaction times (assuming 250 ms time bins) and the uncertainty about the cause of sensory cues and control – as measured by the entropy of posterior beliefs prior to making a choice.
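
As a rough illustration of those roles (a toy sketch only – not the matrices the demo itself specifies), the likelihood, transition and preference arrays have the following shape in SPM’s MDP scheme:

% Toy two-state example of the A (likelihood), B (transitions) and C (preferences) arrays.
A{1} = [0.8 0.2;               % P(outcome | hidden state); overlapping columns encode ambiguity
        0.2 0.8];
B{1}(:,:,1) = eye(2);          % transitions under action 1: stay in the current state
B{1}(:,:,2) = [0 1; 1 0];      % transitions under action 2: switch state
C{1} = [2; -2];                % log preferences over outcomes: outcome 1 is preferred
% In SPM these are packed into a structure (MDP.A, MDP.B, MDP.C, ...) and passed to the
% variational scheme (e.g. spm_MDP_VB_X) that performs the belief updating described above.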

DEM_MDP_decision

DEMO_MDP_questions (2-level simulation of reading with epistemic foraging)

This routine provides simulations of reading to demonstrate deep temporal generative models. It builds upon the scene construction simulations to equip the generative model with a second hierarchical level. In effect, this creates an agent that can accumulate evidence at the second level based upon epistemic foraging at the first. In brief, the agent has to categorise a sentence or narrative into one of two categories (happy or sad), where it entertains six possible sentences. Each sentence comprises four words, which are themselves constituted by two pictures or graphemes. These are the same visual outcomes used in previous illustrations of scene construction and saccadic searches.

Here, the agent has policies at two levels. The second level policy (with just one step into the future) allows it to either look at the next word or stay on the current page and make a decision. Concurrently, a first level policy entails one of four saccadic eye movements to each quadrant of the current page, where it will sample a particular grapheme.

This provides a rough simulation of reading – that can be made more realistic by terminating first level active inference, when there can be no further increase in expected free energy (i.e., all uncertainty about the current word has been resolved). The subsequent inferred hidden states then become the outcome for the level above.

To illustrate the scheme’s biological plausibility, one can change the agent’s prior beliefs and repeat the reading sequence under violations of either local (whether the graphemes are flipped vertically) or global (whether the sentence is surprising) expectations. This produces a mismatch negativity (MMN) under local violations and an MMN with a P300 under global violations.

DEMO_MDP_questions1

DEMO_MDP_questions

DEMO_MDP_voice (simulating reading for deep temporal generative models)

This routine provides simulations of reading to demonstrate deep temporal generative models. It builds upon the scene construction simulations to equip the generative model with a second hierarchical level. In effect, this creates an agent that can accumulate evidence at the second level based upon epistemic foraging at the first. In brief, the agent has to categorise a sentence or narrative into one of two categories (happy or sad), where it entertains six possible sentences. Each sentence comprises four words, which are themselves constituted by two pictures or graphemes. These are the same visual outcomes used in previous illustrations of scene construction and saccadic searches.

Here, the agent has policies at two levels. The second level policy (with just one step into the future) allows it to either look at the next word or stay on the current page and make a decision. Concurrently, a first level policy entails one of four saccadic eye movements to each quadrant of the current page, where it will sample a particular grapheme.

This provides a rough simulation of reading – that can be made more realistic by terminating first level active inference, when there can be no further increase in expected free energy (i.e., all uncertainty about the current word has been resolved). The subsequent inferred hidden states then become the outcome for the level above.

To illustrate the scheme’s biological plausibility, one can change the agent’s prior beliefs and repeat the reading sequence under violations of either local (whether the graphemes are flipped vertically) or global (whether the sentence is surprising) expectations. This produces a mismatch negativity (MMN) under local violations and an MMN with a P300 under global violations.

[requires the Matlab Signal Processing Toolbox]

DEMO_MDP_maze (continuous and discrete state space modelling)

This demonstration of active inference focuses on navigation and planning in a fairly complicated maze. The idea is to demonstrate how epistemic foraging and goal (target) directed behaviour are integrated in the minimisation of expected free energy. In this illustration, an 8 x 8 maze is learned through novelty driven evidence accumulation – to learn the likelihood mapping between hidden states (locations in the maze) and outcomes (whether the current location is open or closed). This accumulated experience is then used to plan a path from a start to an end (target location) under a task set specified by prior preferences over locations. These priors are based upon a simple diffusion (cf. backwards induction) heuristic that specifies subgoals. The subgoals (i.e., locations) contain the most paths from the target within the horizon of the current policy. We will first illustrate the novelty driven epistemic foraging that efficiently scans the maze to learn its structure. We then simulate planning of a (shortest path) trajectory to the target under the assumption the maze has been previously learned. Finally, we consider exploration under prior preferences to simulate behaviour when both epistemic and goal directed imperatives are in play. The focus of this demo is on behavioural and electrophysiological responses over moves. A key aspect of this formulation is the hierarchical decomposition of goal directed behaviour into subgoals that are within the horizon of a limited policy – here, to moves that correspond to a trial. The prior preferences then contextualise each policy or trial to ensure that the ultimate goal is achieved. Empirically, this sort of construction suggests the existence of path cells; namely, cells that report the initial location of any subsequence and continue firing until the end of the local path. This is illustrated by plotting simulated activity as a function of trajectories during exploration.

DEMO_MDP_maze1

DEMO_MDP_maze2

DEMO_MDP_maze3

MDP_DEM_Mixed_Models_Movement (examination of the biceps reflex)

This demo illustrates a series of computational pathologies as elicited during a synthetic neurological examination. This focuses upon an examination of the biceps reflex, and a simple coordination task. Each of these is simulated for an arm that may move in three dimensions through internal and external rotation of the shoulder, flexion and extension of the shoulder, and flexion and extension of the elbow. These dynamics play out through an active Bayesian filtering scheme, where a series of attracting points draw the hand to different locations in 3D space. The selection of these attracting points involves a hierarchical Markov Decision Process, which identifies these sequences based upon the prior belief that (1) the sequence will minimise expected free energy and (2) the sequence is consistent with the trajectory predicted by the highest (contextual) level of the model.

MDP_DEM_Mixed_Models_Movement

MDP_DEM_Mixed_Models_Movement2

Dynamic Causal Modelling

DEMO_DCM_MB (Markov blankets)

This demo uses the notion of Markov blankets and the renormalisation group to evaluate the coupling among neuronal systems at increasing spatial scales. The underlying generative model is based upon the renormalisation group: a working definition of renormalisation involves three elements: vectors of random variables, a coarse-graining operation and a requirement that the operation does not change the functional form of the Lagrangian. In our case, the random variables are neuronal states; the coarse-graining operation corresponds to the grouping (G) into a particular partition and adiabatic reduction (R) – which leaves the functional form of the dynamics unchanged.

Here, the grouping operator (G) is based upon coupling among states as measured by the Jacobian. In brief, the sparsity structure of the Jacobian is used to recursively identify Markov blankets around internal states to create a partition of states – at any level – into particles; where each particle comprises external and blanket states. The ensuing reduction operator (R) eliminates the internal states and retains the slow eigenmodes of the blanket states. These then constitute the (vector) states at the next level and the process begins again.

This routine starts using a simple form of dynamic causal modelling applied to the principal eigenvariate of local parcels (i.e., particles) of voxels with compact support. The Jacobian is estimated using a linearised dynamic causal (state space) model, where observations are generated by applying a (e.g., haemodynamic) convolution operator to hidden (e.g., neuronal) states. This estimation uses parametric empirical Bayes (PEB: spm_PEB). The ensuing estimates of the Jacobian (i.e., effective connectivity) are reduced using Bayesian model reduction (BMR: spm_dcm_BMR_all) within a bespoke routine (spm_dcm_J).

The Jacobian is then partitioned using the coarse-graining operator into particles or parcels (using spm_markov blanket). The resulting partition is then reduced by eliminating internal states and retaining slow eigenmodes with the largest (real) eigenvalues (spm_A_reduce). The Jacobian of the reduced states is then used to repeat the process – recording the locations of recursively coarse-grained particles – until there is a single particle.

The result of this recursive decomposition (i.e., renormalisation) affords a characterisation of directed coupling, as characterised by a complex Jacobian; namely, a multivariate coupling matrix, describing the coupling between eigenmodes of Markov blankets at successive scales. This can be regarded as a recursive parcellation scheme based upon effective connectivity and a generative (dynamic causal) model of multivariate (neuronal) timeseries.

DEMO_DCM_PEB (group DCM for electrophysiology)

This routine illustrates the use of Bayesian model reduction when inverting hierarchical (dynamical) models; for example, multisubject DCM models. In this context, we have hierarchical models that are formally similar to parametric empirical Bayesian models – with the exception that the model of the first level can be nonlinear and dynamic. In brief, this routine shows how to finesse the brittleness of Bayesian model comparison at the single subject level by equipping the model with an extra (between subject) level. It illustrates the recovery of group effects on modulatory changes in effective connectivity (in the mismatch negativity paradigm) – based upon real data.

First, an EEG DCM (using empirical grand mean data) is inverted to find plausible group mean parameters. Single subject data are then generated using typical within- and between-subject variance (here, group differences in the modulation of intrinsic connectivity). We then illustrate a variety of Bayesian model averaging and reduction procedures to recover the underlying group effects.

DEMO_BMR_PEB (Empirical Bayes and Bayesian model reduction)

This routine illustrates the use of Bayesian model reduction when inverting hierarchical (linear) models – it is essentially a software validation demo and proof of concept. It uses a parametric empirical Bayesian model (i.e., nested linear models) to eschew local minima issues and to assure the Laplace assumption is correct. In brief, the data are generated for multiple subjects, under a linear model with subject specific parameters at the first level and group specific parameters at the second. These model a group effect common to all subjects in a subset of parameters and differences in a further subset. The objective of empirical Bayesian inversion is to recover the group effects in terms of posteriors and perform Bayesian model comparison at the second (between subject) level.

This provides empirical shrinkage priors at the first level, which can be used to compute the predictive posterior for any subject. In turn, the predictive posterior can be used for leave-one-out cross validation.

The key aspect of this approach to empirical Bayesian modelling is that we use Bayesian model reduction throughout. In other words, after the subject-specific models have been inverted the data are discarded and we deal only with the free energies and posteriors for subsequent hierarchical analysis. This can be computationally very efficient when dealing with large first-level or complicated models: as in DCM.

The parameterisation of the models uses the format of DCM. This means parameters are specified as a structure with key parameters being in the fields A, B and C.

Prognosis and Prediction

DEM_demo_ontology (generative model for computational nosology)

This demonstration routine illustrates how a generative model can be used to furnish a computational nosology. In brief, it generates symptoms and diagnostic profiles from hidden or latent exogenous causes (e.g., therapeutic interventions) that are mediated by latent (pathophysiological and psychopathological) states.  Pathophysiological trajectories are modelled with a Lorenz attractor that (with a linear mapping) produces (two-dimensional) psychopathology. In turn, the psychopathological states generate symptoms (with a non-linear function of linear mixtures) and diagnostic outcomes (with a softmax function of diagnostic potential). The psychopathological state of a subject is associated with a diagnostic potential in terms of its Euclidean distance from disease categories (locations in the associated state space).

We start by simulating a relapsing-remitting disease process and then infer the latent states and parameters of a particular subject. This is then repeated in the setting of a therapeutic intervention. The demonstration then briefly considers model identification and selection by focusing on the mapping between pathophysiology and psychopathology. Finally, we consider prognosis and prediction by estimating subject-specific parameters prior to therapy and then predicting putative responses in the future, based upon a posterior predictive density.

DEM_demo_ontology1

DEM_demo_ontology2

Miscellaneous / Basic

KLDemo (Illustration of information gains with Bayesian fusion)

This routine illustrates the benefit of multimodal or Bayesian fusion in terms of conditional dependencies among parameters. In other words, it shows that even if one data modality contains no information about a particular set of parameters, it can help resolve uncertainty about another set and thereby disclose information contained in the other modality. This is illustrated here using a simple linear model with neuronal and haemodynamic parameters to show that EEG can provide some information gain, in relation to haemodynamic parameters.

KLDemo

ADEM_lorenz (learning the parameters of a Lorenz attractor)

Action-DEM demo specifying an attractor (in terms of the parameters of desired equations of motion). This demo first exposes an agent to a Lorenz attractor world so that it can learn the three parameters of the Lorenz system. It is then placed in a test world with a fixed-point attractor to see if it has remembered the chaotic dynamics it learnt in the training environment.

ADEM_lorenz
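
For reference, the ‘three parameters’ here are the sigma, rho and beta of the textbook Lorenz system, which can be visualised in plain MATLAB (independently of SPM’s generalised-coordinates machinery):

% Textbook Lorenz system - for orientation only, not the demo's generative model.
sigma = 10; rho = 28; beta = 8/3;              % the three parameters
f = @(t,x) [sigma*(x(2) - x(1));               % dx/dt
            x(1)*(rho - x(3)) - x(2);          % dy/dt
            x(1)*x(2) - beta*x(3)];            % dz/dt
[t, x] = ode45(f, [0 40], [1; 1; 20]);         % integrate from an arbitrary starting point
plot3(x(:,1), x(:,2), x(:,3)), title('Lorenz attractor')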

DEMO_BAYES_FACTORS (Bayes factors and classical p-values)

This demonstration routine uses a simple linear model to examine the relationship between free energy differences or log Bayes factors and classical F statistics. Using re-randomisation of a design matrix, it computes the null distribution over both statistics and plots them against each other.  There is a linear relationship, which allows one to evaluate the false-positive rate for any threshold on the Bayes factor. Ideally, one would like to see a positive log Bayes factor map to a classical threshold of p=0.05. The offset and slope of the linear relationship between the two statistics depends upon prior beliefs about the covariance of the parameters and the log precision. These can be changed by editing the code below (or supplying input arguments).

[error in script]

DEM_FEP_Least_Action

This routine uses a Lorenz system to show that the most likely autonomous path (or deterministic path) is uniquely identified by its initial and final states. In other words, if we knew the end state of an autonomous trajectory, then we would implicitly know the path taken from the initial particular state, even if we did not know the external states. This point is demonstrated using numerical analyses of the Lorenz system; treating the first state as an active state and the third as an external state.

DEM_self_entropy (self organisation in terms of entropy of blanket states)

Routine to produce graphics illustrating self organisation in terms of the entropy of blanket states and associated trajectories. A low blanket entropy induces anomalous diffusion and itinerancy with power law scaling (i.e., self similar dynamics). This example uses a fixed form (quadratic) likelihood and optimises the density over hidden states to minimise blanket (i.e., self) entropy explicitly. In this example, there is just one active and sensory state and one hidden state to illustrate noise-phase symmetry breaking as the probability density over action reduces the entropy of sensory states under a fixed density of hidden or external states.

DEM_self_entropy

DEM_path_integrals (approximations to path integrals – Fokker Planck)

Illustration of approximations to path integrals. This routine generates a path from dynamics whose Fokker Planck solution corresponds to a Gaussian with a given (diagonal) precision. It then samples random segments (after scaling and smoothing) and evaluates their action. This evaluation is in terms of the sum of squared residuals between realised and predicted flow, and path-dependent and path-independent terms based upon the surprisal associated with the solution of the Fokker Planck equation. The point being made here is that the terms based upon the surprisal (cyan dots) upper bound the action (blue dots).

DEM_demo_GF_and_KF (generalised and Kalman filtering)

A demonstration of generalised and Kalman filtering where the number of hidden states exceeds the number of variables observed. The metrics of performance are the mean sum of squared error and the SSE normalized by the posterior precision (NESS). The results of a single time series analysis are shown first and then the simulations are repeated under linear and nonlinear observation models to compare the relative performance of DEM and EKF. The superiority of DEM (generalised filtering) over Kalman filtering rests on the optimisation of K – the rate of generalised descent on free energy.
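
For readers unfamiliar with Kalman filtering, the linear textbook version (quite distinct from SPM’s generalised filtering implementation) boils down to a predict/update loop:

% Minimal 1-D linear Kalman filter - a textbook sketch for orientation only.
a = 0.9; q = 0.1; r = 1.0;              % state transition, process and observation noise variances
xhat = 0; P = 1;                        % prior mean and variance of the hidden state
xtrue = 0;
for t = 1:100
    xtrue = a*xtrue + sqrt(q)*randn;    % simulate the hidden state
    y     = xtrue + sqrt(r)*randn;      % noisy observation
    xhat  = a*xhat;  P = a*a*P + q;     % predict
    K     = P/(P + r);                  % Kalman gain
    xhat  = xhat + K*(y - xhat);        % update with the prediction error
    P     = (1 - K)*P;
end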

DEM_demo_OU (linear deconvolution – not actually Ornstein-Uhlenbeck process!)

DEM demo for linear deconvolution: this demo considers the deconvolution of one of the simplest dynamical processes; a random walk or Ornstein-Uhlenbeck process. It shows how DEM can infer on the causes as stochastic innovations (c.f., Bayesian filtering) by exploiting temporal correlations. Strictly speaking this is not an Ornstein-Uhlenbeck process, because the innovations are themselves correlated, whereas they would normally be a Wiener process.

DEM_demo_GLM (comparing DEM and ReML under a general linear model)

Demo comparing DEM and ReML (restricted maximum likelihood) under a simple general linear model (GLM).  Slight differences in the hyperpriors of both schemes make this an interesting exercise.  Note that ReML uses a covariance hyper-parameterisation; whereas DEM uses precision hyperparameters.  This demo uses a non-hierarchical GLM and switches the roles of parameters and causes to illustrate their equivalence under a DEM inversion.
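
As a reminder of what the general linear model looks like (a generic sketch, not the demo’s own hierarchical set-up):

% Generic GLM: y = X*beta + noise, with an ordinary least-squares estimate.
X    = [ones(32,1) randn(32,2)];    % design matrix: a constant plus two regressors
beta = [2; 1; -1];                  % 'true' parameters
y    = X*beta + 0.5*randn(32,1);    % data with additive Gaussian noise
beta_ols = X\y                      % least-squares estimate (cf. the DEM and ReML posteriors)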

DFP_demo_double_well (comparing Variational filtering with particle filtering)

Demo comparing Variational filtering with particle filtering in the context of a bimodal conditional density.  This demonstrates that the variational filter can not only represent free-form densities on the states but also the causes of responses.

DFP_demo_double_well

Worth investigating further

To investigate in detail:

  • ADEM_salience and DEM_demo_SOC: both directly provide graphs of free energy.
  • ADEM_learning: alternative to reinforcement learning

And also these:

  • ADEM_motor and ADEM_reaching: motion as a result of explaining away prediction errors
  • ADEM_cued_response: example of pathological condition
  • ADEM_mountaincar_loss
  • FEP_physics: Markov blankets
  • DEM_morphogenesis: graph of free energy
  • DEMO_MDP_maze: complex exploratory learning