8 Order of events for discrete-time stochastic models
library(odin2)
library(dust2)
Here we have a discrete-time stochastic model
sir <- odin({
update(S) <- S - n_SI
update(I) <- I + n_SI - n_IR
update(R) <- R + n_IR
update(incidence) <- incidence + n_SI
beta <- interpolate(beta_time, beta_value, "linear")
beta_time <- parameter(constant = TRUE)
beta_value <- parameter(constant = TRUE)
dim(beta_time, beta_value) <- parameter(rank = 1)
p_SI <- 1 - exp(-beta * I / N * dt)
p_IR <- 1 - exp(-gamma * dt)
n_SI <- Binomial(S, p_SI)
n_IR <- Binomial(I, p_IR)
initial(S) <- N - I0
initial(I) <- I0
initial(R) <- 0
initial(incidence, zero_every = 1) <- 0
N <- parameter(1000)
I0 <- parameter(10)
gamma <- parameter(0.1)
## Comparison to data
phi <- interpolate(phi_time, phi_value, "constant")
phi_time <- parameter(constant = TRUE)
phi_value <- parameter(constant = TRUE)
dim(phi_time, phi_value) <- parameter(rank = 1)
cases <- data()
exp_noise <- Exponential(1e6)
cases ~ Poisson(phi * incidence + exp_noise)
})
This model includes many components we have introduced in preceding sections, such as variables that reset periodically, use of interpolation functions, and comparison to data.
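To make the examples below concrete, here is a minimal sketch of how this model might be created and run with dust2. The values supplied for beta_time, beta_value, phi_time and phi_value are made up purely for illustration (the remaining parameters use their defaults):
pars <- list(beta_time = c(0, 50, 100),
             beta_value = c(0.3, 0.1, 0.2),
             phi_time = 0,
             phi_value = 0.6)
sys <- dust_system_create(sir, pars, seed = 1)
dust_system_set_state_initial(sys)
t <- seq(0, 20)
y <- dust_system_simulate(sys, t)
y <- dust_unpack_state(sys, y)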
It is useful to understand the order in which events will be evaluated as the model updates from time = t0 to time = t0 + dt. There will first be a series of updates to the variables, followed by a comparison to data (assuming you are comparing the model to data).
8.1 Updates
At the beginning of the update we have time = t0 and then the updates proceed in the following order:
8.1.1 Reset any variables that use zero_every
Any variable that uses zero_every = x will be reset to 0 if t0 is a multiple of x. In our model we have
initial(incidence, zero_every = 1) <- 0
so incidence will reset to 0 whenever t0 is an integer.
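To illustrate the reset in isolation, here is a minimal sketch using a hypothetical counter model (with the default dt = 1): x is reset at the start of every step that begins at a multiple of 2, while total keeps accumulating.
counter <- odin({
  update(x) <- x + 1
  update(total) <- total + 1
  initial(x, zero_every = 2) <- 0
  initial(total) <- 0
})
sys <- dust_system_create(counter, list())
dust_system_set_state_initial(sys)
y <- dust_unpack_state(sys, dust_system_simulate(sys, seq(0, 6)))
y$x     # should be 0 1 2 1 2 1 2 (reset at the start of the steps leaving t = 0, 2, 4)
y$total # should be 0 1 2 3 4 5 6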
8.1.2 Read from variables
All variables will be read at their current values. In our example these are S, I, R and incidence.
8.1.3 Look up interpolation
Any uses of interpolate relating to updates are evaluated using time = t0. Thus in our example
beta <- interpolate(beta_time, beta_value, "linear")
is evaluated to give beta its value at time = t0.
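For intuition, linear interpolation here behaves like base R's approx(): the value of beta at t0 is read off the straight line joining the surrounding (beta_time, beta_value) points. A small illustration with made-up values:
beta_time <- c(0, 50, 100)
beta_value <- c(0.3, 0.1, 0.2)
approx(beta_time, beta_value, xout = 25)$y
#> [1] 0.2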
8.1.4 Evaluate assignments
Assignment equations must have a logical order of evaluation (any circularity between them will cause an error at compilation). For instance, in our model it would naturally evaluate
p_SI <- 1 - exp(-beta * I / N * dt)
p_IR <- 1 - exp(-gamma * dt)
and then
n_SI <- Binomial(S, p_SI)
n_IR <- Binomial(I, p_IR)
Equations will be evaluated using the variables that were read in earlier (note that this is after any variables that use zero_every have been reset to 0). As with interpolation, any equations that use time (see Section 5.1) will use time = t0.
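Note that the order in which assignments are written does not matter; odin sorts equations by their dependencies, and only a genuine cycle will fail to compile. A minimal (hypothetical) sketch:
m2 <- odin({
  update(x) <- x + a   # a is defined below; odin resolves the dependency order
  a <- 2 * b
  b <- 0.1
  initial(x) <- 0
})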
8.1.5 Write out new values of state
Here we write out new values of variables as given by update equations. In our example these are
update(S) <- S - n_SI
update(I) <- I + n_SI - n_IR
update(R) <- R + n_IR
update(incidence) <- incidence + n_SI
As with the other assignment equations, these are evaluated using the values of the variables that were read in earlier. Thus the equation
update(S) <- S - n_SI
gives the value that will be assigned to S at time = t0 + dt, while on the right-hand side it uses the value of S at time = t0.
You should be careful to avoid introducing lags between state variables in your update equations.
For example, for this simple model
\[\begin{gather*} X(t + dt) = X(t) + Normal(0, 1)\\ Y(t + dt) = Y(t) + Normal(0, 1)\\ Z(t) = X(t) + Y(t) \end{gather*}\]
it would be incorrect to code this as
m <- odin({
update(X) <- X + dX
update(Y) <- Y + dY
update(Z) <- X + Y
dX <- Normal(0, 1)
dY <- Normal(0, 1)
initial(X) <- 0
initial(Y) <- 0
initial(Z) <- 0
})
as running this results in Z being lagged by one time-step relative to X and Y:
sys <- dust_system_create(m, list(), seed = 1)
dust_system_set_state_initial(sys)
t <- seq(0, 5)
y <- dust_system_simulate(sys, t)
y <- dust_unpack_state(sys, y)
y$X + y$Y
#> [1] 0.000000 1.666908 1.358213 4.669377 4.191150 6.221572
y$Z
#> [1] 0.000000 0.000000 1.666908 1.358213 4.669377 4.191150
This is because
update(Z) <- X + Y
corresponds to \(Z(t + dt) = X(t) + Y(t)\) and not \(Z(t) = X(t) + Y(t)\). Thus we need to account for the updates to X and Y on the right-hand side.
A correct way to write it could be
update(Z) <- X + dX + Y + dY
However, in more complex models such an approach may become unwieldy, and any further change to the updates for X and Y must be replicated in the update to Z.
A safer approach is to introduce intermediate variables representing the updated values of X and Y
m <- odin({
update(X) <- X_new
update(Y) <- Y_new
update(Z) <- X_new + Y_new
dX <- Normal(0, 1)
X_new <- X + dX
dY <- Normal(0, 1)
Y_new <- Y + dY
initial(X) <- 0
initial(Y) <- 0
initial(Z) <- 0
})
which means that subsequent changes to how X or Y updates are automatically factored into how Z updates. We can see this version no longer results in a lag:
sys <- dust_system_create(m, list(), seed = 1)
dust_system_set_state_initial(sys)
t <- seq(0, 5)
y <- dust_system_simulate(sys, t)
y <- dust_unpack_state(sys, y)
y$X + y$Y
#> [1] 0.000000 1.666908 1.358213 4.669377 4.191150 6.221572
y$Z
#> [1] 0.000000 1.666908 1.358213 4.669377 4.191150 6.221572
8.1.6 Update time to t0 + dt
The series of updates ends with time being increased by dt.
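If you want to check this, the system's time can be read back after running; a small sketch, assuming dust2's dust_system_time() accessor and reusing the corrected model m from above:
sys <- dust_system_create(m, list(), seed = 1)
dust_system_set_state_initial(sys)
y <- dust_system_simulate(sys, seq(0, 3))
dust_system_time(sys)  # the system has advanced to time 3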
8.2 Comparison to data
The comparison to data follows on from the above series of updates, and as such it is common for all equations relating to the comparison to data to appear in a block at the end of the odin code.
Equations relating to comparison to data are evaluated after the update of time to t0 + dt and will only be evaluated if you have any data at time = t0 + dt.
8.2.1 Read from variables
All variables will be read at their current values, which are the values corresponding to time = t0 + dt. Again, in our example the variables are S, I, R and incidence, although only incidence is used in the comparison to data.
8.2.2 Look up interpolation
Any uses of interpolate that are used in the comparison to data will be evaluated using time = t0 + dt (not using time = t0 as was used earlier in the “update” step). Compilation of odin code will detect which interpolations are used in the comparison to data.
In our example we have
phi <- interpolate(phi_time, phi_value, "constant")
where phi might represent a time-varying ascertainment rate of cases (which could be affected by e.g. whether schools are open, or day-of-the-week effects).
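For illustration only, a step change in ascertainment could be encoded with made-up values such as:
## hypothetical: ascertainment rises from 0.3 to 0.5 at t = 50; with "constant"
## interpolation each value is held until the next time point
pars_phi <- list(phi_time = c(0, 50), phi_value = c(0.3, 0.5))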
8.2.3 Evaluate assignments
As with interpolation, compilation of odin code will detect which assignments are used only in the comparison to data. For instance in our example this would be
exp_noise <- Exponential(1e6)
Any use of variables in these equations on the right-hand side will be evaluated using their values at time = t0 + dt.
8.2.4 Compare to data
Finally we evaluate the density given by any relationship equations. In our example this is
cases ~ Poisson(phi * incidence + exp_noise)
As with assignments in the comparison to data step, any use of variables in relationship equations on the right-hand side will be evaluated using their values at time = t0 + dt. No actual assignment is done in relationship equations, but the density accumulates over all non-NA data entries.
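To see these comparison equations in use, here is a minimal sketch of running a particle filter over some made-up data, assuming the dust_filter_create() and dust_likelihood_run() interface from dust2 (the data values and parameters are invented for illustration):
d <- data.frame(time = 1:5, cases = c(12, 23, 25, 36, 30))
pars <- list(beta_time = c(0, 100), beta_value = c(0.3, 0.3),
             phi_time = 0, phi_value = 0.6)
filter <- dust_filter_create(sir, time_start = 0, data = d, n_particles = 100)
dust_likelihood_run(filter, pars)
The density is then evaluated at each of t = 1, ..., 5, where there is a cases entry, exactly as described above.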