odin and dust

The SIR model in continuous time:

\[\begin{gather*} \frac{dS}{dt} = -\beta S \frac{I}{N}\\ \frac{dI}{dt} = \beta S \frac{I}{N} - \sigma I\\ \frac{dR}{dt} = \sigma I \end{gather*}\]
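A minimal odin translation of these equations might look like this (a sketch: the initial values and default rates are illustrative assumptions, not taken from the original):

# Continuous-time SIR: each variable is declared with an initial()/deriv() pair
deriv(S) <- -beta * S * I / N
deriv(I) <- beta * S * I / N - sigma * I
deriv(R) <- sigma * I
initial(S) <- N - I0
initial(I) <- I0
initial(R) <- 0
N <- user(1e6)     # total population (assumed default)
I0 <- user(10)     # initial number infected (assumed default)
beta <- user(0.2)  # transmission rate (assumed default)
sigma <- user(0.1) # recovery rate (assumed default)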
Things to note:
- Each model variable is declared as an initial() and deriv() pair
- Models are compiled by the odin package, along with pkgbuild and pkgload
- The stochastic, discrete-time version is compiled by odin.dust into a dust model
- A dust model has two ways of running (see the usage sketch below):
  - run: runs up to some time, returns final values (*)
  - simulate: runs over a series of times, returning values at each time
- odin vs odin.dust: output(), interpolate() (we might restore this later) and delay() are not available in odin.dust
- Stochastic draws are available: runif(min, max), rnorm(mean, sd), rhyper(m, n, k), rpois(lambda), rbinom(n, p), rgamma(shape, scale), rnbinom(size, prob), rexp(rate)

In the discrete-time version, each compartment is updated once per step:

update(S) <- S - n_SI
update(I) <- I + n_SI - n_IR
update(R) <- R + n_IR
With age structure the compartments become arrays, indexed by i:

update(S[]) <- S[i] - n_SI[i]
update(I[]) <- I[i] + n_SI[i] - n_IR[i]
update(R[]) <- R[i] + n_IR[i]
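A sketch of the two ways of running the compiled model (the file name, empty parameter list and particle count are assumptions):

gen <- odin.dust::odin_dust("sir.R")         # compile the odin code to a dust generator
mod <- gen$new(list(), 0, 128)               # 128 particles, starting at step 0
final <- mod$run(100)                        # run up to step 100, returning final values only
states <- mod$simulate(seq(0, 100, by = 10)) # values at each of the requested steps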
The force of infection \(\lambda_i = \beta \sum_j m_{ij} I_j\) becomes (approximately) the following odin code:
m[, ] <- user() # age-structured contact matrix
s_ij[, ] <- m[i, j] * I[j]
lambda[] <- beta * sum(s_ij[i, ])
p_SI[] <- 1 - exp(-lambda[i] * dt)
update(S[]) <- S[i] - n_SI[i]
N_age <- user()
dim(S) <- N_age
dim(m) <- c(N_age, N_age)

You must declare the dimensions of all arrays!
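A sketch of supplying these user arrays from R (the matrix values and age-group count are made up, and gen is assumed from the earlier example):

m <- matrix(c(10, 3, 1,
              3, 5, 2,
              1, 2, 4), nrow = 3, ncol = 3)  # made-up 3x3 contact matrix
pars <- list(m = m, N_age = 3)               # other parameters and initial states omitted
mod <- gen$new(pars, 0, 128)                 # array dimensions are fixed by N_age here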
dust runs particles in parallel using OpenMP. Querying the support on this machine gives:

$num_procs
[1] 20
$max_threads
[1] 20
$thread_limit
[1] 2147483647
$openmp_version
[1] 201511
$has_openmp
[1] TRUE
$mc.cores
[1] NA
$OMP_THREAD_LIMIT
[1] NA
$OMP_NUM_THREADS
[1] NA
$MC_CORES
[1] 10
$limit_r
[1] 10
$limit_openmp
[1] 20
$limit
[1] 10
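This output resembles what dust's OpenMP support query reports (the exact call below is an assumption about how it was produced). Note that the effective limit is the smaller of the R-side limit (here 10, from MC_CORES) and the OpenMP limit (here 20):

dust::dust_openmp_support()  # report cores, OpenMP version and thread limits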
Always parallelise at the coarsest level first
Don’t use mclapply or parLapply, etc.
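A minimal sketch of the preferred approach (particle and thread counts are arbitrary; gen is assumed from earlier): give dust the threads and let it parallelise over particles, rather than forking R processes:

mod <- gen$new(list(), 0, 256, n_threads = 8)  # particles shared across 8 OpenMP threads
mod$run(100)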
Array index variables in odin:
- i, j, k, l for the first four dimensions
- i5, i6, i7, i8 for dimensions 5–8

For example, with a vaccination class as a second dimension:

rel_susceptibility[, ] <- user()
p_vacc[, ] <- 1 - exp(-eta[i, j] * dt)
n_S_vacc[, ] <- rbinom(S[i, j] - n_SI[i, j], p_vacc[i, j])
p_SI[, ] <- 1 - exp(-rel_susceptibility[i, j] * lambda[i] * dt) # S to I
new_S[, ] <- S[i, j] - n_SI[i, j] - n_S_vacc[i, j] +
(if (j == 1) n_S_vacc[i, N_vacc_classes] else n_S_vacc[i, j - 1])
update(S[, ]) <- new_S[i, j]

Using an odin.dust model inside an R package:
- Create the package with usethis::create_package("mymodel")
- Put the odin code in inst/odin
- In DESCRIPTION:
  - add cpp11 and dust to the LinkingTo section
  - add dust to Imports
  - add SystemRequirements: C++11
- Add #' @useDynLib mymodel, .registration = TRUE somewhere (e.g., R/zzz.R)
- Run odin.dust::odin_dust_package() to generate files
- Run devtools::document() to update NAMESPACE
- Run pkgload::load_all() to compile and load

The day-to-day workflow is then:
- Edit the odin code in inst/odin
- odin.dust::odin_dust_package() to generate files
- pkgload::load_all() to compile and load
- Document the package with pkgdown

Details: https://r-pkgs.org/
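Put together, the development loop is only a few calls (run from the directory containing the package; the path arguments are assumptions):

usethis::create_package("mymodel")       # once, to create the package skeleton
odin.dust::odin_dust_package("mymodel")  # generate files from the odin code in inst/odin
devtools::document("mymodel")            # update NAMESPACE
pkgload::load_all("mymodel")             # compile and load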
Use step to index
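For example, a time-varying rate can be looked up from a user-supplied vector by step; this is a sketch of that pattern (the name beta_step and the clamping at the end of the vector are assumptions):

beta_step[] <- user()
dim(beta_step) <- user()
beta <- if (step >= length(beta_step))
  beta_step[length(beta_step)] else beta_step[step + 1]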
To run on a GPU you need the nvcc toolchain (this is annoying to install):

gpu <- dust::dust_cuda_options(fast_math = TRUE)
gen <- odin.dust::odin_dust_(path, real_type = "float", gpu = gpu)
mod <- gen$new(list(), 0, 65536, gpu_config = 0L)
mod$run(100) # runs on GPU!

Expect to run tens of thousands of particles or more, and have a strategy for working with this much data!
Splitting I into k_I sequential progression stages adds a third array dimension:

k_I <- user(integer = TRUE)
p_I_progress <- 1 - exp(-gamma * dt) # I to R
n_I_progress[, , ] <- rbinom(I[i, j, k], p_I_progress)
new_I[, , ] <- I[i, j, k] - n_I_progress[i, j, k] +
(if (k == 1) n_SI[i, j] else n_I_progress[i, j, k - 1])
update(I[, , ]) <- new_I[i, j, k]
update(R[, ]) <- R[i, j] + n_I_progress[i, j, k_I]
dim(I) <- c(N_age, N_vacc_classes, k_I)

steps_per_day <- user(integer = TRUE)
dt <- 1 / steps_per_day
# Cumulative infections
update(infections_tot) <- infections_tot + sum(n_SI)
initial(infections_tot) <- 0
# Daily infections incidence
update(infections_inc) <- if (step %% steps_per_day == 0)
sum(n_SI) else infections_inc + sum(n_SI)
initial(infections_inc) <- 0
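To pull these outputs back into R, something like the following works for odin.dust-generated models (a sketch: mod is assumed from earlier, and the steps_per_day value must match the parameter passed to the model):

steps_per_day <- 4                        # must match the value given to the model
idx <- mod$info()$index$infections_inc    # position of the incidence variable in the state
state <- mod$run(10 * steps_per_day)      # state at the end of day 10
daily_incidence <- state[idx, ]           # one value per particle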