Run a pMCMC, with sensible random number behaviour, but schedule execution of the chains yourself. Use this if you want to distribute chains over (say) the nodes of an HPC system.
pmcmc_chains_prepare(path, pars, filter, control, initial = NULL)
pmcmc_chains_run(chain_id, path, n_threads = NULL)
pmcmc_chains_collect(path)
pmcmc_chains_cleanup(path)
path: The path to use to exchange inputs and results. You can use a temporary directory or a different path (relative or absolute). Several rds files will be created. It is strongly recommended not to use "." (the current working directory).
pars: A pmcmc_parameters object containing information about parameters (ranges, priors, proposal kernel, translation functions for use with the particle filter).
filter: A particle_filter object.
control: A pmcmc_control object which will control how the MCMC runs, including the number of steps etc.
initial: Optional initial starting point. If given, it must be compatible with the parameters given in pars, and must be valid against your prior. You can use this to override the initial conditions saved in your pars object. You can provide either a vector of initial conditions, or a matrix with n_chains columns to use a different starting point for each chain (see the sketch after these argument descriptions).
chain_id: The integer identifier of the chain to run.
n_threads: Optional thread count, overriding the number set in the control. This is useful when you prepare the chains on a machine with one level of resources and run them on another (as in the HPC sketch after the basic usage below).
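For example, a matrix of per-chain starting points might be built like this. This is a minimal sketch: the parameter names "beta" and "gamma" and their values are purely illustrative, and control is assumed to have n_chains = 3.

# One column of starting values per chain; the number of columns must
# equal control$n_chains and every column must be valid against the prior.
start <- cbind(c(beta = 0.20, gamma = 0.10),
               c(beta = 0.30, gamma = 0.15),
               c(beta = 0.25, gamma = 0.12))
path <- mcstate::pmcmc_chains_prepare(tempfile(), pars, filter, control,
                                      initial = start)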
Basic usage will look like this:
path <- mcstate::pmcmc_chains_prepare(tempfile(), pars, filter, control)
for (i in seq_len(control$n_chains)) {
  mcstate::pmcmc_chains_run(i, path)
}
samples <- mcstate::pmcmc_chains_collect(path)
mcstate::pmcmc_chains_cleanup(path)
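On an HPC system you would typically replace the loop with one job per chain, each calling pmcmc_chains_run with its own chain id and, if needed, a thread count matched to the compute node. A sketch assuming a SLURM-style array job and a path on a shared filesystem; the environment variable and thread count are illustrative, not part of the mcstate API.

# run_chain.R, executed once per array task; 'path' is the directory
# returned by pmcmc_chains_prepare and must be visible to every node.
chain_id <- as.integer(Sys.getenv("SLURM_ARRAY_TASK_ID"))
mcstate::pmcmc_chains_run(chain_id, path, n_threads = 8)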
You can safely parallelise the loop however you like (or not at all), even across machines, and you will get the same outputs regardless.
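For instance, a minimal sketch of running the chains in parallel on a single machine with the parallel package (the core count is illustrative; mclapply relies on forking, so on Windows a cluster-based approach such as parLapply would be used instead):

# Each chain runs as an independent task; results are written under 'path',
# so collecting them afterwards is unchanged.
parallel::mclapply(seq_len(control$n_chains),
                   mcstate::pmcmc_chains_run,
                   path = path, mc.cores = 4)
samples <- mcstate::pmcmc_chains_collect(path)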