Specify resources for worker processes. If given, the values here will override those in didehpc_config(). See vignette("workers") for more details.
worker_resource(
  template = NULL,
  cores = NULL,
  wholenode = NULL,
  parallel = NULL
)
template: A job template. On fi--dideclusthn this can be "GeneralNodes" or "8Core". On fi--didemrchnb this can be "GeneralNodes", "12Core", "16Core", "12and16Core", "20Core", "24Core", "32Core", or "MEM1024" (for nodes with 1TB of RAM; we have three, two of which have 32 cores, and the other is the AMD EPYC with 64). On the new wpia-hn cluster, you should currently use "AllNodes". See the main cluster documentation before tweaking these parameters, as you may not have permission to use all templates (and if you use one that you don't have permission for, the job will fail). For training purposes there is also a "Training" template, but you will only need to use this when instructed to.
cores: The number of cores to request. If specified, then we will request this many cores from the Windows queuer. If you request too many cores then your task will queue forever! 24 is the largest this can be on fi--dideclusthn. On fi--didemrchnb, the GeneralNodes template mostly has nodes with 20 cores or fewer, plus a single 64-core node, and the 32Core template has 32-core nodes. On wpia-hn, all the nodes have 32 cores. If cores is omitted then a single core is assumed, unless wholenode is TRUE.
wholenode: If TRUE, request exclusive access to whichever compute node is allocated to the job. Your code will have access to all the cores and memory on the node.
parallel: Should we set up the parallel cluster? Normally, if more than one core is implied (via the cores or wholenode arguments), then a parallel cluster will be set up (see Details). If parallel is set to FALSE then this will not occur. This might be useful in cases where you want to manage your own job-level parallelism (e.g., using OpenMP) or if you're just after the whole node for the memory. See the sketch after these argument descriptions.
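As an illustration of how these arguments fit together, here is a brief sketch. The template names and core counts below are only examples; check that they exist on your cluster and that you have permission to use them.

# Request 32 cores on the 32Core template of fi--didemrchnb
worker_resource(template = "32Core", cores = 32)

# Take a whole node but skip the parallel cluster setup, for example when
# the code manages its own parallelism with OpenMP or only needs the
# node's memory
worker_resource(template = "GeneralNodes", wholenode = TRUE, parallel = FALSE)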
A list with class worker_resource, which can be passed into didehpc_config().
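For example, a sketch of passing the result through to the cluster configuration; this assumes didehpc_config() accepts worker_resource and cluster arguments, as described in vignette("workers").

resources <- worker_resource(template = "32Core", cores = 32)

# Argument names here are assumed from vignette("workers"); the values in
# resources override the corresponding resource settings in the config
config <- didehpc_config(cluster = "fi--didemrchnb",
                         worker_resource = resources)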