Create a bulk set of tasks. Variables in data take precedence over variables in the environment in which expr was created. There is no "pronoun" support yet (see the rlang docs). Use !! to pull a variable from the environment if you need to, but be careful not to inject something really large (e.g., any vector really) or you'll end up with a revolting expression and poor backtraces.
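To make the injection rules concrete, a minimal sketch (this assumes a controller has already been created against a running Redis server and registered as the default; the variable names are illustrative):

```r
# offset is a small scalar, so it is safe to splice with !!
offset <- 10

# One task is created per row of the data frame. Inside the
# expression, x refers to the data column (data takes precedence
# over the environment), while !!offset inlines the value 10 at
# creation time.
ids <- rrq_task_create_bulk_expr(
  x + !!offset,
  data.frame(x = 1:3)
)
```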
Usage:

rrq_task_create_bulk_expr(
  expr,
  data,
  queue = NULL,
  separate_process = FALSE,
  timeout_task_run = NULL,
  depends_on = NULL,
  controller = NULL
)
Arguments:

expr: An expression, as for rrq_task_create_expr.

data: Data that you wish to inject row-wise into the expression.

queue: The queue to add the tasks to; if not specified, the "default" queue (which all workers listen to) will be used. If you have configured workers to listen to more than one queue you can specify that here. Be warned that if you push tasks onto a queue with no worker, they will queue forever.
separate_process: Logical, indicating if the task should be run in a separate process on the worker. If TRUE, the worker runs the task in a separate process using the callr package. This means that the worker environment is completely clean and subsequent runs are not affected by preceding ones. The downside of this approach is the considerable overhead of starting the external process and transferring data back.
timeout_task_run: Optionally, a maximum allowed running time, in seconds. This parameter only has an effect if separate_process is TRUE. If given, and the task takes longer than this, it will be stopped and its status set to TIMEOUT.
depends_on: Vector or list of IDs of tasks that must have completed before this task can be run. Once all dependencies have been successfully run, this task will be added to the queue. If a dependency fails, this task will be removed from the queue.
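A sketch of chaining tasks with depends_on (prepare_inputs() and process_chunk() are hypothetical placeholder functions, and a registered default controller is assumed):

```r
# A single setup task that must finish first
setup_id <- rrq_task_create_expr(prepare_inputs())

# Five bulk tasks, queued only once setup_id succeeds; if it
# fails, these tasks are removed from the queue instead.
ids <- rrq_task_create_bulk_expr(
  process_chunk(i),
  data.frame(i = 1:5),
  depends_on = setup_id
)
```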
controller: The controller to use. If not given (or NULL), we'll use the controller registered with rrq_default_controller_set().
Value:

A character vector of task identifiers; this will have length equal to the number of rows in data.
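The returned identifiers can be passed to the rest of the task API. A sketch (assuming a registered default controller, workers servicing the queue, and that rrq_task_wait and rrq_task_results accept a vector of ids, as in recent versions of rrq):

```r
d <- data.frame(x = 1:4)
ids <- rrq_task_create_bulk_expr(sqrt(x), d)
length(ids)  # equals nrow(d)

# Block until all tasks complete, then collect results in order
rrq_task_wait(ids)
results <- rrq_task_results(ids)
```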