If 0 is set as the contexts/nthreads configuration value, the number of
cores will also be auto-detected.
Signed-off-by: Guillaume Serrière <guillaume.serriere@esial.net>
By default, Surf computes the analytical models sequentially to share their
resources and update their actions. It is possible to run them in parallel,
using the \b surf/nthreads item (default value: 1). If you use a
-negative value, the amount of available cores is automatically
+negative or null value, the amount of available cores is automatically
detected and used instead.
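Assuming a simulator binary built against SimGrid and its standard `--cfg=item:value` command-line syntax (the binary and input file names below are placeholders), the model-update parallelism could be tuned like this:

```shell
# Run Surf model updates on 4 worker threads (default is 1, i.e. sequential).
./my_simulator platform.xml deployment.xml --cfg=surf/nthreads:4

# With this change, a null or negative value auto-detects the core count.
./my_simulator platform.xml deployment.xml --cfg=surf/nthreads:0
```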
Depending on the workload of the models and their complexity, you may get a
request to execute the user code in parallel. Several threads are
launched, each of them handling a share of the user contexts at each run. To
activate this, set the \b contexts/nthreads item to the amount of
-cores that you have in your computer (or -1 to have the amount of cores
-auto-detected).
+cores that you have in your computer (or lower than 1 to have
+the amount of cores auto-detected).
Even if you asked for several worker threads using the previous option,
you can request to start the parallel execution (and pay the