my_simulator --cfg=Item:Value (other arguments)
\endverbatim
-Several \c --cfg command line arguments can naturally be used. If you
+Several `--cfg` command line arguments can naturally be used. If you
need to include spaces in the argument, don't forget to quote it. You
can even escape embedded quotes (write \' for ' if the argument itself
is enclosed in single quotes).
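For example (reusing the generic \c Item option name from above), a value containing spaces is passed by quoting the whole argument so that the shell hands it over as a single token:

```shell
# Without the quotes, the shell would split this into several arguments:
my_simulator '--cfg=Item:a value with spaces'
```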
Within that tag, you can pass one or several \c \<prop\> tags to specify
the configuration to use. For example, setting \c Item to \c Value
can be done by adding the following to the beginning of your platform
-file: \verbatim
+file:
+\verbatim
<config>
<prop id="Item" value="Value"/>
</config>
\endverbatim
A last solution is to pass your configuration directly using the C
-interface. Unfortunately, this path is not really easy to use right
-now, and you mess directly with surf internal variables as follows. Check the
-\ref XBT_config "relevant page" for details on all the functions you
-can use in this context, \c _surf_cfg_set being the only configuration set
-currently used in SimGrid. \code
+interface. If you happen to use the MSG interface, this is very easy
+with the MSG_config() function. If you do not use MSG, that's a bit
+more complex, as you have to mess with the internal configuration set
+directly as follows. Check the \ref XBT_config "relevant page" for
+details on all the functions you can use in this context,
+\c _sg_cfg_set being the only configuration set currently used in
+SimGrid.
+
+@code
#include <xbt/config.h>
+#include <simdag/simdag.h> /* for SD_init() */
-extern xbt_cfg_t _surf_cfg_set;
+extern xbt_cfg_t _sg_cfg_set;
int main(int argc, char *argv[]) {
- MSG_init(&argc, argv);
+ SD_init(&argc, argv);
- xbt_cfg_set_parse(_surf_cfg_set,"Item:Value");
+ /* Prefer MSG_config() if you use MSG!! */
+ xbt_cfg_set_parse(_sg_cfg_set,"Item:Value");
// Rest of your code
}
-\endcode
+@endcode
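As a comparison, the MSG path mentioned above is much shorter. A sketch (assuming the usual MSG setup, with MSG_config() taking the item name and its value as strings):

```c
#include <msg/msg.h>

int main(int argc, char *argv[]) {
  MSG_init(&argc, argv);
  /* Same effect as --cfg=Item:Value on the command line */
  MSG_config("Item", "Value");
  /* Rest of your code */
  return 0;
}
```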
\section options_model Configuring the platform models
By default, Surf computes the analytical models sequentially to share their
resources and update their actions. It is possible to run them in parallel,
using the \b surf/nthreads item (default value: 1). If you use a
-negative value, the amount of available cores is automatically
+negative or zero value, the number of available cores is automatically
detected and used instead.
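For example, on a quad-core machine the models can be updated by four threads, or the detection can be left to SimGrid:

```shell
my_simulator --cfg=surf/nthreads:4 ...   # use 4 worker threads
my_simulator --cfg=surf/nthreads:-1 ...  # auto-detect the core count
```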
Depending on the workload of the models and their complexity, you may get a
It is possible to specify a timing gap between consecutive emission on
the same network card through the \b network/sender_gap item. This
is still under investigation as of this writing, and the default value is
-to wait 0 seconds between emissions (no gap applied).
+to wait 10 microseconds (1e-5 seconds) between emissions.
\subsubsection options_model_network_asyncsend Simulating asynchronous send
with a call to \ref MSG_mailbox_set_async . For MSG, all messages sent to this
mailbox will have this behavior, so consider using two mailboxes if needed.
+This value needs to be smaller than or equal to the threshold set at
+\ref options_model_smpi_detached , because asynchronous messages are
+meant to be detached as well.
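For example, to have all messages of at most 16384 bytes sent asynchronously (a value that stays below the default 65536-byte detached threshold described in \ref options_model_smpi_detached):

```shell
my_simulator --cfg=smpi/async_small_thres:16384 ...
```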
+
\subsubsection options_pls Configuring packet-level pseudo-models
When using the packet-level pseudo-models, several specific
If you want to push the scalability limits of your code, you really
want to reduce the \b contexts/stack_size item. Its default value
-is 128 (in Kib), while our Chord simulation works with stacks as small
-as 16 Kib, for example. For the thread factory, the default value
+is 8192 (in KiB), while our Chord simulation works with stacks as small
+as 16 KiB, for example. For the thread factory, the default value
is the system default; if it is too large or too small, it has to be
set with this parameter.
request to execute the user code in parallel. Several threads are
launched, each of them handling as many user contexts as possible at
each run. To activate this, set the \b contexts/nthreads item to the amount of
-cores that you have in your computer (or -1 to have the amount of cores
-auto-detected).
+cores that you have in your computer (or lower than 1 to have
+the amount of cores auto-detected).
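Both items can be combined; for example, to run the user contexts on all auto-detected cores with 16 KiB stacks:

```shell
my_simulator --cfg=contexts/stack_size:16 --cfg=contexts/nthreads:-1 ...
```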
Even if you asked several worker threads using the previous option,
you can request to start the parallel execution (and pay the
- Any SimGrid-based simulator (MSG, SimDag, SMPI, ...) and raw traces:
\verbatim
---cfg=tracing:1 --cfg=tracing/uncategorized:1 --cfg=triva/uncategorized:uncat.plist
+--cfg=tracing:yes --cfg=tracing/uncategorized:yes --cfg=triva/uncategorized:uncat.plist
\endverbatim
The first parameter activates the tracing subsystem, the second
tells it to trace host and link utilization (without any
- MSG or SimDag-based simulator and categorized traces (you need to declare categories and classify your tasks according to them)
\verbatim
---cfg=tracing:1 --cfg=tracing/categorized:1 --cfg=triva/categorized:cat.plist
+--cfg=tracing:yes --cfg=tracing/categorized:yes --cfg=triva/categorized:cat.plist
\endverbatim
The first parameter activates the tracing subsystem, the second
tells it to trace host and link categorized utilization and the
smpirun -trace ...
\endverbatim
The <i>-trace</i> parameter for the smpirun script runs the
-simulation with --cfg=tracing:1 and --cfg=tracing/smpi:1. Check the
+simulation with --cfg=tracing:yes and --cfg=tracing/smpi:yes. Check the
smpirun's <i>-help</i> parameter for additional tracing options.
Sometimes you might want to put additional information on the trace to
Please use these two parameters (for comments) to make reproducible
simulations. For additional details about this and all tracing
-options, check See the \ref tracing_tracing_options "Tracing
-Configuration Options subsection".
+options, see \ref tracing_tracing_options.
\section options_smpi Configuring SMPI
simulation kernel (default value: 1e-6). Please note that in some
circumstances, this optimization can hinder the simulation accuracy.
+If the "application" is in fact doing a "live replay" of another MPI
+app (e.g., ScalaTrace's replay tool, various on-line simulators that
+run an app at scale), the computation due to the replay logic should
+not be simulated by SMPI. In that case, the
+\b smpi/simulation_computation item can be set to 'no', causing all the
+compute bursts between MPI calls to be ignored by SMPI. Then only the
+communications are simulated. This implies adding explicit calls to
+\c smpi_execute() in the "application" to simulate computations.
+
+
\subsection options_smpi_timing Reporting simulation time
Most of the time, you run MPI code through SMPI to compute the time it
Simulation time: 1e3 seconds.
\endverbatim
+\subsection options_smpi_global Automatic privatization of global variables
+
+MPI executables are meant to be executed in separate processes, but SMPI is
+executed in only one process. Global variables from executables will be placed
+in the same memory zone and shared between processes, causing hard-to-find bugs.
+To avoid this, several options are possible:
+ - Manual editing of the code, for example to add the __thread keyword before data
+ declaration, which allows the resulting code to work with SMPI, but only
+ if the thread factory (see \ref options_virt_factory) is used, as global
+ variables are then placed in the TLS (thread local storage) segment.
+ - Source-to-source transformation, to add a level of indirection
+ to the global variables. SMPI does this for F77 codes compiled with smpiff,
+ and used to provide Coccinelle scripts for C codes, but these are not functional anymore.
+ - Compilation pass, to have the compiler automatically put the data in
+ an adapted zone.
+ - Runtime automatic switching of the data segments. SMPI stores a copy of
+ each global data segment for each process, and at each context switch replaces
+ the actual data with its copy from the right process. This mechanism uses mmap,
+ and is for now limited to systems supporting this functionality (all Linux
+ and some BSD should be compatible).
+ Another limitation is that SMPI only accounts for global variables defined in
+ the executable. If the processes use external global variables from dynamic
+ libraries, they won't be switched correctly. To avoid this, using static
+ linking is advised (but not with the simgrid library, to avoid replicating
+ its own global variables).
+
+ To use this runtime automatic switching, the \b smpi/privatize_global_variables
+ item should be set to \c yes.
+
+
+
+\subsection options_model_smpi_detached Simulating MPI detached send
+
+This threshold specifies the size in bytes under which the send will return
+immediately. This is different from the threshold detailed in \ref options_model_network_asyncsend
+because the message is not actually sent when the send is posted. SMPI still waits for the
+corresponding receive to be posted to perform the communication operation. This threshold can be set
+by changing the \b smpi/send_is_detached item. The default value is 65536.
+
+\subsection options_model_smpi_collectives Simulating MPI collective algorithms
+
+SMPI implements more than 100 different algorithms for MPI collective communication, to accurately
+simulate the behavior of most of the existing MPI libraries. The \b smpi/coll_selector item can be used
+to select the decision logic of either the OpenMPI or MPICH libraries (values: ompi or mpich; by default
+SMPI uses naive versions of the collective operations). Each collective operation can also be manually
+selected with \b smpi/collective_name:algo_name. Available algorithms are listed in
+\ref SMPI_collective_algorithms .
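For example, with smpirun, one can borrow MPICH's selection logic globally while forcing one specific collective (collective_name and algo_name are the placeholders from the pattern above; the platform, hostfile, and binary names are illustrative):

```shell
smpirun --cfg=smpi/coll_selector:mpich \
        --cfg=smpi/collective_name:algo_name \
        -np 4 -platform platform.xml -hostfile hostfile ./my_mpi_app
```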
+
+\subsection options_model_smpi_computation_simulation Benchmarking/simulating application computation
+
+By default, SMPI benchmarks computational phases of the simulated application (i.e., CPU bursts in
+between MPI calls) so that these phases can be simulated. In some cases, however, one may wish to
+disable simulation of application computation. This is the case when SMPI is used not to simulate
+an MPI application, but instead an MPI code that performs "live replay" of another MPI app (e.g.,
+ScalaTrace's replay tool, various on-line simulators that run an app at scale). In this case the
+computation of the replay/simulation logic should not be simulated by SMPI. Instead, the replay
+tool or on-line simulator will issue "computation events", which correspond to the computations of
+the actual MPI application being replayed/simulated. At the moment, these computation events can be simulated using SMPI by
+calling internal smpi_execute*() functions.
+
+To disable the benchmarking/simulation of computation in the simulated
+application, the \b smpi/simulation_computation item should be set to \c no.
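In practice, for such a replay tool this amounts to (the binary, platform, and hostfile names are illustrative):

```shell
smpirun --cfg=smpi/simulation_computation:no -np 4 \
        -platform platform.xml -hostfile hostfile ./my_replay_tool
```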
+
\section options_generic Configuring other aspects of SimGrid
\subsection options_generic_path XML file inclusion path
- \c surf/nthreads: \ref options_model_nthreads
+- \c smpi/simulation_computation: \ref options_model_smpi_computation_simulation
- \c smpi/running_power: \ref options_smpi_bench
- \c smpi/display_timing: \ref options_smpi_timing
- \c smpi/cpu_threshold: \ref options_smpi_bench
- \c smpi/async_small_thres: \ref options_model_network_asyncsend
+- \c smpi/send_is_detached: \ref options_model_smpi_detached
+- \c smpi/coll_selector: \ref options_model_smpi_collectives
+- \c smpi/privatize_global_variables: \ref options_smpi_global
- \c path: \ref options_generic_path
- \c verbose-exit: \ref options_generic_exit