type the following on the command-line:
.. code-block:: shell

   my_simulator --cfg=Item:Value (other arguments)
Several ``--cfg`` command-line arguments can naturally be used. You can
also set configuration items directly in your platform file:
.. code-block:: xml

   <config>
     <prop id="Item" value="Value"/>
   </config>
Options can also be set from within your program, with :cpp:func:`simgrid::s4u::Engine::set_config` or :cpp:func:`MSG_config`.
.. code-block:: cpp

   #include <simgrid/s4u.hpp>

   int main(int argc, char *argv[]) {
     simgrid::s4u::Engine e(&argc, argv);

     e.set_config("Item:Value");

     // Rest of your code
   }
.. _options_list:

Existing Configuration Items
----------------------------
models for all existing resources.
- ``network/model``: specify the used network model. Possible values:

  - **LV08 (default one):** Realistic network analytic model
    (slow-start modeled by multiplying latency by 13.01, bandwidth by
    0.97; bottleneck sharing uses a payload of S=20537 for evaluating
    RTT). Described in `Accuracy Study and Improvement of Network
    Simulation in the SimGrid Framework
    <http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_.
  - **Constant:** Simplistic network model where all communications
    take a constant time (one second). This model provides the lowest
    realism, but is (marginally) faster.
  - **CM02:** Legacy network analytic model. Described in `A Network Model for
    Simulation of Grid Application
    <ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/RR/RR2002/RR2002-40.ps.gz>`_.
- **Reno/Reno2/Vegas:** Models from Steven H. Low using lagrange_solve instead of
lmm_solve (experts only; check the code for more info).
  - **NS3** (only available if you compiled SimGrid accordingly):
Use the packet-level network
simulators as network models (see :ref:`pls_ns3`).
This model can be :ref:`further configured <options_pls>`.

- ``cpu/model``: specify the used CPU model. We have only one model
for now:
allow parallel tasks because these beasts need some collaboration
between the network and CPU model. That is why ptask_07 is used by
default when using SimDag.

  - **default:** Default host model. Currently, CPU:Cas01 and
network:LV08 (with cross traffic enabled)
  - **compound:** Host model that is automatically chosen if you change
    the network and CPU model configurations.
- items ``network/optim`` and ``cpu/optim`` (both default to 'Lazy'):

  - **Lazy:** Lazy action management (partial invalidation in lmm +
heap in action remaining).
  - **TI:** Trace integration. Highly optimized mode when using
    availability traces (only available for the Cas01 CPU model for
    now).
- **Full:** Full update of remaining and variables. Slow but may be
useful when debugging.

- items ``network/maxmin-selective-update`` and
  ``cpu/maxmin-selective-update``: configure whether the underlying
  model should be lazily updated or not. It should have no impact on the
  computed timings, but should speed up the computation.

On Linux, the default TCP window sizes can be retrieved with the
following commands. Each prints a set of values,
and you should use the last one, which is the maximal size.
.. code-block:: shell

   cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
   cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
.. _cfg=network/bandwidth-factor:
.. _cfg=network/latency-factor:
.. _cfg=network/weight-S:

Correcting Important Network Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
computations. More details in @ref plugin_energy.
- **link_energy:** keeps track of the energy dissipated by
communications. More details in @ref SURF_plugin_energy.
- **host_load:** keeps track of the computational load.
More details in @ref plugin_load.
.. _options_modelchecking:

Configuring the Model-Checking
------------------------------
be executed using the simgrid-mc wrapper:
.. code-block:: shell

   simgrid-mc ./my_program
Safety properties are expressed as assertions using the function
:cpp:func:`void MC_assert(int prop)`.
.. _cfg=model-check/property:

Specifying a liveness property
..............................
.. code-block:: shell

   simgrid-mc ./my_program --cfg=model-check/property:<filename>
.. _cfg=model-check/checkpoint:

Going for Stateful Verification
...............................
raw implementation.
|br| Install the relevant library (e.g. with the
libboost-context-dev package on Debian/Ubuntu) and recompile
SimGrid.
- **raw:** amazingly fast factory using a context switching mechanism
of our own, directly implemented in assembly (only available for x86
and amd64 platforms for now) and without any unneeded system call.
.. _cfg=contexts/nthreads:
.. _cfg=contexts/parallel-threshold:
.. _cfg=contexts/synchro:

Running User Code in Parallel
.............................
your machine for no good reason. You probably prefer the other, less
eager schemes.

Configuring the Tracing
-----------------------
- SMPI simulator and traces for a space/time view:
.. code-block:: shell

   smpirun -trace ...
The ``-trace`` parameter for the smpirun script runs the simulation
- Add the contents of a textual file on top of the trace file as comment:
.. code-block:: shell

   --cfg=tracing/comment-file:my_file_with_additional_information.txt
Please use these two parameters (for comments) to make reproducible simulations.
application, the variable ``smpi/simulate-computation`` should be set
to no. This option just ignores the timings in your simulation; it
still executes the computations itself. If you want to stop SMPI from
doing that, you should check the SMPI_SAMPLE macros, documented in
Section :ref:`SMPI_adapting_speed`.
+------------------------------------+-------------------------+-----------------------------+
| Solution                           | Computations executed?  | Computations simulated?     |
+====================================+=========================+=============================+
| --cfg=smpi/simulate-computation:no | Yes                     | Never                       |
+------------------------------------+-------------------------+-----------------------------+
| --cfg=smpi/cpu-threshold:42        | Yes, in all cases       | If it lasts over 42 seconds |
+------------------------------------+-------------------------+-----------------------------+
http://simgrid.gforge.inria.fr/contrib/smpi-saturation-doc.html
.. _cfg=smpi/display-timing:

Reporting Simulation Time
.........................
increase the latency, i.e., values larger than or equal to 1 are valid here.
.. _cfg=smpi/papi-events:

Trace hardware counters with PAPI
.................................
files (See Section :ref:`tracing_tracing_options`).
.. warning::

   This feature currently requires superuser privileges, as registers
   are queried. Only use this feature with code you trust! Call
   smpirun for instance via ``smpirun -wrapper "sudo "
use. Example:
.. code-block:: shell

   ldd allpairf90
   ...
   libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007fbb4d91b000)
Then, it can be deallocated by calling ``SMPI_SHARED_FREE(mem)``.
When smpi/shared-malloc:global is used, the memory consumption problem
is solved, but it may induce too much load on the kernel's page table.
In this case, you should use huge pages so that we create only one
entry per MB of malloced data instead of one entry per 4 kB.
To activate this, you must mount a hugetlbfs on your system and allocate
at least one huge page:
.. code-block:: shell

   mkdir /home/huge
   sudo mount none /home/huge -t hugetlbfs -o rw,mode=0777
   sudo sh -c 'echo 1 > /proc/sys/vm/nr_hugepages' # echo more if you need more
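With the hugetlbfs mounted, you can then point SMPI at it; recent SimGrid versions expose this through the ``smpi/shared-malloc-hugepage`` configuration item (check that your version supports it; the platform file and binary below are placeholders):

```shell
smpirun --cfg=smpi/shared-malloc:global \
        --cfg=smpi/shared-malloc-hugepage:/home/huge \
        -np 8 -platform cluster.xml ./my_mpi_app
```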
to issue if your application contains such a loop:
.. code-block:: cpp

   while (MPI_Wtime() < some_time_bound) {
     /* some tests, with no communication nor computation */
   }
set variable simgrid::simix::breakpoint = 3.1416
.. _cfg=verbose-exit:

Behavior on Ctrl-C
..................
In this mode, your application is actually executed. Every computation
occurs for real while every communication is simulated. In addition,
the executions are automatically benchmarked so that their timings can
be applied within the simulator.
SMPI can also go offline by replaying a trace. :ref:`Trace replay
<SMPI_offline>` is usually way faster than online simulation (because
.. _SMPI_use_colls:
................................
Simulating Collective Operations
................................
- **ompi:** default selection logic of OpenMPI (version 3.1.2)
- **mpich**: default selection logic of MPICH (version 3.3b)
- **mvapich2**: selection logic of MVAPICH2 (version 1.9) tuned
  on the Stampede cluster
- **impi**: preliminary version of an Intel MPI selector (version
  4.1.3, also tuned for the Stampede cluster). Due to the closed-source
  nature of Intel MPI, some of the algorithms described in the
  documentation are not available, and are replaced by mvapich ones.
- **default**: legacy algorithms used in the earlier days of
  SimGrid. Do not use for serious performance studies.

.. todo:: default should not even exist.
Available Algorithms
....................
- mpich: use mpich selector for the alltoall operations
- mvapich2: use mvapich2 selector for the alltoall operations
- impi: use intel mpi selector for the alltoall operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: Described by Bruck et al. in `this paper <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- 2dmesh: organizes the nodes as a two dimensional mesh, and performs allgather
along the dimensions
- 3dmesh: adds a third dimension to the previous algorithm
- rdb: recursive doubling; extends the mesh to an n-th dimension, each one
containing two nodes
- pair: pairwise exchange, only works for power of 2 procs, size-1 steps,
each process sends and receives from the same process at each step
- mpich: use mpich selector for the alltoallv operations
- mvapich2: use mvapich2 selector for the alltoallv operations
- impi: use intel mpi selector for the alltoallv operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: same as alltoall
- pair: same as alltoall
- pair_light_barrier: same as alltoall
- mpich: use mpich selector for the barrier operations
- mvapich2: use mvapich2 selector for the barrier operations
- impi: use intel mpi selector for the barrier operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: all processes send to root
- ompi_two_procs: special case for two processes
- ompi_bruck: nsteps = sqrt(size), at each step, exchange data with rank-2^k and rank+2^k
- mpich: use mpich selector for the scatter operations
- mvapich2: use mvapich2 selector for the scatter operations
- impi: use intel mpi selector for the scatter operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: basic linear scatter
- ompi_binomial: binomial tree scatter
- mvapich2_two_level_direct: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a basic linear inter node stage. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
- mvapich2_two_level_binomial: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a binomial phase. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
- mpich: use mpich selector for the reduce operations
- mvapich2: use mvapich2 selector for the reduce operations
- impi: use intel mpi selector for the reduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- binomial: uses a binomial tree
- flat_tree: uses a flat tree
- NTSL: Non-topology-specific pipelined linear-bcast function
  0->1, 1->2, 2->3, ..., ->last node: in a pipeline fashion, with segments
of 8192 bytes
- scatter_gather: scatter then gather
- ompi_chain: openmpi reduce algorithms are built on the same basis, but the
topology is generated differently for each flavor
  chain = chain with spacing of size/2, and segment size of 64KB
- ompi_pipeline: same with pipeline (chain with spacing of 1), segment size
depends on the communicator size and the message size
- ompi_binary: same with binary tree, segment size of 32KB
- ompi_in_order_binary: same with binary tree, enforcing order on the
operations
- ompi_binomial: same with binomial algo (redundant with default binomial
one in most cases)
- ompi_basic_linear: basic algorithm, each process sends to root
- mvapich2_knomial: k-nomial algorithm. Default factor is 4 (mvapich2 selector adapts it through tuning)
- mvapich2_two_level: SMP-aware reduce, with default set to mpich both for intra and inter communicators. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
- rab: `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_'s reduce algorithm
MPI_Allreduce
^^^^^^^^^^^^^
- mpich: use mpich selector for the allreduce operations
- mvapich2: use mvapich2 selector for the allreduce operations
- impi: use intel mpi selector for the allreduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- lr: logical ring reduce-scatter then logical ring allgather
- rab1: variations of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: reduce_scatter then allgather
- rab2: variations of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: alltoall then allgather
- rab_rsag: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: recursive doubling
  reduce_scatter then recursive doubling allgather
- rdb: recursive doubling
- smp_binomial: binomial tree with smp: binomial intra
SMP reduce, inter reduce, inter broadcast then intra broadcast
- smp_binomial_pipeline: same with segment size = 4096 bytes
- smp_rdb: intra: binomial allreduce, inter: recursive
doubling allreduce, intra: binomial broadcast
- smp_rsag: intra: binomial allreduce, inter: reduce-scatter,
inter:allgather, intra: binomial broadcast
- smp_rsag_lr: intra: binomial allreduce, inter: logical ring
reduce-scatter, logical ring inter:allgather, intra: binomial broadcast
- smp_rsag_rab: intra: binomial allreduce, inter: rab
reduce-scatter, rab inter:allgather, intra: binomial broadcast
- mpich: use mpich selector for the reduce_scatter operations
- mvapich2: use mvapich2 selector for the reduce_scatter operations
- impi: use intel mpi selector for the reduce_scatter operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_recursivehalving: recursive halving version from OpenMPI
- ompi_ring: ring version from OpenMPI
- mpich_pair: pairwise exchange version from MPICH
- mpich: use mpich selector for the allgather operations
- mvapich2: use mvapich2 selector for the allgather operations
- impi: use intel mpi selector for the allgather operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- 2dmesh: see alltoall
- 3dmesh: see alltoall
- bruck: Described by Bruck et al. in `Efficient algorithms for all-to-all communications in multiport message-passing systems <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- GB: Gather - Broadcast (uses tuned version if specified)
- loosely_lr: Logical Ring with grouping by core (hardcoded, default
processes/node: 4)
- NTSLR: Non Topology Specific Logical Ring
- NTSLR_NB: Non Topology Specific Logical Ring, Non Blocking operations
- rdb: see alltoall
- rhv: only power of 2 number of processes
- ring: see alltoall
- SMP_NTS: gather to root of each SMP, then every root of each SMP node
  post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
using logical ring algorithm (hardcoded, default processes/SMP: 8)
- smp_simple: gather to root of each SMP, then every root of each SMP node
  post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
using simple algorithm (hardcoded, default processes/SMP: 8)
- spreading_simple: from node i, order of communications is i -> i + 1, i ->
i + 2, ..., i -> (i + p -1) % P
- ompi_neighborexchange: Neighbor Exchange algorithm for allgather.
  Described by Chen et al. in `Performance Evaluation of Allgather
Algorithms on Terascale Linux Cluster with Fast Ethernet <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1592302>`_
- mvapich2_smp: SMP aware algorithm, performing intra-node gather, inter-node allgather with one process/node, and bcast intra-node
- mpich: use mpich selector for the allgatherv operations
- mvapich2: use mvapich2 selector for the allgatherv operations
- impi: use intel mpi selector for the allgatherv operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- GB: Gatherv - Broadcast (uses tuned version if specified, but only for Bcast, gatherv is not tuned)
- pair: see alltoall
- ring: see alltoall
- mpich: use mpich selector for the bcast operations
- mvapich2: use mvapich2 selector for the bcast operations
- impi: use intel mpi selector for the bcast operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- arrival_pattern_aware_wait: same with slight variation
- binomial_tree: binomial tree exchange
- SMP_linear: linear algorithm with 8 cores/SMP
- ompi_split_bintree: binary tree algorithm from OpenMPI, with message split in 8192 bytes pieces
- ompi_pipeline: pipeline algorithm from OpenMPI, with message split in 128KB pieces
- mvapich2_inter_node: Inter node default mvapich worker
- mvapich2_intra_node: Intra node default mvapich worker
- mvapich2_knomial_intra_node: k-nomial intra node default mvapich worker. default factor is 4.
.. warning:: This is still very experimental.
An automatic version is available for each collective (or even as a
selector). This version loops over all other implemented algorithms for
the given collective, benchmarking the time taken by each process. It
then reports the quickest algorithm for each process, as well as the
globally quickest one. This is still unstable, and a few algorithms that
require a specific number of nodes may crash.
Adding an algorithm
and compare collective algorithms, you should set the
``tracing/smpi/internals`` configuration item to 1 instead of 0.
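For example, a traced run exposing the collective internals might look as follows (the platform file and binary names are placeholders):

```shell
smpirun -trace --cfg=tracing/smpi/internals:1 \
        -np 16 -platform cluster.xml ./my_alltoall_test
```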
Here are examples of two alltoall collective algorithm runs on 16 nodes,
the first one with a ring algorithm, the second with a pairwise one.
.. image:: /img/smpi_simgrid_alltoall_ring_16.png
:align: center

Alltoall on 16 Nodes with the Ring Algorithm.
.. image:: /img/smpi_simgrid_alltoall_pair_16.png
:align: center

Alltoall on 16 Nodes with the Pairwise Algorithm.
Our coverage of the interface is very decent, but still incomplete.
Given the size of the MPI standard, we may well never manage to
implement absolutely all existing primitives. Currently, we have
almost no support for I/O primitives, but we still pass a very large
amount of the MPICH coverage tests.
then this macro can dramatically shrink your memory consumption. For example,
that will be very beneficial to a matrix multiplication code, as all blocks will
be stored on the same area. Of course, the resulting computations will be useless,
but you can still study the application behavior this way.
Naturally, this won't work if your code is data-dependent. For example, a Jacobi
iterative computation depends on the result computed by the code to detect
SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution
times of your loop iterations are not stable.
This feature is demonstrated by the example file
`examples/smpi/NAS/ep.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/ep.c>`_
precious for that). Then, try to modify your model (of the platform,
of the collective operations) to reduce the most prominent differences.
If the discrepancies come from the computing time, try adapting the
``smpi/host-speed``: reduce it if your simulation runs faster than in
reality. If the errors come from the communication, then you need to
fiddle with your platform file.
explicitly told what compiler to use, as follows:
.. code-block:: shell

   SMPI_PRETEND_CC=1 ./configure CC=smpicc # here come the other configure parameters
   make
Although SMPI is often used for :ref:`online simulation
<SMPI_online>`, where the application is executed for real, you can
also go for offline simulation through trace replay.
SimGrid uses time-independent traces, in which each actor is given a
script of the actions to do sequentially. These trace files can
.. code-block:: shell

   $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. The file names don't match with the MPI ranks, but