3 Simulating MPI Applications
4 ===========================
SimGrid can not only :ref:`simulate algorithms <usecase_simalgo>`, but
also execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
12 C/C++/F77/F90 applications should run out of the box in this
13 environment. In fact, almost all proxy apps provided by the `ExaScale
14 Project <https://proxyapps.exascaleproject.org/>`_ only require minor
15 modifications to `run on top of SMPI
16 <https://github.com/simgrid/SMPI-proxy-apps/>`_.
This setting allows you to debug your MPI applications in a perfectly
reproducible setup, with no Heisenbugs. Enjoy the full clairvoyance
provided by the simulator while running what-if analyses on platforms
21 that are still to be built! Several `production-grade MPI applications
22 <https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
23 use SimGrid for their integration and performance testing.
25 MPI 2.2 is already partially covered: over 160 primitives are
26 supported. Some parts of the standard are still missing: MPI-IO, MPI3
27 collectives, spawning ranks, inter-communicators, and some others. If
28 one of the functions you use is still missing, please drop us an
29 email. We may find the time to implement it for you.
31 Multi-threading support is very limited in SMPI. Only funneled
32 applications are supported: at most one thread per rank can issue any
33 MPI calls. For better timing predictions, your application should even
34 be completely mono-threaded. Using OpenMP (or pthreads directly) may
greatly decrease SimGrid's predictive power. That may still be OK if you
36 only plan to debug your application in a reproducible setup, without
37 any performance-related analysis.
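In MPI terms, "funneled" corresponds to the ``MPI_THREAD_FUNNELED`` support
level. As a reminder, here is a minimal sketch (plain MPI, nothing
SMPI-specific) of what such an application looks like: worker threads may
exist, but only the main thread, the one that initialized MPI, issues MPI calls.

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char *argv[])
   {
     int provided;
     /* Request the FUNNELED level: worker threads may exist, but only the
      * main thread (the one calling MPI_Init_thread) will perform MPI calls. */
     MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
     if (provided < MPI_THREAD_FUNNELED)
       printf("Warning: funneled threading is not supported here.\n");

     /* OpenMP or pthread workers may run here for computations, as long as
      * they never call MPI themselves. */

     MPI_Finalize();
     return 0;
   }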
In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
the real systems, communication calls are intercepted and achieved by
the simulator, using its performance models.
47 To start using SMPI, you just need to compile your application with
48 ``smpicc`` instead of ``mpicc``, or with ``smpiff`` instead of
``mpif77``, or with ``smpicxx`` instead of ``mpicxx``. Then, the only
difference between the classical ``mpirun`` and the new ``smpirun`` is
that the latter requires an extra ``-platform`` parameter, pointing to a file
that describes the simulated platform on which your application shall run.
Internally, all ranks of your application are executed as threads of a
single Unix process. That's not a problem if your application has
global variables, because ``smpirun`` loads one application instance
per MPI rank as if it were another dynamic library. Then, MPI
communication calls are implemented using SimGrid: data is exchanged
through memory copies, while the simulator's performance models are used
to predict the time taken by each communication. Any computation
occurring between two MPI calls is benchmarked, and the corresponding
time is reported into the simulator.
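To make this concrete, here is a small hypothetical program (not part of the
template; the loop bound is arbitrary): the compute loop runs for real on your
machine and its measured duration is injected into the simulation, while the
``MPI_Send``/``MPI_Recv`` pair is handled by the simulator's network models.

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char *argv[])
   {
     int rank;
     double local = 0, remote = 0;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);

     for (long i = 1; i < 10000000; i++)  /* benchmarked: executed for real,  */
       local += 1.0 / i;                  /* its duration feeds the simulator */

     if (rank == 0) {                     /* simulated: no real network is used */
       MPI_Send(&local, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
     } else if (rank == 1) {
       MPI_Recv(&remote, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
       printf("Rank 1 received %f\n", remote);
     }
     MPI_Finalize();
     return 0;
   }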
64 .. image:: /tuto_smpi/img/big-picture.svg
67 Describing Your Platform
68 ------------------------
As an SMPI user, you are expected to provide a description of your
simulated platform, which is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
73 of :ref:`documentation <platform>` and examples (in the
74 `examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
source directory), so this section only shows a small set of introductory
examples.

Feel free to skip this section if you want to jump right away to using SMPI.
81 Simple Example with 3 hosts
82 ...........................
84 At the most basic level, you can describe your simulated platform as a
85 graph of hosts and network links. For instance:
87 .. image:: /tuto_smpi/3hosts.png
90 .. literalinclude:: /tuto_smpi/3hosts.xml
93 Note the way in which hosts, links, and routes are defined in
94 this XML. All hosts are defined with a speed (in Gflops), and links
95 with a latency (in us) and bandwidth (in MBytes per second). Other
units are possible and written as expected. Routes specify the list of
links encountered from one host to another. Routes are symmetrical by
default.
100 Cluster with a Crossbar
101 .......................
103 A very common parallel computing platform is a homogeneous cluster in
104 which hosts are interconnected via a crossbar switch with as many
ports as hosts, so that any disjoint pair of hosts can communicate
106 concurrently at full speed. For instance:
108 .. literalinclude:: ../../examples/platforms/cluster_crossbar.xml
One specifies a name prefix and suffix for each host, and then gives an
113 integer range. In the example the cluster contains 65535 hosts (!),
114 named ``node-0.simgrid.org`` to ``node-65534.simgrid.org``. All hosts
115 have the same power (1 Gflop/sec) and are connected to the switch via
links with the same bandwidth (125 MBytes/sec) and latency (50
microseconds).
123 Cluster with a Shared Backbone
124 ..............................
126 Another popular model for a parallel platform is that of a set of
127 homogeneous hosts connected to a shared communication medium, a
128 backbone, with some finite bandwidth capacity and on which
129 communicating host pairs can experience contention. For instance:
132 .. literalinclude:: ../../examples/platforms/cluster_backbone.xml
136 The only differences with the crossbar cluster above are the ``bb_bw``
137 and ``bb_lat`` attributes that specify the backbone characteristics
(here, a 500-microsecond latency and a 2.25 GByte/sec
139 bandwidth). This link is used for every communication within the
140 cluster. The route from ``node-0.simgrid.org`` to ``node-1.simgrid.org``
141 counts 3 links: the private link of ``node-0.simgrid.org``, the backbone
142 and the private link of ``node-1.simgrid.org``.
Torus Cluster
.............

Many HPC facilities use torus clusters to reduce sharing and
performance loss on concurrent internal communications. Modeling this
in SimGrid is very easy. Simply add a ``topology="TORUS"`` attribute
to your cluster. Configure it with the ``topo_parameters="X,Y,Z"``
attribute, where ``X``, ``Y`` and ``Z`` are the dimensions of your
torus.
158 .. image:: ../../examples/platforms/cluster_torus.svg
161 .. literalinclude:: ../../examples/platforms/cluster_torus.xml
164 Note that in this example, we used ``loopback_bw`` and
165 ``loopback_lat`` to specify the characteristics of the loopback link
166 of each node (i.e., the link allowing each node to communicate with
itself). We could have done so in the previous examples too. When no
168 loopback is given, the communication from a node to itself is handled
169 as if it were two distinct nodes: it goes twice through the private
170 link and through the backbone (if any).
Fat-Tree Cluster
................

This topology was introduced to reduce the number of links in the
cluster (and thus reduce its price) while maintaining a high bisection
bandwidth and a relatively low diameter. To model this in SimGrid,
pass a ``topology="FAT_TREE"`` attribute to your cluster. The
``topo_parameters=#levels;#downlinks;#uplinks;link count`` follows the
semantics introduced in `Figure 1B of this article
181 <http://webee.eedev.technion.ac.il/wp-content/uploads/2014/08/publication_574.pdf>`_.
Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2``

- That's a two-level cluster (thus the initial ``2``).
- Routers are connected to 4 elements below them, regardless of their
  level. Thus the ``4,4`` component that is used as
  ``#downlinks``. This means that the hosts are grouped by 4 on a
  given router, and that there are 4 level-1 routers (in the middle of
  the figure).
- Hosts are connected to only 1 router above them, while these routers
  are connected to 2 routers above them (thus the ``1,2`` used as
  ``#uplinks``).
- Hosts have only one link to their router, while every path between a
  level-1 router and a level-2 router uses 2 parallel links. Thus the
  ``1,2`` that is used as ``link count``.
198 .. image:: ../../examples/platforms/cluster_fat_tree.svg
201 .. literalinclude:: ../../examples/platforms/cluster_fat_tree.xml
Dragonfly Cluster
.................

This topology was introduced to further reduce the number of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. It's based on the implementation of the topology used on
Cray XC systems, described in the paper
214 `Cray Cascade: A scalable HPC system based on a Dragonfly network <https://dl.acm.org/citation.cfm?id=2389136>`_.
System description follows the format ``topo_parameters=#groups;#chassis;#routers;#nodes``.
217 For example, ``3,4 ; 3,2 ; 3,1 ; 2``:
- ``3,4``: There are 3 groups with 4 links between each (blue level).
  Links to the nth group are attached to the nth router of the group
  in our implementation.
- ``3,2``: In each group, there are 3 chassis with 2 links between each nth router
  of each group (black level).
- ``3,1``: In each chassis, 3 routers are connected together with a single link
  (green level).
- ``2``: Each router has two nodes attached (single link).
228 .. image:: ../../examples/platforms/cluster_dragonfly.svg
231 .. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml
We only glanced over the abilities offered by SimGrid to describe the
platform topology. Other networking zones model non-HPC platforms
(such as wide area networks, ISP networks comprising set-top boxes, or
even your own routing schema). You can interconnect several networking
zones in your platform to form a tree of zones, which is both a time-
and memory-efficient representation of distributed platforms. Please
head to the dedicated :ref:`documentation <platform>` for more
information.
Using SMPI
----------

It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play with.
Using Docker
............

The easiest way to take the tutorial is to use the dedicated Docker
image. Once you have `installed Docker itself
257 <https://docs.docker.com/install/>`_, simply do the following:
259 .. code-block:: shell
261 docker pull simgrid/tuto-smpi
262 docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash
264 This will start a new container with all you need to take this
265 tutorial, and create a ``smpi-tutorial`` directory in your home on
266 your host machine that will be visible as ``/source/tutorial`` within the
267 container. You can then edit the files you want with your favorite
268 editor in ``~/smpi-tutorial``, and compile them within the
269 container to enjoy the provided dependencies.
Any change to the container outside of ``/source/tutorial`` will be lost
274 when you log out of the container, so don't edit the other files!
276 All needed dependencies are already installed in this container
(SimGrid, the C/C++/Fortran compilers, make, pajeng, R and pajengr). Since Vite is
only optional in this tutorial, it is not installed, to keep the image small.
The container also includes the example platform files from the
282 previous section as well as the source code of the NAS Parallel
283 Benchmarks. These files are available under
``/source/simgrid-template-smpi`` in the image. You should copy them to
285 your working directory when you first log in:
287 .. code-block:: shell
289 cp -r /source/simgrid-template-smpi/* /source/tutorial
292 Using your Computer Natively
293 ............................
295 To take the tutorial on your machine, you first need to :ref:`install
296 SimGrid <install>`, the C/C++/Fortran compilers and also ``pajeng`` to
297 visualize the traces. You may want to install `Vite
298 <http://vite.gforge.inria.fr/>`_ to get a first glance at the
299 traces. The provided code template requires make to compile. On
300 Debian and Ubuntu for example, you can get them as follows:
302 .. code-block:: shell
304 sudo apt install simgrid pajeng make gcc g++ gfortran vite
For R analysis of the produced traces, you may want to install R
and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package.
309 .. code-block:: shell
311 sudo apt install r-base r-cran-devtools cmake flex bison
312 Rscript -e "library(devtools); install_github('schnorr/pajengr');"
314 To take this tutorial, you will also need the platform files from the
315 previous section as well as the source code of the NAS Parallel
316 Benchmarks. Just clone `this repository
317 <https://framagit.org/simgrid/simgrid-template-smpi>`_ to get them all:
319 .. code-block:: shell
321 git clone https://framagit.org/simgrid/simgrid-template-smpi.git
322 cd simgrid-template-smpi/
324 If you struggle with the compilation, then you should double check
your :ref:`SimGrid installation <install>`. If needed, please refer to
326 the :ref:`Troubleshooting your Project Setup
327 <install_yours_troubleshooting>` section.
It is time to simulate your first MPI program. Use the simplistic
`roundtrip.c
334 <https://framagit.org/simgrid/simgrid-template-smpi/raw/master/roundtrip.c?inline=false>`_
335 that comes with the template.
337 .. literalinclude:: /tuto_smpi/roundtrip.c
340 Compiling and Executing
341 .......................
343 Compiling the program is straightforward (double check your
344 :ref:`SimGrid installation <install>` if you get an error message):
347 .. code-block:: shell
349 $ smpicc -O3 roundtrip.c -o roundtrip
352 Once compiled, you can simulate the execution of this program on 16
353 nodes from the ``cluster_crossbar.xml`` platform as follows:
355 .. code-block:: shell
357 $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile ./roundtrip
359 - The ``-np 16`` option, just like in regular MPI, specifies the
360 number of MPI processes to use.
361 - The ``-hostfile cluster_hostfile`` option, just like in regular
362 MPI, specifies the host file. If you omit this option, ``smpirun``
363 will deploy the application on the first machines of your platform.
364 - The ``-platform cluster_crossbar.xml`` option, **which doesn't exist
  in regular MPI**, specifies the platform configuration to be simulated.
367 - At the end of the line, one finds the executable name and
368 command-line arguments (if any -- roundtrip does not expect any arguments).
370 Feel free to tweak the content of the XML platform file and the
371 program to see the effect on the simulated execution time. It may be
372 easier to compare the executions with the extra option
373 ``--cfg=smpi/display-timing:yes``. Note that the simulation accounts
374 for realistic network protocol effects and MPI implementation
375 effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message that is 1 byte larger may lead to a
significantly higher execution time).
379 Lab 1: Visualizing LU
380 ---------------------
382 We will now simulate a larger application: the LU benchmark of the NAS
383 suite. The version provided in the code template was modified to
384 compile with SMPI instead of the regular MPI. Compare the difference
385 between the original ``config/make.def.template`` and the
386 ``config/make.def`` that was adapted to SMPI. We use ``smpiff`` and
``smpicc`` as compilers, and do not pass any additional libraries.
Now compile and execute the LU benchmark, class S (i.e., for `small
instances <https://www.nas.nasa.gov/publications/npb_problem_sizes.html>`_) with
4 nodes.
394 .. code-block:: shell
396 $ make lu NPROCS=4 CLASS=S
398 $ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4
401 To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later analysis:
405 .. code-block:: shell
407 smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4
409 You can then produce a Gantt Chart with the following R chunk. You can
either copy/paste it in an R session, or `turn it into an Rscript executable
411 <https://swcarpentry.github.io/r-novice-inflammation/05-cmdline/>`_ to
412 run it again and again.
.. code-block:: r

   library(pajengr)
   library(ggplot2)

   df_state = pajeng_read("lu.S.4.trace")
   names(df_state) = c("Type", "Rank", "Container", "Start", "End", "Duration", "Level", "State");
   df_state = df_state[!(names(df_state) %in% c("Type","Container","Level"))]
   df_state$Rank = as.numeric(gsub("rank-","",df_state$Rank))

   # Draw the Gantt Chart and render it (Rscript saves it as Rplots.pdf)
   gc = ggplot(data=df_state) + geom_rect(aes(xmin=Start, xmax=End, ymin=Rank, ymax=Rank+1,fill=State))
   plot(gc)
432 This produces a file called ``Rplots.pdf`` with the following
433 content. You can find more visualization examples `online
434 <https://simgrid.org/contrib/R_visualization.html>`_.
436 .. image:: /tuto_smpi/img/lu.S.4.png
439 Lab 2: Tracing and Replay of LU
440 -------------------------------
442 Now compile and execute the LU benchmark, class A, with 32 nodes.
444 .. code-block:: shell
446 $ make lu NPROCS=32 CLASS=A
This takes several minutes to simulate, because all code from all
processes has to be actually executed, and everything is serialized.
451 SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application, and
replay it on a different platform with the same number of nodes. The
454 replay is much faster than live simulation, as the computations are
455 skipped (the application must be network-dependent for this to work).
457 You can even generate the trace during the live simulation as follows:
459 .. code-block:: shell
461 $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
463 The produced trace is composed of a file ``LU.A.32`` and a folder
464 ``LU.A.32_files``. You can replay this trace with SMPI thanks to ``smpirun``.
465 For example, the following command replays the trace on a different platform:
467 .. code-block:: shell
469 $ smpirun -np 32 -platform ../cluster_crossbar.xml -hostfile ../cluster_hostfile -replay LU.A.32
471 All the outputs are gone, as the application is not really simulated
472 here. Its trace is simply replayed. But if you visualize the live
473 simulation and the replay, you will see that the behavior is
474 unchanged. The simulation does not run much faster on this very
475 example, but this becomes very interesting when your application
476 is computationally hungry.
Lab 3: Execution Sampling on Matrix Multiplication example
------------------------------------------------------------
486 The second method to speed up simulations is to sample the computation
487 parts in the code. This means that the person doing the simulation
needs to know the application and identify parts that are compute-intensive
and take time, while being regular enough not to ruin
simulation accuracy. Furthermore, there should not be any MPI calls
491 inside such parts of the code.
For this part, use the `gemm_mpi.cpp
494 <https://gitlab.com/PRACE-4IP/CodeVault/raw/master/hpc_kernel_samples/dense_linear_algebra/gemm/mpi/src/gemm_mpi.cpp>`_
495 example, which is provided by the `PRACE Codevault repository
496 <http://www.prace-ri.eu/prace-codevault/>`_.
The computing part of this example is the matrix multiplication routine:
500 .. literalinclude:: /tuto_smpi/gemm_mpi.cpp
505 .. code-block:: shell
507 $ smpicc -O3 gemm_mpi.cpp -o gemm
508 $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/running-power:1000000000 ./gemm
510 This should end quite quickly, as the size of each matrix is only 1000x1000.
But what happens if we want to simulate larger runs?
Replace the size by 2000, 3000, and try again.
The simulation time increases a lot, even though no MPI calls are performed anymore, only computation.
516 The ``--cfg=smpi/display-timing`` option gives more details about execution,
and advises you to use sampling if the time spent in computing loops seems too high.
519 The ``--cfg=smpi/running-power:1000000000`` option sets the speed of the processor used for
running the simulation. Here we say that its speed is the same as that of the
processors we are simulating (1 Gflop/s), so that 1 second of computation is injected
as 1 second in the simulation (the measured time is converted into an amount of flops
using this value, and that amount is then injected into the simulated host).
524 .. code-block:: shell
526 [5.568556] [smpi_kernel/INFO] Simulated time: 5.56856 seconds.
528 The simulation took 24.9403 seconds (after parsing and platform setup)
529 24.0764 seconds were actual computation of the application
530 [5.568556] [smpi_kernel/INFO] More than 75% of the time was spent inside the application code.
531 You may want to use sampling functions or trace replay to reduce this.
So in our case (size 3000), the simulation ran for 25 seconds, and the simulated time was 5.57s at the end.
534 Computation by itself took 24 seconds, and can quickly grow with larger sizes
535 (as computation is really performed, there will be variability between similar runs).
537 SMPI provides sampling macros in order to accelerate simulation by sampling iterations
of large computation loops, and skipping computation after a certain number of iterations,
or when the sampling is stable enough.
The two macros differ only slightly:
- ``SMPI_SAMPLE_GLOBAL``: the specified number of samples is produced by all processors
- ``SMPI_SAMPLE_LOCAL``: each process executes a specified number of iterations
546 So if the size of the computed part varies between processes (imbalance),
547 it's safer to use the LOCAL one.
To use one of them, replace the outer for loop of the multiply routine:

.. code-block:: c

   for (int i = istart; i <= iend; ++i)

by:

.. code-block:: c

   SMPI_SAMPLE_GLOBAL(int i = istart, i <= iend, ++i, 10, 0.005)
The first three parameters are the ones from the loop, while the last two are for sampling.
They mean that at most 10 iterations will be performed, and that the sampling phase can be
exited earlier if a certain stability is reached after fewer samples.
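For instance, the sampled multiply kernel could look like the following
sketch (function and variable names are illustrative and may not match the
exact ``gemm_mpi.cpp`` source; the ``SMPI_SAMPLE_*`` macros are assumed to be
available through SimGrid's ``smpi/smpi.h`` header):

.. code-block:: c

   #include <smpi/smpi.h>   /* assumed location of the SMPI_SAMPLE_* macros */

   /* At most 10 iterations of the outer loop are really executed and timed;
    * the duration of the remaining ones is extrapolated by the simulator,
    * stopping even earlier once the 0.005 stability threshold is reached. */
   static void multiply(double *c, const double *a, const double *b,
                        int size, int istart, int iend)
   {
     SMPI_SAMPLE_GLOBAL(int i = istart, i <= iend, ++i, 10, 0.005) {
       for (int j = 0; j < size; j++)
         for (int k = 0; k < size; k++)
           c[i * size + j] += a[i * size + k] * b[k * size + j];
     }
   }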
565 Now run the code again with various sizes and parameters and check the time taken for the
566 simulation, as well as the resulting simulated time.
568 .. code-block:: shell
570 [5.575691] [smpi_kernel/INFO] Simulated time: 5.57569 seconds.
571 The simulation took 1.23698 seconds (after parsing and platform setup)
572 0.0319454 seconds were actual computation of the application
574 In this case the simulation only took 1.2 seconds, while the simulated time
575 stayed almost identical.
Obviously, the results of the computation will be altered since most of it is skipped,
so these macros cannot be used when results are critical for the application behavior
(convergence estimation, for instance, will be wrong on some codes).
Lab 4: Memory folding on large allocations
--------------------------------------------
Another issue that can be encountered when simulating with SMPI is lack of memory.
Indeed, we are executing all MPI processes on a single node, which can lead to crashes.
587 We will use the DT benchmark of the NAS suite to illustrate how to avoid such issues.
With 85 processes and class C, the simulated DT benchmark will try to allocate 35GB of memory,
which may not be available on the node you are using.
To avoid this, we can simply replace the largest calls to ``malloc`` and ``free`` by calls
to ``SMPI_SHARED_MALLOC`` and ``SMPI_SHARED_FREE``.
This means that all processes will share one single instance of this buffer.
As with sampling, the results will be altered, and this should not be used for control structures.
In the DT example, there are three different calls to ``malloc`` in the file, and one of them is
for a structure that is actually needed.
Find it, and replace the two other ones by ``SMPI_SHARED_MALLOC`` (there is only one ``free`` to
replace for both of them), as sketched below.
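The transformation itself is mechanical. On a generic allocation it looks like
the following sketch (the buffer name, its size, and the include path of the
macros are assumptions for illustration, not taken from the DT sources):

.. code-block:: c

   #include <stdlib.h>
   #include <smpi/smpi.h>   /* assumed to declare SMPI_SHARED_MALLOC / SMPI_SHARED_FREE */

   #define NELEMS (1000 * 1000)   /* made-up size, for illustration only */

   void private_version(void)
   {
     /* Before: every one of the 85 ranks allocates its own private buffer. */
     double *buf = malloc(NELEMS * sizeof(double));
     /* ... computations on buf ... */
     free(buf);
   }

   void folded_version(void)
   {
     /* After: all ranks transparently share one single instance of the buffer,
      * so the memory is allocated only once on the simulation host. Its content
      * becomes meaningless, which is fine for pure payload data. */
     double *buf = SMPI_SHARED_MALLOC(NELEMS * sizeof(double));
     /* ... computations on buf ... */
     SMPI_SHARED_FREE(buf);
   }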
Once this is done, you can run:
602 .. code-block:: shell
604 $ make dt NPROCS=85 CLASS=C
606 $ smpirun -np 85 -platform ../cluster_backbone.xml bin/dt.C.x BH
The simulation should finish without swapping or crashing (ignore the warning about the return value).
If control structures are also problematic, you can use the ``SMPI_PARTIAL_SHARED_MALLOC(size, offsets, offsetscount)``
macro, which shares only specific parts of the buffer between processes
and keeps private memory for the important parts.
Such a buffer can be freed afterwards with ``SMPI_SHARED_FREE``, as in the sketch below.
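A hypothetical usage sketch, assuming that the offsets array lists the byte
ranges (as start/end pairs) that may safely be shared while everything else
stays private:

.. code-block:: c

   #include <stddef.h>
   #include <smpi/smpi.h>   /* assumed to declare SMPI_PARTIAL_SHARED_MALLOC */

   void partial_example(void)
   {
     /* 1000-byte buffer: keep the first 100 bytes private (say, a header that
      * drives control flow), and fold the remaining 900 bytes across ranks. */
     size_t shared_offsets[] = {100, 1000};  /* one shared block: bytes [100, 1000) */
     char *buf = SMPI_PARTIAL_SHARED_MALLOC(1000, shared_offsets, 1);
     /* ... bytes 0-99 behave like regular private memory, the rest is folded ... */
     SMPI_SHARED_FREE(buf);
   }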
619 You may also be interested in the `SMPI reference article
620 <https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The :ref:`SMPI
reference documentation <SMPI_doc>` covers much more content than
this tutorial.
Finally, we regularly use SimGrid in our teachings on MPI. This way,
our students can experiment with platforms that they do not have access
to, and the associated visualization tools help them to understand
their work. The whole material is available online, in a separate
629 project: the `SMPI CourseWare <https://simgrid.github.io/SMPI_CourseWare/>`_.
631 .. LocalWords: SimGrid