3 Simulating MPI Applications
4 ===========================
.. warning:: This document is still at an early stage. You can try to
   take this tutorial, but should not be surprised if things fall short.
   It will be completed for the next release, v3.22, released by the end
   of 2018.
SimGrid can not only :ref:`simulate algorithms <usecase_simalgo>`, but
also execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
17 C/C++/F77/F90 applications should run out of the box in this
18 environment. In fact, almost all proxy apps provided by the `ExaScale
19 Project <https://proxyapps.exascaleproject.org/>`_ only require minor
20 modifications to `run on top of SMPI
21 <https://github.com/simgrid/SMPI-proxy-apps/>`_.
This setting lets you debug your MPI applications in a perfectly
reproducible setup, with no Heisenbugs. Enjoy the full clairvoyance
provided by the simulator while running what-if analyses on platforms
26 that are still to be built! Several `production-grade MPI applications
27 <https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
28 use SimGrid for their integration and performance testing.
30 MPI 2.2 is already partially covered: over 160 primitives are
supported. Some parts of the standard are still missing: MPI-IO, MPI-3
32 collectives, spawning ranks, inter-communicators, and some others. If
33 one of the functions you use is still missing, please drop us an
34 email. We may find the time to implement it for you.
Multi-threading support is very limited in SMPI. Only funneled
applications are supported: at most one thread per rank may issue
MPI calls. For better timing predictions, your application should even
be completely mono-threaded. Using OpenMP (or pthreads directly) may
greatly decrease SimGrid's predictive power. That may still be OK if you
41 only plan to debug your application in a reproducible setup, without
42 any performance-related analysis.
In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
the real system, communication calls are intercepted and achieved by
the simulator.
52 To start using SMPI, you just need to compile your application with
53 ``smpicc`` instead of ``mpicc``, or with ``smpiff`` instead of
54 ``mpiff``, or with ``smpicxx`` instead of ``mpicxx``. Then, the only
55 difference between the classical ``mpirun`` and the new ``smpirun`` is
56 that it requires a new parameter ``-platform`` with a file describing
57 the simulated platform on which your application shall run.
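For instance, a typical session could look as follows (``my_app.c`` and
``my_platform.xml`` are hypothetical names, used here only for
illustration):

.. code-block:: shell

   # Compile with the SMPI wrapper instead of mpicc
   smpicc -O3 my_app.c -o my_app

   # Run on a simulated platform described in an XML file
   smpirun -np 4 -platform my_platform.xml ./my_app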
Internally, all ranks of your application are executed as threads of a
single Unix process. That's not a problem if your application has
global variables, because ``smpirun`` loads one application instance
per MPI rank, as if it were another dynamic library. Then, MPI
communication calls are implemented using SimGrid: data is exchanged
through memory copy, while the simulator's performance models are used
to predict the time taken by each communication. Any computations
occurring between two MPI calls are benchmarked, and the corresponding
time is reported into the simulator.
69 .. image:: /tuto_smpi/img/big-picture.svg
72 Describing Your Platform
73 ------------------------
As an SMPI user, you are expected to provide a description of your
simulated platform, which is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
of :ref:`documentation <platform>` and examples (in the
`examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
source directory), and this section only shows a small set of introductory
examples.

Feel free to skip this section if you want to jump right away to usage
examples.
86 Simple Example with 3 hosts
87 ...........................
89 At the most basic level, you can describe your simulated platform as a
90 graph of hosts and network links. For instance:
92 .. image:: /tuto_smpi/3hosts.png
95 .. literalinclude:: /tuto_smpi/3hosts.xml
Note the way in which hosts, links, and routes are defined in
this XML. All hosts are defined with a speed (in Gflops), and links
with a latency (in microseconds) and a bandwidth (in MBytes per second). Other
units are possible and written as expected. Routes specify the list of
links encountered from one host to another. Routes are symmetrical by
default.
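For instance, here is a minimal sketch of such a description (the host
and link names, as well as all values, are made up for the example; the
exact XML header may differ depending on your SimGrid version):

.. code-block:: xml

   <?xml version='1.0'?>
   <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
   <platform version="4.1">
     <zone id="world" routing="Full">
       <host id="host1" speed="1Gf"/>     <!-- 1 Gflop per second -->
       <host id="host2" speed="2Gf"/>
       <link id="link1" bandwidth="125MBps" latency="50us"/>
       <!-- The (symmetrical) route between host1 and host2 -->
       <route src="host1" dst="host2"><link_ctn id="link1"/></route>
     </zone>
   </platform>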
105 Cluster with a Crossbar
106 .......................
108 A very common parallel computing platform is a homogeneous cluster in
109 which hosts are interconnected via a crossbar switch with as many
110 ports as hosts, so that any disjoint pairs of hosts can communicate
111 concurrently at full speed. For instance:
113 .. literalinclude:: ../../examples/platforms/cluster_crossbar.xml
One specifies a name prefix and suffix for each host, and then gives an
integer range. In the example the cluster contains 65535 hosts (!),
named ``node-0.simgrid.org`` to ``node-65534.simgrid.org``. All hosts
have the same power (1 Gflop/sec) and are connected to the switch via
links with the same bandwidth (125 MBytes/sec) and latency (50
microseconds).
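The same construct scales down easily if you do not need such a large
cluster. Here is a sketch of a 16-host variant of the cluster tag (all
values are illustrative):

.. code-block:: xml

   <cluster id="my_cluster" prefix="node-" suffix=".simgrid.org"
            radical="0-15" speed="1Gf" bw="125MBps" lat="50us"/>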
128 Cluster with a Shared Backbone
129 ..............................
131 Another popular model for a parallel platform is that of a set of
132 homogeneous hosts connected to a shared communication medium, a
133 backbone, with some finite bandwidth capacity and on which
134 communicating host pairs can experience contention. For instance:
137 .. literalinclude:: ../../examples/platforms/cluster_backbone.xml
141 The only differences with the crossbar cluster above are the ``bb_bw``
142 and ``bb_lat`` attributes that specify the backbone characteristics
143 (here, a 500 microseconds latency and a 2.25 GByte/sec
144 bandwidth). This link is used for every communication within the
145 cluster. The route from ``node-0.simgrid.org`` to ``node-1.simgrid.org``
146 counts 3 links: the private link of ``node-0.simgrid.org``, the backbone
147 and the private link of ``node-1.simgrid.org``.
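As a sketch, the two extra attributes fit into the cluster tag as
follows (values are illustrative):

.. code-block:: xml

   <cluster id="my_cluster" prefix="node-" suffix=".simgrid.org"
            radical="0-15" speed="1Gf" bw="125MBps" lat="50us"
            bb_bw="2.25GBps" bb_lat="500us"/>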
Torus Cluster
.............

Many HPC facilities use torus clusters to reduce sharing and
performance loss on concurrent internal communications. Modeling this
in SimGrid is very easy. Simply add a ``topology="TORUS"`` attribute
to your cluster. Configure it with the ``topo_parameters="X,Y,Z"``
attribute, where ``X``, ``Y`` and ``Z`` are the dimensions of your
torus.
163 .. image:: ../../examples/platforms/cluster_torus.svg
166 .. literalinclude:: ../../examples/platforms/cluster_torus.xml
Note that in this example, we used ``loopback_bw`` and
``loopback_lat`` to specify the characteristics of the loopback link
of each node (i.e., the link allowing each node to communicate with
itself). We could have done so in the previous examples too. When no
loopback is given, the communication from a node to itself is handled
as if it occurred between two distinct nodes: it goes twice through the
private link and through the backbone (if any).
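As a sketch, a 2x2x2 torus of 8 nodes with explicit loopback links
could be declared as follows (all values are illustrative):

.. code-block:: xml

   <cluster id="my_torus" prefix="node-" suffix=".simgrid.org"
            radical="0-7" speed="1Gf" bw="125MBps" lat="50us"
            loopback_bw="100MBps" loopback_lat="0"
            topology="TORUS" topo_parameters="2,2,2"/>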
Fat-Tree Cluster
................

This topology was introduced to reduce the number of links in the
cluster (and thus reduce its price) while maintaining a high bisection
bandwidth and a relatively low diameter. To model this in SimGrid,
pass a ``topology="FAT_TREE"`` attribute to your cluster. The
``topo_parameters=#levels;#downlinks;#uplinks;link count`` follows the
semantics introduced in `Figure 1B of this article
<http://webee.eedev.technion.ac.il/wp-content/uploads/2014/08/publication_574.pdf>`_.
Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2``

- That's a two-level cluster (thus the initial ``2``).
- Routers are connected to 4 elements below them, regardless of their
  level. Thus the ``4,4`` component that is used as
  ``#downlinks``. This means that the hosts are grouped by 4 on a
  given router, and that there are 4 level-1 routers (in the middle of
  the figure).
- Hosts are connected to only 1 router above them, while these routers
  are connected to 2 routers above them (thus the ``1,2`` used as
  ``#uplinks``).
- Hosts have only one link to their router, while every path between a
  level-1 router and a level-2 router uses 2 parallel links. Thus the
  ``1,2`` that is used as ``link count``.
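Before looking at the full example below, here is a sketch of a cluster
tag carrying these parameters (the 16 hosts match the two-level,
4-downlink structure; the other values are illustrative):

.. code-block:: xml

   <cluster id="my_fat_tree" prefix="node-" suffix=".simgrid.org"
            radical="0-15" speed="1Gf" bw="125MBps" lat="50us"
            topology="FAT_TREE" topo_parameters="2;4,4;1,2;1,2"/>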
203 .. image:: ../../examples/platforms/cluster_fat_tree.svg
206 .. literalinclude:: ../../examples/platforms/cluster_fat_tree.xml
Dragonfly Cluster
.................

This topology was introduced to further reduce the number of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. It's based on the implementation of the topology used on
Cray XC systems, described in the paper
`Cray Cascade: A scalable HPC system based on a Dragonfly network <https://dl.acm.org/citation.cfm?id=2389136>`_.
The system description follows the format ``topo_parameters=#groups;#chassis;#routers;#nodes``.
For example, ``3,4 ; 3,2 ; 3,1 ; 2``:

- ``3,4``: There are 3 groups, with 4 links between each pair of groups
  (blue level). In our implementation, the links to the nth group are
  attached to the nth router of the group.
- ``3,2``: In each group, there are 3 chassis, with 2 links between each
  pair of chassis (black level).
- ``3,1``: In each chassis, 3 routers are connected together with a
  single link (green level).
- ``2``: Each router has two nodes attached (single link).
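A matching cluster declaration could be sketched as follows (54 nodes:
3 groups x 3 chassis x 3 routers x 2 nodes; the other values are
illustrative):

.. code-block:: xml

   <cluster id="my_dragonfly" prefix="node-" suffix=".simgrid.org"
            radical="0-53" speed="1Gf" bw="125MBps" lat="50us"
            topology="DRAGONFLY" topo_parameters="3,4;3,2;3,1;2"/>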
233 .. image:: ../../examples/platforms/cluster_dragonfly.svg
236 .. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml
Final Word
..........

We have only glanced over the abilities offered by SimGrid to describe the
platform topology. Other networking zones model non-HPC platforms
(such as wide area networks, ISP networks comprising set-top boxes, or
even your own routing schema). You can interconnect several networking
zones in your platform to form a tree of zones, which is both a time-
and memory-efficient representation of distributed platforms. Please
head to the dedicated :ref:`documentation <platform>` for more
information.
Hands-on!
---------

It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play with.
Using Docker
............

The easiest way to take the tutorial is to use the dedicated Docker
image. Once you have `installed Docker itself
<https://docs.docker.com/install/>`_, simply do the following:
264 .. code-block:: shell
266 docker pull simgrid/tuto-smpi
267 docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash
This will start a new container with all you need to take this
tutorial, and create a ``smpi-tutorial`` directory in your home directory on
your host machine that will be visible as ``/source/tutorial`` within the
container. You can then edit the files you want with your favorite
editor in ``~/smpi-tutorial``, and compile them within the
container to enjoy the provided dependencies.
.. warning::

   Any changes to the container outside of ``/source/tutorial`` will be lost
   when you log out of the container, so don't edit the other files!
All needed dependencies are already installed in this container
(SimGrid, the C/C++/Fortran compilers, make, pajeng, and R). Since Vite is
only optional in this tutorial, it is not installed, to reduce the
image size.
The container also includes the example platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. These files are available under
``/source/simgrid-template-smpi`` in the image. You should copy them to
your working directory when you first log in:
292 .. code-block:: shell
294 cp -r /source/simgrid-template-smpi/* /source/tutorial
297 Using your Computer Natively
298 ............................
300 To take the tutorial on your machine, you first need to :ref:`install
301 SimGrid <install>`, the C/C++/Fortran compilers and also ``pajeng`` to
302 visualize the traces. You may want to install `Vite
303 <http://vite.gforge.inria.fr/>`_ to get a first glance at the
304 traces. The provided code template requires make to compile. On
305 Debian and Ubuntu for example, you can get them as follows:
307 .. code-block:: shell
309 sudo apt install simgrid pajeng make gcc g++ gfortran vite
311 To take this tutorial, you will also need the platform files from the
312 previous section as well as the source code of the NAS Parallel
313 Benchmarks. Just clone `this repository
314 <https://framagit.org/simgrid/simgrid-template-smpi>`_ to get them all:
316 .. code-block:: shell
318 git clone https://framagit.org/simgrid/simgrid-template-smpi.git
319 cd simgrid-template-smpi/
If you struggle with the compilation, then you should double-check
your :ref:`SimGrid installation <install>`. If needed, please refer to
the :ref:`Troubleshooting your Project Setup
<install_yours_troubleshooting>` section.
Lab 0: Hello World
------------------

It is time to simulate your first MPI program. Use the simplistic
example `roundtrip.c
<https://framagit.org/simgrid/simgrid-template-smpi/raw/master/roundtrip.c?inline=false>`_
that comes with the template.
334 .. literalinclude:: /tuto_smpi/roundtrip.c
337 Compiling and Executing
338 .......................
Compiling the program is straightforward (double-check your
:ref:`SimGrid installation <install>` if you get an error message):
344 .. code-block:: shell
346 $ smpicc -O3 roundtrip.c -o roundtrip
349 Once compiled, you can simulate the execution of this program on 16
350 nodes from the ``cluster_crossbar.xml`` platform as follows:
352 .. code-block:: shell
354 $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile ./roundtrip
356 - The ``-np 16`` option, just like in regular MPI, specifies the
357 number of MPI processes to use.
358 - The ``-hostfile cluster_hostfile`` option, just like in regular
359 MPI, specifies the host file. If you omit this option, ``smpirun``
360 will deploy the application on the first machines of your platform.
- The ``-platform cluster_crossbar.xml`` option, **which doesn't exist
  in regular MPI**, specifies the platform configuration to be
  simulated.
364 - At the end of the line, one finds the executable name and
365 command-line arguments (if any -- roundtrip does not expect any arguments).
367 Feel free to tweak the content of the XML platform file and the
368 program to see the effect on the simulated execution time. It may be
369 easier to compare the executions with the extra option
370 ``--cfg=smpi/display_timing:yes``. Note that the simulation accounts
371 for realistic network protocol effects and MPI implementation
effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message 1 byte larger may lead to a
significantly higher execution time).
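For instance (a sketch that simply adds the option to the command line
used above):

.. code-block:: shell

   # Report the simulated time taken by the application at the end of the run
   $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile \
       --cfg=smpi/display_timing:yes ./roundtrip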
376 Lab 1: Visualizing LU
377 ---------------------
379 We will now simulate a larger application: the LU benchmark of the NAS
suite. The version provided in the code template was modified to
compile with SMPI instead of the regular MPI. Compare the original
``config/make.def.template`` with the
``config/make.def`` that was adapted to SMPI: we use ``smpiff`` and
``smpicc`` as compilers, and don't pass any additional library.
Now compile and execute the LU benchmark, class S (i.e., for `small
problems <https://www.nas.nasa.gov/publications/npb_problem_sizes.html>`_) with
4 nodes:
391 .. code-block:: shell
393 $ make lu NPROCS=4 CLASS=S
395 $ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4
To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later
analysis:
402 .. code-block:: shell
404 smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4
405 pj_dump --ignore-incomplete-links lu.S.4.trace | grep State > lu.S.4.state.csv
You can then produce a Gantt chart with the following R chunk. You can
either copy/paste it into an R session, or `turn it into an executable Rscript
<https://swcarpentry.github.io/r-novice-inflammation/05-cmdline/>`_ to
run it again and again.
.. code-block:: r

   # Load the ggplot2 plotting library, and read the state data dumped by pj_dump
   library(ggplot2)
   df_state = read.csv("lu.S.4.state.csv", header=FALSE, strip.white=TRUE)
   names(df_state) = c("Type", "Rank", "Container", "Start", "End", "Duration", "Level", "State")
   df_state = df_state[!(names(df_state) %in% c("Type", "Container", "Level"))]
   df_state$Rank = as.numeric(gsub("rank-", "", df_state$Rank))

   # Draw the Gantt chart: one rectangle per MPI state, one row per rank
   gc = ggplot(data=df_state) + geom_rect(aes(xmin=Start, xmax=End, ymin=Rank, ymax=Rank+1, fill=State))

   # Display the chart (when run with Rscript, this writes it to Rplots.pdf)
   print(gc)
429 This produces a file called ``Rplots.pdf`` with the following
430 content. You can find more visualization examples `online
431 <http://simgrid.gforge.inria.fr/contrib/R_visualization.html>`_.
433 .. image:: /tuto_smpi/img/lu.S.4.png
436 Lab 2: Tracing and Replay of LU
437 -------------------------------
439 Now compile and execute the LU benchmark, class A, with 32 nodes.
441 .. code-block:: shell
443 $ make lu NPROCS=32 CLASS=A
This takes several minutes to simulate, because all the code of all
processes has to be actually executed, and everything is serialized.
SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application and
replay it on a different platform with the same number of nodes. The
replay is much faster than live simulation, as the computations are
skipped (the application must be network-dependent for this to work).
454 You can even generate the trace during the live simulation as follows:
456 .. code-block:: shell
458 $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
460 The produced trace is composed of a file ``LU.A.32`` and a folder
461 ``LU.A.32_files``. You can replay this trace with SMPI thanks to ``smpirun``.
462 For example, the following command replays the trace on a different platform:
464 .. code-block:: shell
466 $ smpirun -np 32 -platform ../cluster_crossbar.xml -hostfile ../cluster_hostfile -replay LU.A.32
468 All the outputs are gone, as the application is not really simulated
469 here. Its trace is simply replayed. But if you visualize the live
470 simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this particular
example, but it becomes very interesting when your application
is computationally demanding.
.. todo::

   The commands should be separated and executed by some CI to make sure
   the documentation is up-to-date.
480 Lab 3: Execution Sampling on EP
481 -------------------------------
The second method to speed up simulations is to sample the computation
parts of the code. This means that the person doing the simulation
needs to know the application and identify the parts that are
compute-intensive and time-consuming, while being regular enough not to
ruin the simulation accuracy. Furthermore, there should not be any MPI
calls inside such parts of the code.
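To mark such parts in a C application, SMPI provides the
``SMPI_SAMPLE_LOCAL`` and ``SMPI_SAMPLE_GLOBAL`` macros (see the SMPI
reference documentation for their exact semantics). Here is a minimal
sketch of the idea, where ``dense_kernel()`` is a hypothetical
compute-intensive function that performs no MPI calls:

.. code-block:: c

   #include <smpi/smpi.h>

   void dense_kernel(double *data); /* hypothetical, MPI-free computation */

   void solve(double *data, int niter)
   {
     for (int iter = 0; iter < niter; iter++) {
       /* Benchmark this block on a few iterations (at most 10 here, or
          until the 0.01 accuracy threshold is reached), then skip the
          computation and inject the measured average time instead. */
       SMPI_SAMPLE_GLOBAL(10, 0.01) {
         dense_kernel(data);
       }
     }
   }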
490 Use the EP benchmark, class B, 16 processes.
492 .. todo:: write this section, and the following ones.
Further Readings
----------------

You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The :ref:`SMPI
reference documentation <SMPI_doc>` covers much more content than
this tutorial.
Finally, we regularly use SimGrid in our teaching on MPI. This way,
our students can experiment with platforms that they do not have access
to, and the associated visualization tools help them understand
their work. The whole material is available online, in a separate
project: the `SMPI CourseWare <https://simgrid.github.io/SMPI_CourseWare/>`_.
509 .. LocalWords: SimGrid