Simulating MPI Applications
===========================

.. warning:: This document is still at an early stage. You can try to
   follow this tutorial, but should not be surprised if things fall
   short. It will be completed for the next release, v3.22, to be
   released by the end of 2018.

Discover SMPI
-------------

To start using SMPI, you just need to compile your application with
``smpicc`` instead of ``mpicc``, with ``smpiff`` instead of ``mpiff``,
or with ``smpicxx`` instead of ``mpicxx``. Then, the only difference
between the classical ``mpirun`` and the new ``smpirun`` is that it
requires a new parameter ``-platform`` with a file describing the
simulated platform on which your application shall run.
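For instance, the following classical MPI program runs under SMPI
without any source modification. The file and platform names are
hypothetical, and the build and run commands in the header comment are
a minimal sketch of the workflow described above:

.. code-block:: c

   /* hello.c -- a minimal MPI program that SMPI can simulate as-is.
    * Assuming a platform description such as cluster_crossbar.xml,
    * build and run it with:
    *   $ smpicc hello.c -o hello
    *   $ smpirun -np 4 -platform cluster_crossbar.xml ./hello
    */
   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char* argv[])
   {
     int rank, size;
     MPI_Init(&argc, &argv);               /* intercepted by SMPI */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this simulated process */
     MPI_Comm_size(MPI_COMM_WORLD, &size);
     printf("Hello from rank %d out of %d\n", rank, size);
     MPI_Finalize();
     return 0;
   }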
Internally, all ranks of your application are executed as threads of a
single Unix process. That's not a problem if your application has
global variables, because SMPI automatically privatizes them so that
each rank sees its own copy.

Describing Your Platform
------------------------

As a SMPI user, you are supposed to provide a description of your
simulated platform, that is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
of :ref:`documentation <platform>` and examples (in the
`examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
directory of the archive).

Dragonfly Cluster
.................

This topology was introduced to further reduce the amount of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. Our implementation is based on the topology used on Cray XC
systems, described in the paper *Cray Cascade: A scalable HPC system
based on a Dragonfly network*.

The system description follows the format
``topo_parameters=#groups;#chassis;#routers;#nodes``.
For example, ``3,4 ; 3,2 ; 3,1 ; 2`` reads as follows:

- ``3,4``: There are 3 groups, with 4 links between each pair of groups
  (blue level). In our implementation, the links to the nth group are
  attached to the nth router of each group.
- ``3,2``: In each group, there are 3 chassis, with 2 links between the
  nth routers of each pair of chassis (black level).
- ``3,1``: In each chassis, the 3 routers are connected together with a
  single link (green level).
- ``2``: Each router has two nodes attached (with a single link each).

.. image:: ../../examples/platforms/cluster_dragonfly.svg
   :align: center

.. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml
   :language: xml

Lab 0: Hello World
------------------

In this first lab, you compile the simple ``roundtrip`` example with
``smpicc``, and execute it with ``smpirun`` on nodes from the
``cluster_crossbar.xml`` platform. The ``smpirun`` command line takes
the platform file and the number of processes, followed by the name of
your program and its command-line arguments (if any -- roundtrip does
not expect any arguments).

Feel free to tweak the content of the XML platform file and the
program to see the effect on the simulated execution time. It may be
easier to compare the executions with the extra option
``--cfg=smpi/display_timing:yes``. Note that the simulation accounts
for realistic network protocol effects and MPI implementation
effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message 1 byte larger may lead to
significantly higher execution time).

Lab 1: Visualizing LU
---------------------

This lab relies on the LU benchmark of the NAS Parallel Benchmarks. To
see how the suite was adapted to SMPI, compare the differences between
the original ``config/make.def.template`` and the ``config/make.def``
that was adapted to SMPI. We use ``smpiff`` and ``smpicc`` as
compilers, and don't pass any additional library.

Now compile and execute the LU benchmark, class S (i.e., for small
data size) with 4 nodes.

.. code-block:: shell

   $ make lu NPROCS=4 CLASS=S
   (compilation logs)
   $ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4
   (execution logs)

To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later use:

.. code-block:: shell

   smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4
   pj_dump --ignore-incomplete-links lu.S.4.trace | grep State > lu.S.4.state.csv

You can then produce a Gantt Chart with the following R chunk. You can
either copy/paste it in an R session, or turn it into an Rscript
executable to run it again and again.

.. code-block:: r

   library(ggplot2)

   # Read the data
   df_state = read.csv("lu.S.4.state.csv", header=F, strip.white=T)
   names(df_state) = c("Type", "Rank", "Container", "Start", "End", "Duration", "Level", "State")
   df_state = df_state[!(names(df_state) %in% c("Type","Container","Level"))]
   df_state$Rank = as.numeric(gsub("rank-","",df_state$Rank))

   # Draw the Gantt chart: one rectangle per rank and state occurrence
   gc = ggplot(df_state) +
     geom_rect(aes(xmin=Start, xmax=End, ymin=Rank, ymax=Rank+1, fill=State))
   print(gc)

   # Close the default device, flushing the chart to Rplots.pdf
   dev.off()

This produces a file called ``Rplots.pdf`` with the following
content. You can find more visualization examples online.

.. image:: /tuto_smpi/img/lu.S.4.png
   :align: center

Lab 2: Tracing and Replay of LU
-------------------------------

Now compile and execute the LU benchmark, class A, with 32 nodes.

.. code-block:: shell

   $ make lu NPROCS=32 CLASS=A

This takes several minutes to simulate, because all code from all
processes has to be actually executed, and everything is serialized.

SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application, and
replay it on a different platform with the same number of nodes. The
replay is much faster than live simulation, as the computations are
skipped (the application must be network-dependent for this to work).

You can even generate the trace during the live simulation, as follows:

.. code-block:: shell

   $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32

The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. To replay this with SMPI, you first need to compile
the provided ``replay.cpp`` file, which comes from
`simgrid/examples/smpi/replay
<https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/replay>`_.

.. code-block:: shell

   $ smpicxx ../replay.cpp -O3 -o ../smpi_replay

Afterward, you can replay your trace in SMPI as follows:

.. code-block:: shell

   $ smpirun -np 32 -platform ../cluster_torus.xml -ext smpi_replay ../smpi_replay LU.A.32

All the outputs are gone, as the application is not really executed
here: its trace is simply replayed. But if you visualize the live
simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this very
example, but this becomes very interesting when your application
is computationally hungry.

.. todo:: smpi_replay should be installed by SimGrid, and smpirun
   interface could be simplified here.

Lab 3: Execution Sampling on EP
-------------------------------

The second method to speed up simulations is to sample the computation
parts in the code. This means that the person doing the simulation
needs to know the application and identify the parts that are
compute-intensive and take time, while being regular enough not to ruin
simulation accuracy. Furthermore, there should not be any MPI calls
inside such parts of the code.

Use the EP benchmark, class B, 16 processes.
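This section is not written yet, but here is the gist of the
mechanism, assuming the ``SMPI_SAMPLE_LOCAL(num, thres)`` macro from
``smpi/smpi.h`` (names and parameter semantics as we understand them;
check your SimGrid version for the exact interface). The annotated
block is really executed and benchmarked during its first occurrences;
afterwards it is skipped, and the average measured duration is
injected into the simulated time instead:

.. code-block:: c

   #include <smpi/smpi.h> /* assumption: provides the SMPI_SAMPLE_* macros */

   static void expensive_kernel(double* data, int n)
   {
     for (int i = 0; i < n; i++)        /* regular, compute-only work */
       data[i] = data[i] * data[i] + 1.0;
   }

   void iterate(double* data, int n, int iters)
   {
     for (int iter = 0; iter < iters; iter++) {
       /* Assumed semantics: really benchmark the block at most 10
        * times (or until the timings stabilize within the 0.01
        * threshold), then only inject the average duration. */
       SMPI_SAMPLE_LOCAL(10, 0.01) {
         expensive_kernel(data, n);     /* no MPI calls allowed in here */
       }
     }
   }

If we read the interface correctly, a ``SMPI_SAMPLE_GLOBAL`` variant
additionally shares the timing samples across all ranks executing the
same block, which suits regular SPMD kernels such as the one of EP.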
.. todo:: write this section, and the following ones.

Further Readings
----------------

You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or in the introductory slides
available on the SimGrid website. The SMPI reference documentation
covers much more content than this short tutorial.

Finally, we regularly use SimGrid in our teaching on MPI. This way,
our students can experiment with platforms that they do not have
access to, and the associated visualization tools help them understand
their work. The whole material is available online, in a separate
project: the `SMPI CourseWare
<https://simgrid.github.io/SMPI_CourseWare/>`_.

.. LocalWords:  SimGrid