Simulating MPI Applications
===========================

SimGrid can not only :ref:`simulate algorithms <usecase_simalgo>`, but
it can also be used to execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
C/C++/F77/F90 applications should run out of the box in this
environment. Almost all proxy apps provided by the `ExaScale
Project <https://proxyapps.exascaleproject.org/>`_ only require minor
modifications to `run on top of SMPI
<https://framagit.org/simgrid/SMPI-proxy-apps>`_.

This setting permits one to debug your MPI applications in a perfectly
reproducible setup, with no Heisenbugs. Enjoy the full Clairvoyance
provided by the simulator while running what-if analyses on platforms
that are still to be built! Several `production-grade MPI applications
<https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
use SimGrid for their integration and performance testing.

MPI 2.2 is already partially covered: over 160 primitives are
supported. Some parts of the standard are still missing: MPI-IO, MPI3
collectives, spawning ranks, inter-communicators, and some others. If
one of the functions you use is still missing, please drop us an
email. We may find the time to implement it for you.

Multi-threading support is very limited in SMPI. Only funneled
applications are supported: at most one thread per rank can issue any
MPI calls. For better timing predictions, your application should even
be completely mono-threaded. Using OpenMP (or pthreads directly) may
greatly decrease SimGrid's predictive power. That may still be OK if you
only plan to debug your application in a reproducible setup, without
any performance-related analysis.

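For reference, here is a minimal sketch (not taken from the tutorial's
template) of what a funneled application looks like: the main thread
requests ``MPI_THREAD_FUNNELED`` support and remains the only thread
that ever issues MPI calls.

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char* argv[])
   {
     int provided;
     /* Ask for FUNNELED support: only the thread that called MPI_Init_thread
      * (the main thread) will ever perform MPI calls. */
     MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
     if (provided < MPI_THREAD_FUNNELED)
       printf("This MPI library does not provide MPI_THREAD_FUNNELED support\n");

     /* OpenMP or pthread regions may run here, as long as only the main
      * thread communicates through MPI. */

     MPI_Finalize();
     return 0;
   }
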
How does it work?
-----------------

In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
the real systems, communication calls are intercepted and achieved by
the simulator.

To start using SMPI, you just need to compile your application with
``smpicc`` instead of ``mpicc``, or with ``smpiff`` instead of
``mpiff``, or with ``smpicxx`` instead of ``mpicxx``. Then, the only
difference between the classical ``mpirun`` and the new ``smpirun`` is
that it requires a new parameter ``-platform`` with a file describing
the simulated platform on which your application shall run.

Internally, all ranks of your application are executed as threads of a
single unix process. That's not a problem if your application has
global variables, because ``smpirun`` loads one application instance
per MPI rank as if it was another dynamic library. Then, MPI
communication calls are implemented using SimGrid: data is exchanged
through memory copy, while the simulator's performance models are used
to predict the time taken by each communication. Any computations
occurring between two MPI calls are benchmarked, and the corresponding
time is reported into the simulator.

.. image:: /tuto_smpi/img/big-picture.svg

Describing Your Platform
------------------------

As an SMPI user, you are supposed to provide a description of your
simulated platform, which is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
of :ref:`documentation <platform>` and examples (in the
`examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
source directory), and this section only shows a small set of
introductory examples.

Feel free to skip this section if you want to jump right away to usage
examples.

Simple Example with 3 hosts
...........................

Imagine you want to describe a little platform with three hosts,
interconnected as follows:

.. image:: /tuto_smpi/3hosts.png

This can be done with the following platform file, which considers the
simulated platform as a graph of hosts and network links.

.. literalinclude:: /tuto_smpi/3hosts.xml

The basic elements (with :ref:`pf_tag_host` and
:ref:`pf_tag_link`) are described first, and then the routes between
any pair of hosts are explicitly given with :ref:`pf_tag_route`.

Any host must be given a computational speed in flops while links must
be given a latency and a bandwidth. You can write 1Gf for
1,000,000,000 flops (full list of units in the reference guide of
:ref:`pf_tag_host` and :ref:`pf_tag_link`).

Routes defined with :ref:`pf_tag_route` are symmetrical by default,
meaning that the list of traversed links from A to B is the same as
from B to A. Explicitly define non-symmetrical routes if you prefer.

Cluster with a Crossbar
.......................

A very common parallel computing platform is a homogeneous cluster in
which hosts are interconnected via a crossbar switch with as many
ports as hosts so that any disjoint pairs of hosts can communicate
concurrently at full speed. For instance:

.. literalinclude:: ../../examples/platforms/cluster_crossbar.xml

One specifies a name prefix and suffix for each host and then gives an
integer range. In the example, the cluster contains 65535 hosts (!),
named ``node-0.simgrid.org`` to ``node-65534.simgrid.org``. All hosts
have the same power (1 Gflop/sec) and are connected to the switch via
links with the same bandwidth (125 MBytes/sec) and latency (50
microseconds).

Cluster with a Shared Backbone
..............................

Another popular model for a parallel platform is that of a set of
homogeneous hosts connected to a shared communication medium, a
backbone, with some finite bandwidth capacity, and on which
communicating host pairs can experience contention. For instance:

.. literalinclude:: ../../examples/platforms/cluster_backbone.xml

The only differences with the crossbar cluster above are the ``bb_bw``
and ``bb_lat`` attributes that specify the backbone characteristics
(here, a 500 microseconds latency and a 2.25 GByte/sec
bandwidth). This link is used for every communication within the
cluster. The route from ``node-0.simgrid.org`` to ``node-1.simgrid.org``
counts 3 links: the private link of ``node-0.simgrid.org``, the backbone,
and the private link of ``node-1.simgrid.org``.

Torus Cluster
.............

Many HPC facilities use torus clusters to reduce sharing and
performance loss on concurrent internal communications. Modeling this
in SimGrid is very easy. Simply add a ``topology="TORUS"`` attribute
to your cluster. Configure it with the ``topo_parameters="X,Y,Z"``
attribute, where ``X``, ``Y``, and ``Z`` are the dimensions of your
torus.

.. image:: ../../examples/platforms/cluster_torus.svg

.. literalinclude:: ../../examples/platforms/cluster_torus.xml

Note that in this example, we used ``loopback_bw`` and
``loopback_lat`` to specify the characteristics of the loopback link
of each node (i.e., the link allowing each node to communicate with
itself). We could have done so in the previous example too. When no
loopback is given, the communication from a node to itself is handled
as if it were two distinct nodes: it goes twice through the private
link and through the backbone (if any).

Fat-Tree Cluster
................

This topology was introduced to reduce the number of links in the
cluster (and thus reduce its price) while maintaining a high bisection
bandwidth and a relatively low diameter. To model this in SimGrid,
pass a ``topology="FAT_TREE"`` attribute to your cluster. The
``topo_parameters=#levels;#downlinks;#uplinks;link count`` follows the
semantics introduced in `Figure 1B of this article
<http://webee.eedev.technion.ac.il/wp-content/uploads/2014/08/publication_574.pdf>`_.

Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2``

- That's a two-level cluster (thus the initial ``2``).
- Routers are connected to 4 elements below them, regardless of their
  level. Thus the ``4,4`` component used as
  ``#downlinks``. This means that the hosts are grouped by 4 on a
  given router, and that there are 4 level-1 routers (in the middle of
  the figure).
- Hosts are connected to only 1 router above them, while these routers
  are connected to 2 routers above them (thus the ``1,2`` used as
  ``#uplinks``).
- Hosts have only one link to their router while every path between
  level-1 routers and level-2 routers uses 2 parallel links. Thus the
  ``1,2`` used as ``link count``.

.. image:: ../../examples/platforms/cluster_fat_tree.svg

.. literalinclude:: ../../examples/platforms/cluster_fat_tree.xml

Dragonfly Cluster
.................

This topology was introduced to further reduce the number of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. It's based on the implementation of the topology used on
Cray XC systems, described in the paper
`Cray Cascade: A scalable HPC system based on a Dragonfly network <https://dl.acm.org/citation.cfm?id=2389136>`_.

The system description follows the format ``topo_parameters=#groups;#chassis;#routers;#nodes``.
For example, ``3,4 ; 3,2 ; 3,1 ; 2``:

- ``3,4``: There are 3 groups with 4 links between each (blue level).
  Links to the nth group are attached to the nth router of the group
  in our implementation.
- ``3,2``: In each group, there are 3 chassis with 2 links between each nth router
  of each group (black level).
- ``3,1``: In each chassis, 3 routers are connected with a single link
  (green level).
- ``2``: Each router has two nodes attached (single link).

.. image:: ../../examples/platforms/cluster_dragonfly.svg

.. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml

Final Word
..........

We only glanced over the abilities offered by SimGrid to describe the
platform topology. Other networking zones model non-HPC platforms
(such as wide area networks, ISP networks comprising set-top boxes, or
even your own routing schema). You can interconnect several networking
zones in your platform to form a tree of zones, that is both a time-
and memory-efficient representation of distributed platforms. Please
head to the dedicated :ref:`documentation <platform>` for more
information.

Using SMPI
----------

It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play with.

Using Docker
............

The easiest way to take the tutorial is to use the dedicated Docker
image. Once you have `installed Docker itself
<https://docs.docker.com/install/>`_, simply do the following:

.. code-block:: shell

   docker pull simgrid/tuto-smpi
   docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash

This will start a new container with all you need to take this
tutorial, and create a ``smpi-tutorial`` directory in your home on
your host machine that will be visible as ``/source/tutorial`` within the
container. You can then edit the files you want with your favorite
editor in ``~/smpi-tutorial``, and compile them within the
container to enjoy the provided dependencies.

.. warning::

   Any change to the container out of ``/source/tutorial`` will be lost
   when you log out of the container, so don't edit the other files!

All needed dependencies are already installed in this container
(SimGrid, the C/C++/Fortran compilers, make, pajeng, R and pajengr). Vite is
only optional in this tutorial, so it is not installed, to reduce the
image size.

The container also includes the example platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. These files are available under
``/source/simgrid-template-smpi`` in the image. You should copy them to
your working directory when you first log in:

.. code-block:: shell

   cp -r /source/simgrid-template-smpi/* /source/tutorial

Using your Computer Natively
............................

To take the tutorial on your machine, you first need to :ref:`install
SimGrid <install>`, the C/C++/Fortran compilers, and also ``pajeng`` to
visualize the traces. You may want to install `Vite
<http://vite.gforge.inria.fr/>`_ to get a first glance at the
traces. The provided code template requires ``make`` to compile. On
Debian and Ubuntu, you can get them as follows:

.. code-block:: shell

   sudo apt install simgrid pajeng make gcc g++ gfortran python3 vite

For R analysis of the produced traces, you may want to install R
and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package.

.. code-block:: shell

   sudo apt install r-base r-cran-devtools cmake flex bison
   Rscript -e "library(devtools); install_github('schnorr/pajengr');"

To take this tutorial, you will also need the platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. Just clone `this repository
<https://framagit.org/simgrid/simgrid-template-smpi>`_ to get them all:

.. code-block:: shell

   git clone https://framagit.org/simgrid/simgrid-template-smpi.git
   cd simgrid-template-smpi/

If you struggle with the compilation, then you should double-check
your :ref:`SimGrid installation <install>`. If needed, please refer to
the :ref:`Troubleshooting your Project Setup
<install_yours_troubleshooting>` section.

Lab 0: Hello World
------------------

It is time to simulate your first MPI program. Use the simplistic
example `roundtrip.c
<https://framagit.org/simgrid/simgrid-template-smpi/raw/master/roundtrip.c?inline=false>`_
that comes with the template.

.. literalinclude:: /tuto_smpi/roundtrip.c

Compiling and Executing
.......................

Compiling the program is straightforward (double-check your
:ref:`SimGrid installation <install>` if you get an error message):

.. code-block:: shell

   $ smpicc -O3 roundtrip.c -o roundtrip

Once compiled, you can simulate the execution of this program on 16
nodes from the ``cluster_crossbar.xml`` platform as follows:

.. code-block:: shell

   $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile ./roundtrip

- The ``-np 16`` option, just like in regular MPI, specifies the
  number of MPI processes to use.
- The ``-hostfile cluster_hostfile`` option, just like in regular
  MPI, specifies the host file. If you omit this option, ``smpirun``
  will deploy the application on the first machines of your platform.
- The ``-platform cluster_crossbar.xml`` option, **which doesn't exist
  in regular MPI**, specifies the platform configuration to be
  simulated.
- At the end of the line, one finds the executable name and
  command-line arguments (if any -- roundtrip does not expect any arguments).

Feel free to tweak the content of the XML platform file and the
program to see the effect on the simulated execution time. It may be
easier to compare the executions with the extra option
``--cfg=smpi/display-timing:yes``. Note that the simulation accounts
for realistic network protocol effects and MPI implementation
effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message 1 byte larger may lead to
significantly higher execution time).

Lab 1: Visualizing LU
---------------------

We will now simulate a larger application: the LU benchmark of the NAS
suite. The version provided in the code template was modified to
compile with SMPI instead of the regular MPI. Compare the difference
between the original ``config/make.def.template`` and the
``config/make.def`` that was adapted to SMPI. We use ``smpiff`` and
``smpicc`` as compilers, and don't pass any additional library.

Now compile and execute the LU benchmark, class S (i.e., for `small
data size
<https://www.nas.nasa.gov/publications/npb_problem_sizes.html>`_) with
4 nodes.

.. code-block:: shell

   $ make lu NPROCS=4 CLASS=S

   $ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4

To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later
analysis:

.. code-block:: shell

   smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4

You can then produce a Gantt Chart with the following R chunk. You can
either copy/paste it in an R session, or `turn it into an Rscript executable
<https://swcarpentry.github.io/r-novice-inflammation/05-cmdline/>`_ to
run it again and again.

.. code-block:: R

   # Load the required libraries
   library(tidyverse)
   library(pajengr)

   # Read the trace produced by the simulation
   dta <- pajeng_read("lu.S.4.trace")

   # Manipulate the data
   dta$state %>%
      # Remove some unnecessary columns for this example
      select(-Type, -Imbrication) %>%
      # Create the nice MPI rank and operations identifiers
      mutate(Container = as.integer(gsub("rank-", "", Container)),
             Value = gsub("^PMPI_", "MPI_", Value)) %>%
      # Rename some columns so it can better fit MPI terminology
      rename(Rank = Container,
             Operation = Value) -> df.states

   # Draw the Gantt Chart
   df.states %>%
      ggplot() +
      # Each MPI operation is becoming a rectangle
      geom_rect(aes(xmin=Start, xmax=End,
                    ymin=Rank, ymax=Rank + 1,
                    fill=Operation)) +
      # Cosmetics
      xlab("Time [seconds]") +
      ylab("Rank [count]") +
      theme_bw(base_size=14) +
      theme(plot.margin = unit(c(0,0,0,0), "cm"),
            legend.margin = margin(t = 0, unit='cm'),
            panel.grid = element_blank(),
            legend.position = "top",
            legend.justification = "left",
            legend.box.spacing = unit(0, "pt"),
            legend.box.margin = margin(0,0,0,0),
            legend.title = element_text(size=10)) -> plot

   # Save the plot in a PNG file (dimensions in inches)
   ggsave("smpi.png", plot, width=10, height=3)

This produces a file called ``smpi.png`` with the following
content. You can find more visualization examples `online
<https://simgrid.org/contrib/R_visualization.html>`_.

.. image:: /tuto_smpi/img/lu.S.4.png

Lab 2: Tracing and Replay of LU
-------------------------------

Now compile and execute the LU benchmark, class A, with 32 nodes.

.. code-block:: shell

   $ make lu NPROCS=32 CLASS=A

This takes several minutes to simulate, because all code from all
processes has to be actually executed, and everything is serialized.

SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application and
replay it on a different platform with the same number of nodes. The
replay is much faster than live simulation, as the computations are
skipped (the application must be network-dependent for this to work).

You can even generate the trace during the live simulation as follows:

.. code-block:: shell

   $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32

The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. You can replay this trace with SMPI thanks to ``smpirun``.
For example, the following command replays the trace on a different platform:

.. code-block:: shell

   $ smpirun -np 32 -platform ../cluster_crossbar.xml -hostfile ../cluster_hostfile -replay LU.A.32

All the outputs are gone, as the application is not really simulated
here. Its trace is simply replayed. But if you visualize the live
simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this very
example, but this becomes very interesting when your application
is computationally hungry.

Lab 3: Execution Sampling on Matrix Multiplication example
------------------------------------------------------------

The second method to speed up simulations is to sample the computation
parts in the code. This means that the person doing the simulation
needs to know the application and identify parts that are compute-intensive
and take time while being regular enough not to ruin
simulation accuracy. Furthermore, there should not be any MPI calls
inside such parts of the code.

Use for this part the `gemm_mpi.cpp
<https://gitlab.com/PRACE-4IP/CodeVault/raw/master/hpc_kernel_samples/dense_linear_algebra/gemm/mpi/src/gemm_mpi.cpp>`_
example, which is provided by the `PRACE Codevault repository
<http://www.prace-ri.eu/prace-codevault/>`_.

The computing part of this example is the matrix multiplication routine:

.. literalinclude:: /tuto_smpi/gemm_mpi.cpp

.. code-block:: shell

   $ smpicxx -O3 gemm_mpi.cpp -o gemm
   $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/host-speed:1000000000 ./gemm

This should end quite quickly, as the size of each matrix is only 1000x1000.
But what happens if we want to simulate larger runs?
Replace the size by 2000, 3000, and try again.

The simulation time increases a lot, even though no additional MPI calls are
performed, only more computation.

The ``--cfg=smpi/display-timing`` option gives more details about execution
and advises using sampling if the time spent in computing loops seems too high.

The ``--cfg=smpi/host-speed:1000000000`` option sets the speed of the processor used for
running the simulation. Here we say that its speed is the same as one of the
processors we are simulating (1Gf), so that 1 second of computation is injected
as 1 second in the simulation.

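To make the relation explicit (this proportional model is a simplification
given here for illustration, not a statement of SMPI internals), a computation
block measured to last :math:`t` seconds on the machine running the simulation
is injected as

.. math::

   t_{\text{simulated}} = t \times \frac{s_{\text{host}}}{s_{\text{simulated}}}

where :math:`s_{\text{host}}` is the ``smpi/host-speed`` value and
:math:`s_{\text{simulated}}` is the speed of the simulated host. With both set
to 1 Gf, one measured second is injected as one simulated second, as stated
above.
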
.. code-block:: shell

   [5.568556] [smpi_kernel/INFO] Simulated time: 5.56856 seconds.

   The simulation took 24.9403 seconds (after parsing and platform setup)
   24.0764 seconds were actual computation of the application
   [5.568556] [smpi_kernel/INFO] More than 75% of the time was spent inside the application code.
   You may want to use sampling functions or trace replay to reduce this.

So in our case (size 3000), the simulation ran for 25 seconds, and the simulated time was 5.57s at the end.
Computation by itself took 24 seconds, and can quickly grow with larger sizes
(as computation is really performed, there will be variability between similar runs).

SMPI provides sampling macros to accelerate simulation by sampling iterations
of large computation loops, and skipping the computation after a certain number
of iterations, or once the sampling is stable enough.

The two macros only slightly differ:

- ``SMPI_SAMPLE_GLOBAL``: the specified number of samples is produced by all processors
- ``SMPI_SAMPLE_LOCAL``: each process executes a specified number of iterations

So if the size of the computed part varies between processes (imbalance),
it's safer to use the LOCAL one.

To use one of them, replace the outer for loop of the multiply routine:

.. code-block:: cpp

   for (int i = istart; i <= iend; ++i)

by:

.. code-block:: cpp

   SMPI_SAMPLE_GLOBAL(int i = istart, i <= iend, ++i, 10, 0.005)

The first three parameters are the ones from the loop, while the last two are for sampling.
They mean that at most 10 iterations will be performed and that the sampling phase can be exited
earlier if the expected stability is reached after fewer samples.

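As an illustration, here is a self-contained sketch using the LOCAL variant
(the kernel body, array names and sizes are made up for the example and do not
come from gemm_mpi.cpp; the ``smpi/smpi.h`` include path is an assumption):

.. code-block:: c

   #include <mpi.h>
   #include <smpi/smpi.h> /* assumed include path for the SMPI_SAMPLE_* macros */

   #define N 2000
   static double a[N], b[N], c[N];

   int main(int argc, char* argv[])
   {
     MPI_Init(&argc, &argv);

     /* Each process measures at most 10 iterations of this loop; SMPI then
      * skips the remaining iterations and injects the estimated time instead
      * (0.005 is the requested precision of the estimate). */
     SMPI_SAMPLE_LOCAL(int i = 0, i < N, ++i, 10, 0.005)
     {
       for (int j = 0; j < N; j++)
         c[i] += a[j] * b[j]; /* stand-in for the expensive kernel body */
     }

     MPI_Finalize();
     return 0;
   }
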
Now run the code again with various sizes and parameters and check the time taken for the
simulation, as well as the resulting simulated time.

.. code-block:: shell

   [5.575691] [smpi_kernel/INFO] Simulated time: 5.57569 seconds.
   The simulation took 1.23698 seconds (after parsing and platform setup)
   0.0319454 seconds were actual computation of the application

In this case, the simulation only took 1.2 seconds, while the simulated time
remained almost identical.

The computation results will obviously be altered since most computations are skipped.
These macros thus cannot be used when results are critical for the application behavior
(for instance, convergence estimation will be wrong on some codes).

Lab 4: Memory folding on large allocations
------------------------------------------

Another issue that can be encountered when simulating with SMPI is a lack of memory.
Indeed, we are executing all MPI processes on a single node, which can lead to crashes.
We will use the DT benchmark of the NAS suite to illustrate how to avoid such issues.

With 85 processes and class C, the DT simulated benchmark will try to allocate 35GB of memory,
which may not be available on the node you are using.

To avoid this, we can simply replace the largest calls to ``malloc`` and ``free`` by calls
to ``SMPI_SHARED_MALLOC`` and ``SMPI_SHARED_FREE``.
This means that all processes will share a single instance of this buffer.
As with sampling, the results will be altered, and this should not be used for control structures.

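As an illustration only (the buffer name and size are made up, not taken from
the DT source, and the ``smpi/smpi.h`` include path is an assumption), a
converted allocation would look like this:

.. code-block:: c

   #include <mpi.h>
   #include <smpi/smpi.h> /* assumed include path for the SMPI_SHARED_* macros */

   int main(int argc, char* argv[])
   {
     MPI_Init(&argc, &argv);

     /* All ranks transparently share a single instance of this buffer instead
      * of each allocating its own copy, which folds the memory consumption.
      * The values read back are meaningless, so only do this for buffers whose
      * content does not drive the control flow. */
     double* buffer = SMPI_SHARED_MALLOC(10 * 1000 * 1000 * sizeof(double));

     /* ... computation using buffer ... */

     SMPI_SHARED_FREE(buffer);
     MPI_Finalize();
     return 0;
   }
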
In the DT example, there are three different calls to malloc in the file, and one of them is for a needed structure.
Find it and replace the two others with ``SMPI_SHARED_MALLOC`` (there is only one free to replace for both of them).

Once done, you can now run:

.. code-block:: shell

   $ make dt NPROCS=85 CLASS=C

   $ smpirun -np 85 -platform ../cluster_backbone.xml bin/dt.C.x BH

The simulation should now finish without swapping or crashing (ignore the warning about the return value).

If control structures are also problematic, you can use the ``SMPI_PARTIAL_SHARED_MALLOC(size, offsets, offsetscount)``
macro, which shares only specific parts of the structure between processes
and uses private memory for the important parts.
It can be freed afterward with ``SMPI_SHARED_FREE``.

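A sketch of what this could look like (assuming the offsets array lists the
start and end byte of each shared block; names and sizes are made up for the
example):

.. code-block:: c

   #include <mpi.h>
   #include <smpi/smpi.h> /* assumed include path for the SMPI_* macros */

   int main(int argc, char* argv[])
   {
     MPI_Init(&argc, &argv);

     size_t size = 1000000;
     /* One shared block covering bytes [100000, 900000): this range is folded
      * and shared between ranks, while the first and last 100000 bytes remain
      * private to each rank and keep meaningful values. */
     size_t shared_offsets[] = {100000, 900000};
     char* buffer = SMPI_PARTIAL_SHARED_MALLOC(size, shared_offsets, 1);

     /* ... use the private parts for control data, the shared part for bulk data ... */

     SMPI_SHARED_FREE(buffer);
     MPI_Finalize();
     return 0;
   }
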
If allocations are performed with malloc or calloc, SMPI (from version 3.25) provides the option
``--cfg=smpi/auto-shared-malloc-thresh:n``, which will replace all allocations above size n bytes by
shared allocations. The value has to be carefully selected to avoid sharing smaller control arrays
that contain data necessary for the completion of the run.
Try to run the (unmodified) DT example again, with values going from 10 to 100,000, to show that
too small values can cause crashes.

A useful option to identify the largest allocations in the code is ``--cfg=smpi/display-allocs:yes`` (from 3.27).
It will display, at the end of a (successful) run, the largest allocations and their locations, helping pinpoint the
targets for sharing, or to set the threshold for automatic ones.
For DT, the process would be to run a smaller class of the problem first:

.. code-block:: shell

   $ make dt NPROCS=21 CLASS=A
   $ smpirun --cfg=smpi/display-allocs:yes -np 21 -platform ../cluster_backbone.xml bin/dt.A.x BH

This should output something like:

.. code-block:: shell

   [smpi_utils/INFO] Memory Usage: Simulated application allocated 198533192 bytes during its lifetime through malloc/calloc calls.
   Largest allocation at once from a single process was 3553184 bytes, at dt.c:388. It was called 3 times during the whole simulation.
   If this is too much, consider sharing allocations for computation buffers.
   This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)

From there we can identify dt.c:388 as the main allocation, and the best target to convert to
shared mallocs for larger simulations. Furthermore, with 21 processes, we see that this particular
allocation and size was only called 3 times, which means that other processes are likely to allocate
less memory here (imbalance). Using 3553184 as a threshold value might be unwise, as most processes
wouldn't share memory, so a lower threshold would be advisable.

Further Readings
----------------

You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The :ref:`SMPI
reference documentation <SMPI_doc>` covers much more content than
this tutorial.

Finally, we regularly use SimGrid in our teachings on MPI. This way,
our students can experiment with platforms that they do not have access
to, and the associated visualization tools help them to understand
their work. The whole material is available online, in a separate
project: the `SMPI CourseWare <https://simgrid.github.io/SMPI_CourseWare/>`_.

.. LocalWords: SimGrid