@defgroup SMPI_API SMPI: Simulate real MPI applications
@brief Programming environment for the simulation of MPI applications
This programming environment enables the study of MPI applications by
emulating them on top of the SimGrid simulator. This is particularly
interesting to study existing MPI applications within the comfort of
the simulator. The motivation for this work is detailed in the
reference article (available at http://hal.inria.fr/inria-00527150).
Our goal is to enable the study of **unmodified MPI applications**,
and even if some constructs and features are still missing, we
consider SMPI to be stable and usable in production. For **further
scalability**, you may modify your code to speed up your studies or
save memory space. Improved **simulation accuracy** requires some
specific care from you.

- @ref SMPI_use
  - @ref SMPI_use_compile
  - @ref SMPI_use_exec
  - @ref SMPI_use_colls
    - @ref SMPI_use_colls_algos
    - @ref SMPI_use_colls_tracing
- @ref SMPI_what
  - @ref SMPI_what_coverage
  - @ref SMPI_what_globals
- @ref SMPI_adapting
  - @ref SMPI_adapting_size
  - @ref SMPI_adapting_speed
- @ref SMPI_accuracy

@section SMPI_use Using SMPI

If you're absolutely new to MPI, you should first take our online
[SMPI CourseWare](https://simgrid.github.io/SMPI_CourseWare/), and/or
take an MPI course at your favorite university. If you already know
MPI, SMPI should sound very familiar to you: use smpicc instead of
mpicc, and smpirun instead of mpirun, and you're almost set. Once you
get a virtual platform description (see @ref platform), you're good to
go.

@subsection SMPI_use_compile Compiling your code

Simply use <tt>smpicc</tt> as a compiler, just like you would use
mpicc with other MPI implementations. This script still calls your
default compiler (gcc, clang, ...) and adds the right compilation
flags along the way.
Alas, some building infrastructures cannot cope with that and your
<tt>./configure</tt> may fail, reporting that the compiler is not
functional. If this happens, define the <tt>SMPI_PRETEND_CC</tt>
environment variable before running the configuration, and unset it
before the actual build:

@verbatim
SMPI_PRETEND_CC=1 ./configure # here come the configure parameters
make
@endverbatim
Again, make sure that SMPI_PRETEND_CC is not set when you actually
compile your application. It is just a work-around for some configure
scripts: it replaces some compiler internals by "return 0;", so your
simulation will not work with this variable set!
@subsection SMPI_use_exec Executing your code on the simulator

Use the <tt>smpirun</tt> script as follows:

@verbatim
smpirun -hostfile my_hostfile.txt -platform my_platform.xml ./program -blah
@endverbatim
- <tt>my_hostfile.txt</tt> is a classical MPI hostfile (that is, this
  file lists the machines on which the processes must be dispatched,
  one per line)
- <tt>my_platform.xml</tt> is a classical SimGrid platform file. Of
  course, the hosts of the hostfile must exist in the provided
  platform.
- <tt>./program</tt> is the MPI program to simulate, that you
  compiled with <tt>smpicc</tt>
- <tt>-blah</tt> is a command-line parameter passed to this program.
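
For the record, a hostfile is just a plain list of host names, one per
line. The names below are hypothetical and must match <tt>host</tt>
entries declared in your platform file:

```
host0.acme.org
host1.acme.org
host2.acme.org
host3.acme.org
```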
<tt>smpirun</tt> accepts other parameters, such as <tt>-np</tt> if you
don't want to use all the hosts defined in the hostfile, <tt>-map</tt>
to display on which host each rank gets mapped, or <tt>-trace</tt> to
activate tracing during the simulation. You can get the full list by
running <tt>smpirun -help</tt>.
@subsection SMPI_use_colls Simulating collective operations

MPI collective operations are crucial to the performance of MPI
applications and must be carefully optimized according to many
parameters. Every existing implementation provides several algorithms
for each collective operation, and selects by default the best suited
one, depending on the sizes sent, the number of nodes, the
communicator, or the communication library being used. These
decisions are based on empirical results and theoretical complexity
estimations, and differ widely between MPI implementations. In
most cases, users can also manually tune the algorithm used for
each collective operation.
SMPI can simulate the behavior of several MPI implementations:
OpenMPI, MPICH,
<a href="http://star-mpi.sourceforge.net/">STAR-MPI</a>, and
MVAPICH2. For that, it provides 115 collective algorithms and several
selector algorithms, that were collected directly from the source code
of the targeted MPI implementations.
You can switch the automatic selector through the
\c smpi/coll_selector configuration item. Possible values:

- <b>ompi</b>: default selection logic of OpenMPI (version 1.7)
- <b>mpich</b>: default selection logic of MPICH (version 3.0.4)
- <b>mvapich2</b>: selection logic of MVAPICH2 (version 1.9) tuned
  on the Stampede cluster
- <b>impi</b>: preliminary version of an Intel MPI selector (version
  4.1.3, also tuned for the Stampede cluster). Due to the closed-source
  nature of Intel MPI, some of the algorithms described in the
  documentation are not available, and are replaced by mvapich ones.
- <b>default</b>: legacy algorithms used in the earlier days of
  SimGrid. Do not use for serious performance studies.

@subsubsection SMPI_use_colls_algos Available algorithms

You can also pick the algorithm used for each collective with the
corresponding configuration item. For example, to use the pairwise
alltoall algorithm, add \c --cfg=smpi/alltoall:pair to the command
line. This will override the selector (if any) for this algorithm:
the selected algorithm will be used in every case.
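
For example, forcing the pairwise alltoall algorithm on the smpirun
command line looks like this (same file names as in the invocation
shown earlier):

```
smpirun --cfg=smpi/alltoall:pair -hostfile my_hostfile.txt -platform my_platform.xml ./program
```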
Warning: some collectives may require specific conditions to be
executed correctly (for instance, having a communicator with a
power-of-two number of nodes only), which are currently not enforced
by SimGrid. Crashes can be expected while trying these algorithms
with unusual sizes/parameters.
#### MPI_Alltoall

Most of these are best described in <a href="http://www.cs.arizona.edu/~dkl/research/papers/ics06.pdf">STAR-MPI</a>.

- default: naive one, by default
- ompi: use openmpi selector for the alltoall operations
- mpich: use mpich selector for the alltoall operations
- mvapich2: use mvapich2 selector for the alltoall operations
- impi: use intel mpi selector for the alltoall operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: described by Bruck et al. in <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949">this paper</a>
- 2dmesh: organizes the nodes as a two dimensional mesh, and performs
  allgather along the dimensions
- 3dmesh: adds a third dimension to the previous algorithm
- rdb: recursive doubling: extends the mesh to an n-th dimension, each one
  containing two nodes
- pair: pairwise exchange, only works for power of 2 procs, size-1 steps,
  each process sends to and receives from the same process at each step
- pair_light_barrier: same, with small barriers between steps to avoid
  contention
- pair_mpi_barrier: same, with MPI_Barrier used
- pair_one_barrier: only one barrier at the beginning
- ring: size-1 steps, at each step a process sends to process (n+i)%size and receives from (n-i)%size
- ring_light_barrier: same, with small barriers between some phases to avoid contention
- ring_mpi_barrier: same, with MPI_Barrier used
- ring_one_barrier: only one barrier at the beginning
- basic_linear: posts all receives and all sends,
  starts the communications, and waits for all communications to finish
- mvapich2_scatter_dest: isend/irecv with scattered destinations, posting only a few messages at the same time
#### MPI_Alltoallv

- default: naive one, by default
- ompi: use openmpi selector for the alltoallv operations
- mpich: use mpich selector for the alltoallv operations
- mvapich2: use mvapich2 selector for the alltoallv operations
- impi: use intel mpi selector for the alltoallv operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: same as alltoall
- pair: same as alltoall
- pair_light_barrier: same as alltoall
- pair_mpi_barrier: same as alltoall
- pair_one_barrier: same as alltoall
- ring: same as alltoall
- ring_light_barrier: same as alltoall
- ring_mpi_barrier: same as alltoall
- ring_one_barrier: same as alltoall
- ompi_basic_linear: same as alltoall
#### MPI_Gather

- default: naive one, by default
- ompi: use openmpi selector for the gather operations
- mpich: use mpich selector for the gather operations
- mvapich2: use mvapich2 selector for the gather operations
- impi: use intel mpi selector for the gather operations
- automatic (experimental): use an automatic self-benchmarking algorithm
  which will iterate over all implemented versions and output the best
- ompi_basic_linear: basic linear algorithm from openmpi, each process sends to the root
- ompi_binomial: binomial tree algorithm
- ompi_linear_sync: same as basic linear, but with a synchronization at the
  beginning and the message cut into two segments
- mvapich2_two_level: SMP-aware version from MVAPICH2. Gather first intra-node (defaults to mpich's gather), then exchange with only one process per node. Use the mvapich2 selector to change these to the algorithms tuned for the Stampede cluster.
#### MPI_Barrier

- default: naive one, by default
- ompi: use openmpi selector for the barrier operations
- mpich: use mpich selector for the barrier operations
- mvapich2: use mvapich2 selector for the barrier operations
- impi: use intel mpi selector for the barrier operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: all processes send to root
- ompi_two_procs: special case for two processes
- ompi_bruck: nsteps = sqrt(size); at each step, exchange data with rank-2^k and rank+2^k
- ompi_recursivedoubling: recursive doubling algorithm
- ompi_tree: recursive doubling type algorithm, with tree structure
- ompi_doublering: double ring algorithm
- mvapich2_pair: pairwise algorithm
#### MPI_Scatter

- default: naive one, by default
- ompi: use openmpi selector for the scatter operations
- mpich: use mpich selector for the scatter operations
- mvapich2: use mvapich2 selector for the scatter operations
- impi: use intel mpi selector for the scatter operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: basic linear scatter
- ompi_binomial: binomial tree scatter
- mvapich2_two_level_direct: SMP-aware algorithm, with an intra-node stage (default set to mpich selector), then a basic linear inter-node stage. Use the mvapich2 selector to change these to the algorithms tuned for the Stampede cluster.
- mvapich2_two_level_binomial: SMP-aware algorithm, with an intra-node stage (default set to mpich selector), then a binomial phase. Use the mvapich2 selector to change these to the algorithms tuned for the Stampede cluster.
#### MPI_Reduce

- default: naive one, by default
- ompi: use openmpi selector for the reduce operations
- mpich: use mpich selector for the reduce operations
- mvapich2: use mvapich2 selector for the reduce operations
- impi: use intel mpi selector for the reduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- binomial: uses a binomial tree
- flat_tree: uses a flat tree
- NTSL: Non-topology-specific pipelined linear-bcast function:
  0->1, 1->2, 2->3, ..., ->last node, in a pipeline fashion, with segments
  of 8192 bytes
- scatter_gather: scatter then gather
- ompi_chain: openmpi reduce algorithms are built on the same basis, but the
  topology is generated differently for each flavor;
  chain = chain with spacing of size/2, and segment size of 64KB
- ompi_pipeline: same with pipeline (chain with spacing of 1), segment size
  depends on the communicator size and the message size
- ompi_binary: same with binary tree, segment size of 32KB
- ompi_in_order_binary: same with binary tree, enforcing order on the
  operations
- ompi_binomial: same with binomial algo (redundant with default binomial
  one in most cases)
- ompi_basic_linear: basic algorithm, each process sends to root
- mvapich2_knomial: k-nomial algorithm. Default factor is 4 (mvapich2 selector adapts it through tuning)
- mvapich2_two_level: SMP-aware reduce, with default set to mpich both for intra and inter communicators. Use the mvapich2 selector to change these to the algorithms tuned for the Stampede cluster.
- rab: <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a>'s reduce algorithm
#### MPI_Allreduce

- default: naive one, by default
- ompi: use openmpi selector for the allreduce operations
- mpich: use mpich selector for the allreduce operations
- mvapich2: use mvapich2 selector for the allreduce operations
- impi: use intel mpi selector for the allreduce operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- lr: logical ring reduce-scatter then logical ring allgather
- rab1: variation of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: reduce_scatter then allgather
- rab2: variation of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: alltoall then allgather
- rab_rsag: variation of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: recursive doubling
  reduce_scatter then recursive doubling allgather
- rdb: recursive doubling
- smp_binomial: binomial tree with smp: binomial intra-SMP
  reduce, inter reduce, inter broadcast then intra broadcast
- smp_binomial_pipeline: same with segment size = 4096 bytes
- smp_rdb: intra: binomial allreduce, inter: recursive
  doubling allreduce, intra: binomial broadcast
- smp_rsag: intra: binomial allreduce, inter: reduce-scatter,
  inter: allgather, intra: binomial broadcast
- smp_rsag_lr: intra: binomial allreduce, inter: logical ring
  reduce-scatter, logical ring inter: allgather, intra: binomial broadcast
- smp_rsag_rab: intra: binomial allreduce, inter: rab
  reduce-scatter, rab inter: allgather, intra: binomial broadcast
- redbcast: reduce then broadcast, using default or tuned algorithms if specified
- ompi_ring_segmented: ring algorithm used by OpenMPI
- mvapich2_rs: rdb for small messages, otherwise reduce-scatter then allgather
- mvapich2_two_level: SMP-aware algorithm, with mpich as intra algorithm and rdb as inter (change this behavior by using the mvapich2 selector to use tuned values)
- rab: default <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> implementation
#### MPI_Reduce_scatter

- default: naive one, by default
- ompi: use openmpi selector for the reduce_scatter operations
- mpich: use mpich selector for the reduce_scatter operations
- mvapich2: use mvapich2 selector for the reduce_scatter operations
- impi: use intel mpi selector for the reduce_scatter operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_recursivehalving: recursive halving version from OpenMPI
- ompi_ring: ring version from OpenMPI
- mpich_pair: pairwise exchange version from MPICH
- mpich_rdb: recursive doubling version from MPICH
- mpich_noncomm: only works for power of 2 procs, recursive doubling for noncommutative ops
#### MPI_Allgather

- default: naive one, by default
- ompi: use openmpi selector for the allgather operations
- mpich: use mpich selector for the allgather operations
- mvapich2: use mvapich2 selector for the allgather operations
- impi: use intel mpi selector for the allgather operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- 2dmesh: see alltoall
- 3dmesh: see alltoall
- bruck: described by Bruck et al. in <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949">
  Efficient algorithms for all-to-all communications in multiport message-passing systems</a>
- GB: Gather - Broadcast (uses tuned version if specified)
- loosely_lr: Logical Ring with grouping by core (hardcoded, default
  processes/node: 4)
- NTSLR: Non Topology Specific Logical Ring
- NTSLR_NB: Non Topology Specific Logical Ring, Non Blocking operations
- rhv: only works for a power of 2 number of processes
- SMP_NTS: gather to the root of each SMP node, then every root posts
  INTER-SMP Sendrecv, then does an INTRA-SMP Bcast for each received message,
  using a logical ring algorithm (hardcoded, default processes/SMP: 8)
- smp_simple: gather to the root of each SMP node, then every root posts
  INTER-SMP Sendrecv, then does an INTRA-SMP Bcast for each received message,
  using a simple algorithm (hardcoded, default processes/SMP: 8)
- spreading_simple: from node i, the order of communications is i -> i + 1,
  i -> i + 2, ..., i -> (i + p - 1) % P
- ompi_neighborexchange: Neighbor Exchange algorithm for allgather.
  Described by Chen et al. in <a href="http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1592302">Performance Evaluation of Allgather Algorithms on Terascale Linux Cluster with Fast Ethernet</a>
- mvapich2_smp: SMP-aware algorithm, performing an intra-node gather, an inter-node allgather with one process per node, and an intra-node bcast
#### MPI_Allgatherv

- default: naive one, by default
- ompi: use openmpi selector for the allgatherv operations
- mpich: use mpich selector for the allgatherv operations
- mvapich2: use mvapich2 selector for the allgatherv operations
- impi: use intel mpi selector for the allgatherv operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- GB: Gatherv - Broadcast (uses tuned version if specified, but only for
  Bcast; gatherv is not tuned)
- ompi_neighborexchange: see allgather
- ompi_bruck: see allgather
- mpich_rdb: recursive doubling algorithm from MPICH
- mpich_ring: ring algorithm from MPICH - performs differently from the one from STAR-MPI
#### MPI_Bcast

- default: naive one, by default
- ompi: use openmpi selector for the bcast operations
- mpich: use mpich selector for the bcast operations
- mvapich2: use mvapich2 selector for the bcast operations
- impi: use intel mpi selector for the bcast operations
- automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- arrival_pattern_aware_wait: same with slight variation
- binomial_tree: binomial tree exchange
- flattree: flat tree exchange
- flattree_pipeline: flat tree exchange, message split into 8192 bytes pieces
- NTSB: Non-topology-specific pipelined binary tree with 8192 bytes pieces
- NTSL: Non-topology-specific pipelined linear with 8192 bytes pieces
- NTSL_Isend: Non-topology-specific pipelined linear with 8192 bytes pieces, asynchronous communications
- scatter_LR_allgather: scatter followed by logical ring allgather
- scatter_rdb_allgather: scatter followed by recursive doubling allgather
- arrival_scatter: arrival pattern aware scatter-allgather
- SMP_binary: binary tree algorithm with 8 cores/SMP
- SMP_binomial: binomial tree algorithm with 8 cores/SMP
- SMP_linear: linear algorithm with 8 cores/SMP
- ompi_split_bintree: binary tree algorithm from OpenMPI, with message split in 8192 bytes pieces
- ompi_pipeline: pipeline algorithm from OpenMPI, with message split in 128KB pieces
- mvapich2_inter_node: inter-node default mvapich worker
- mvapich2_intra_node: intra-node default mvapich worker
- mvapich2_knomial_intra_node: k-nomial intra-node default mvapich worker. Default factor is 4.
#### Automatic evaluation

(Warning: this is experimental and may be removed or crash easily.)

An automatic version is available for each collective (or even as a
selector). This specific version will loop over all other implemented
algorithms for this particular collective, applying them while
benchmarking the time taken for each process. It then outputs the
quickest algorithm for each process, and the global quickest. This is
still unstable, and a few algorithms which need a specific number of
nodes may crash.
#### Adding an algorithm

To add a new algorithm, check in the src/smpi/colls folder how other
algorithms are coded. Using plain MPI code inside SimGrid can't be
done, so algorithms have to be changed to use the SMPI version of the
calls instead (MPI_Send will become smpi_mpi_send). Some functions may
have different signatures than their MPI counterparts; please check
the other algorithms or contact us through the <a href="http://lists.gforge.inria.fr/mailman/listinfo/simgrid-devel">SimGrid developers mailing list</a>.

Example: adding a "pair" version of the Alltoall collective.

- Implement it in a file called alltoall-pair.c in the src/smpi/colls folder. This file should include colls_private.h.
- The name of the new algorithm function should be smpi_coll_tuned_alltoall_pair, with the same signature as MPI_Alltoall.
- Once the adaptation to SMPI code is done, add a reference to the file ("src/smpi/colls/alltoall-pair.c") in the SMPI_SRC part of the DefinePackages.cmake file inside buildtools/cmake, to allow the file to be built and distributed.
- To register the new version of the algorithm, simply add a line to the corresponding macro in src/smpi/colls/colls.h (add a "COLL_APPLY(action, COLL_ALLTOALL_SIG, pair)" to the COLL_ALLTOALLS macro). The algorithm should now be compiled and selected when using --cfg=smpi/alltoall:pair at runtime.
- To add a test for the algorithm inside SimGrid's test suite, just add the new algorithm name to the ALLTOALL_COLL list found inside teshsuite/smpi/CMakeLists.txt. When running ctest, a test for the new algorithm should be generated and executed. If it does not pass, please check your code or contact us.
- Please submit your patch for inclusion in SMPI, for example through a pull request on GitHub or directly via email.
@subsubsection SMPI_use_colls_tracing Tracing of internal communications

By default, the collective operations are traced as a unique
operation, because tracing all point-to-point communications composing
them could result in overloaded, hard to interpret traces. If you want
to debug and compare collective algorithms, you should set the
\c tracing/smpi/internals configuration item to 1 instead of 0.

Here are example traces of two alltoall collective algorithms run on
16 nodes, the first one with a ring algorithm, the second with a
pairwise one:

<a href="smpi_simgrid_alltoall_ring_16.png" border=0><img src="smpi_simgrid_alltoall_ring_16.png" width="30%" border=0 align="center"></a>
<a href="smpi_simgrid_alltoall_pair_16.png" border=0><img src="smpi_simgrid_alltoall_pair_16.png" width="30%" border=0 align="center"></a>
@section SMPI_what What can run within SMPI?

You can run unmodified MPI applications (both C and Fortran) within
SMPI, provided that you only use MPI calls that we implemented. Global
variables should be handled correctly on Linux systems.
@subsection SMPI_what_coverage MPI coverage of SMPI

Our coverage of the interface is very decent, but still incomplete;
given the size of the MPI standard, we may well never manage to
implement absolutely all existing primitives. Currently, we have very
sparse support for one-sided communications, and almost none for I/O
primitives. But our coverage is still very decent: we pass a very
large number of the MPICH coverage tests.

The full list of not yet implemented functions is documented in the
file @ref include/smpi/smpi.h, between two lines containing the
<tt>FIXME</tt> marker. If you really need a missing feature, please
get in touch with us: we can guide you through the SimGrid code to
help you implement it, and we'd be glad to integrate it into the main
project afterward if you contribute it back.
@subsection SMPI_what_globals Global variables

Concerning the globals, the problem comes from the fact that usually,
MPI processes run as real UNIX processes, while they are all folded
into threads of a unique system process in SMPI. Global variables are
usually private to each MPI process, but they become shared between
the processes in SMPI. This point is rather problematic, and currently
forces you to modify your application to privatize the global
variables.
We tried several techniques to work around this. We used to have a
script that automatically privatized the globals through static
analysis of the source code, but it was not robust enough to be used
in production. This issue, as well as several potential solutions, is
discussed in the article "Automatic Handling of Global Variables for
Multi-threaded MPI Programs",
available at http://charm.cs.illinois.edu/newPapers/11-23/paper.pdf
(note that this article does not deal with SMPI but with a competing
solution called AMPI that suffers from the same issue).
SimGrid can duplicate and dynamically switch the .data and .bss
segments of the ELF process when switching MPI ranks, allowing each
rank to have its own copy of the global variables. This feature is
expected to work correctly on Linux and BSD, so smpirun activates it
by default. As no copy is involved, performance should not be altered
(but memory occupation will be higher).
If you want to turn it off, pass \c -no-privatize to smpirun. This may
be necessary if your application uses dynamic libraries, as the global
variables of these libraries will not be privatized. You can fix this
by linking statically with these libraries (but NOT with libsimgrid,
as we need SimGrid's own global variables).
@section SMPI_adapting Adapting your MPI code for further scalability

As detailed in the reference article (available at
http://hal.inria.fr/inria-00527150), you may want to adapt your code
to improve the simulation performance. But these tricks may seriously
hinder the result quality (or even prevent the app from running) if
used wrongly. We assume that if you want to simulate an HPC
application, you know what you are doing. Don't prove us wrong!
@subsection SMPI_adapting_size Reducing your memory footprint

If you get short on memory (the whole app is executed on a single node
when simulated), you should have a look at the SMPI_SHARED_MALLOC and
SMPI_SHARED_FREE macros. They allow sharing memory areas between
processes: the purpose of these macros is that the same malloc line on
each process will point to the exact same memory area. So if you have
a malloc of 2M and you have 16 processes, this macro will reduce your
memory consumption from 2M*16 to 2M only: one block for all processes.
If your program is ok with a block containing garbage values, because
all processes write and read to the same place without any kind of
coordination, then this macro can dramatically shrink your memory
consumption. For example, this will be very beneficial to a matrix
multiplication code, as all blocks will be stored in the same area. Of
course, the resulting computations will be useless, but you can still
study the application behavior this way.
Naturally, this won't work if your code is data-dependent. For
example, a Jacobi iterative computation depends on the result computed
by the code to detect convergence conditions, so turning it into
garbage by sharing the same memory area between processes does not
seem very wise. You cannot use the SMPI_SHARED_MALLOC macro in this
case, sorry.

This feature is demoed by the example file
<tt>examples/smpi/NAS/dt.c</tt>.
@subsection SMPI_adapting_speed Toward faster simulations

If your simulation runs too slow, try using SMPI_SAMPLE_LOCAL,
SMPI_SAMPLE_GLOBAL and friends to indicate which computation loops can
be sampled. Some of the loop iterations will be executed to measure
their duration, and this duration will be used for the subsequent
iterations. These samples are done per processor with
SMPI_SAMPLE_LOCAL, and shared between all processors with
SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution
times of your loop iterations are not stable.

This feature is demoed by the example file
<tt>examples/smpi/NAS/ep.c</tt>.
@section SMPI_accuracy Ensuring accurate simulations

Out of the box, SimGrid may give you fairly accurate results, but
there are plenty of factors that could go wrong and make your results
inaccurate or even plainly wrong. Actually, you can only get accurate
results out of a nicely built model, including both the system
hardware and your application. Such models are hard to pass over and
reuse in other settings, because elements that are not relevant to one
application (say, the latency of point-to-point communications,
collective operation implementation details or CPU-network
interaction) may be crucial to another one. The dream of the perfect
model, encompassing every aspect, is only a chimera, as the only
perfect model of reality is reality itself. If you go for simulation,
then you have to ignore some irrelevant aspects of reality, but which
aspects are irrelevant is actually application-dependent...
The only way to assess whether your settings provide accurate results
is to double-check these results. If possible, you should first run
the same experiment in simulation and in real life, gathering as much
information as you can. Try to understand the discrepancies that you
observe between the two settings (visualization can be precious for
that). Then, try to modify your model (of the platform, of the
collective operations) to reduce the most prominent differences.
If the discrepancies come from the computing time, try adapting the
\c smpi/host-speed configuration item: reduce it if your simulation
runs faster than the real execution. If the error comes from the
communications, then you need to fiddle with your platform file.
Be inventive in your modeling. Don't be afraid if the names given by
SimGrid do not match the real names: we got very good results by
modeling multicore/GPU machines as a set of separate hosts
interconnected with very fast networks (but don't trust your model
just because it has the right names in the right places either).
Finally, you may want to check [this
article](https://hal.inria.fr/hal-00907887) on the classical pitfalls
in modeling distributed systems.
/** @example include/smpi/smpi.h */