X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/8f58249ee1e3d4fff121c049fc018bf6fa9555a8..a805c48862448771e5c0b108e9a150ba0a54ccc9:/docs/source/Tutorial_MPI_Applications.rst diff --git a/docs/source/Tutorial_MPI_Applications.rst b/docs/source/Tutorial_MPI_Applications.rst index a2638d2293..b486608b97 100644 --- a/docs/source/Tutorial_MPI_Applications.rst +++ b/docs/source/Tutorial_MPI_Applications.rst @@ -186,8 +186,8 @@ cluster (and thus reduce its price) while maintaining a high bisection bandwidth and a relatively low diameter. To model this in SimGrid, pass a ``topology="FAT_TREE"`` attribute to your cluster. The ``topo_parameters=#levels;#downlinks;#uplinks;link count`` follows the -semantic introduced in `Figure 1B of this article -`_. +semantic introduced in `Figure 1(b) of this article +`_. Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2`` @@ -268,7 +268,7 @@ image. Once you `installed Docker itself .. code-block:: console $ docker pull simgrid/tuto-smpi - $ docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash + $ docker run --user $UID:$GID -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash This will start a new container with all you need to take this tutorial, and create a ``smpi-tutorial`` directory in your home on @@ -544,7 +544,7 @@ The computing part of this example is the matrix multiplication routine .. code-block:: console $ smpicxx -O3 gemm_mpi.cpp -o gemm - $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/running-power:1000000000 ./gemm + $ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/host-speed:1000000000 ./gemm This should end quite quickly, as the size of each matrix is only 1000x1000. But what happens if we want to simulate larger runs? @@ -653,7 +653,7 @@ and use specific memory for the important parts. It can be freed afterward with SMPI_SHARED_FREE. If allocations are performed with malloc or calloc, SMPI (from version 3.25) provides the option -``--cfg=smpi/auto-shared-malloc-shared:n`` which will replace all allocations above size n bytes by +``--cfg=smpi/auto-shared-malloc-thresh:n`` which will replace all allocations above size n bytes by shared allocations. The value has to be carefully selected to avoid smaller control arrays, containing data necessary for the completion of the run. Try to run the (non modified) DT example again, with values going from 10 to 100,000 to show that