X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/fbcf6ab31cae1988be858f9f894dafe529c575d7..45a1ba5b81958f1dc2d23bc6ad4dccbdeeaf64fe:/docs/source/Tutorial_MPI_Applications.rst

diff --git a/docs/source/Tutorial_MPI_Applications.rst b/docs/source/Tutorial_MPI_Applications.rst
index a1b93b5036..b88ddbc237 100644
--- a/docs/source/Tutorial_MPI_Applications.rst
+++ b/docs/source/Tutorial_MPI_Applications.rst
@@ -13,11 +13,11 @@ C/C++/F77/F90 applications should run out of the box in this
 environment. In fact, almost all proxy apps provided by the `ExaScale
 Project `_ only require minor
 modifications to `run on top of SMPI
-`_.
+`_.
 
-This setting permits to debug your MPI applications in a perfectly
-reproducible setup, with no Heisenbugs. Enjoy the full Clairevoyance
-provided by the simulator while running what-if analysis on platforms
+This setting permits one to debug your MPI applications in a perfectly
+reproducible setup, with no Heisenbugs. Enjoy the full Clairvoyance
+provided by the simulator while running what-if analyses on platforms
 that are still to be built! Several `production-grade MPI applications
 `_
 use SimGrid for their integration and performance testing.
@@ -41,7 +41,7 @@ How does it work?
 
 In SMPI, communications are simulated while computations are
 emulated. This means that while computations occur as they would in
-the real systems, communication calls are intercepted and achived by
+the real systems, communication calls are intercepted and achieved by
 the simulator.
 
 To start using SMPI, you just need to compile your application with
@@ -58,7 +58,7 @@ per MPI rank as if it was another dynamic library. Then,
 MPI communication calls are implemented using SimGrid: data is exchanged
 through memory copy, while the simulator's performance models are
 used to predict the time taken by each communications. Any computations
-occuring between two MPI calls are benchmarked, and the corresponding
+occurring between two MPI calls are benchmarked, and the corresponding
 time is reported into the simulator.
 
 .. image:: /tuto_smpi/img/big-picture.svg
@@ -95,12 +95,16 @@ simulated platform as a graph of hosts and network links. The elements
 basic elements (with :ref:`pf_tag_host` and :ref:`pf_tag_link`) are
 described first, and then the routes between
-any pair of hosts are explicitely given with :ref:`pf_tag_route`. Any
-host must be given a computational speed (in flops) while links must
-be given a latency (in seconds) and a bandwidth (in bytes per
-second). Note that you can write 1Gflops instead of 1000000000flops,
-and similar. Last point: :ref:`pf_tag_route`s are symmetrical by
-default (but this can be changed).
+any pair of hosts are explicitly given with :ref:`pf_tag_route`.
+
+Any host must be given a computational speed in flops while links must
+be given a latency and a bandwidth. You can write 1Gf for
+1,000,000,000 flops (full list of units in the reference guide of
+:ref:`pf_tag_host` and :ref:`pf_tag_link`).
+
+Routes defined with :ref:`pf_tag_route` are symmetrical by default,
+meaning that the list of traversed links from A to B is the same as
+from B to A. Explicitly define non-symmetrical routes if you prefer.
 
 Cluster with a Crossbar
 .......................
 
@@ -622,8 +626,8 @@ Further Readings
 
 You may also be interested in the `SMPI reference article
 `_ or these `introductory slides
-`_. The `SMPI
-reference documentation `_ covers much more content than
+`_. The :ref:`SMPI
+reference documentation ` covers much more content than
 this short tutorial.
 
 Finally, we regularly use SimGrid in our teachings on MPI. This way,
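
For readers who want to see the rewritten platform paragraph made concrete, here is a minimal platform sketch: hosts carry a speed in flops (1Gf stands for 1,000,000,000 flops), links carry a latency and a bandwidth, and a :ref:`pf_tag_route` lists the links connecting a pair of hosts. The host and link names and the attribute values below are illustrative assumptions, not taken from the tutorial.

.. code-block:: xml

   <?xml version='1.0'?>
   <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
   <platform version="4.1">
     <zone id="AS0" routing="Full">
       <!-- each host declares a computational speed -->
       <host id="host0" speed="1Gf"/>
       <host id="host1" speed="2Gf"/>
       <!-- each link declares a latency and a bandwidth -->
       <link id="link0" bandwidth="125MBps" latency="50us"/>
       <!-- the route lists the links traversed between the two hosts -->
       <route src="host0" dst="host1">
         <link_ctn id="link0"/>
       </route>
     </zone>
   </platform>

Because routes are symmetrical by default, this single :ref:`pf_tag_route` also covers traffic from host1 back to host0; a one-way route would be declared explicitly (see the symmetrical attribute in the :ref:`pf_tag_route` reference).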