X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/c20b20602a3b551b39aa7910874cca47c45430ab..65fb54463298816c1902c1a9167bf82d6bdb1339:/examples/msg/actions/mpi_actions.tesh

diff --git a/examples/msg/actions/mpi_actions.tesh b/examples/msg/actions/mpi_actions.tesh
deleted file mode 100644
index 001fee7768..0000000000
--- a/examples/msg/actions/mpi_actions.tesh
+++ /dev/null
@@ -1,42 +0,0 @@
-# A little tesh file testing most MPI-related actions
-
-! output sort 19
-$ ${bindir:=.}/mpi_actions --log=actions.thres=verbose ${srcdir:=.}/../../platforms/small_platform_fatpipe.xml mpi_deployment_split.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
-> WARNING: THIS BINARY IS KINDA DEPRECATED
-> This example is still relevant if you want to learn about MSG-based trace replay, but if you want to simulate MPI-like traces, you should use the newer version that is in the examples/smpi/replay directory instead.
-> [ 20.703314] (1:p0@Tremblay) p0 recv p1 20.703314
-> [ 20.703314] (2:p1@Ruby) p1 send p0 1e10 20.703314
-> [ 20.703314] (1:p0@Tremblay) p0 compute 12 0.000000
-> [ 32.703314] (2:p1@Ruby) p1 sleep 12 12.000000
-> [ 32.703314] (0:maestro@) Simulation time 32.7033
-
-! output sort 19
-$ ${bindir:=.}/mpi_actions --log=actions.thres=verbose ${srcdir:=.}/../../platforms/small_platform_fatpipe.xml mpi_deployment.xml mpi_actions.txt "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
-> WARNING: THIS BINARY IS KINDA DEPRECATED
-> This example is still relevant if you want to learn about MSG-based trace replay, but if you want to simulate MPI-like traces, you should use the newer version that is in the examples/smpi/replay directory instead.
-> [  0.000000] (1:p0@Tremblay) p0 comm_size 3 0.000000
-> [  1.037020] (1:p0@Tremblay) p0 bcast 5e8 1.037020
-> [  1.037020] (2:p1@Ruby) p1 bcast 5e8 1.037020
-> [  1.037020] (3:p2@Perl) p2 bcast 5e8 1.037020
-> [  1.082894] (1:p0@Tremblay) p0 compute 4.5E6 0.045874
-> [  1.123670] (1:p0@Tremblay) p0 compute 4E6 0.040777
-> [  1.149156] (1:p0@Tremblay) p0 compute 2.5E6 0.025485
-> [  1.149156] (2:p1@Ruby) p1 Irecv p0 0.000000
-> [  1.149156] (3:p2@Perl) p2 Irecv p1 0.000000
-> [  3.221244] (1:p0@Tremblay) p0 send p1 1e9 2.072088
-> [  6.246256] (3:p2@Perl) p2 compute 5e8 5.097100
-> [ 11.343355] (2:p1@Ruby) p1 compute 1e9 10.194200
-> [ 11.343355] (2:p1@Ruby) p1 wait 0.000000
-> [ 11.343355] (2:p1@Ruby) p1 Isend p2 1e9 0.000000
-> [ 13.415443] (1:p0@Tremblay) p0 compute 1e9 10.194200
-> [ 13.415443] (3:p2@Perl) p2 wait 7.169187
-> [ 14.452463] (2:p1@Ruby) p1 reduce 5e8 5e8 1.037020
-> [ 14.452463] (3:p2@Perl) p2 reduce 5e8 5e8 1.037020
-> [ 19.549562] (1:p0@Tremblay) p0 reduce 5e8 5e8 6.134119
-> [ 19.549562] (2:p1@Ruby) p1 compute 5e8 5.097100
-> [ 19.549562] (3:p2@Perl) p2 compute 5e8 5.097100
-> [ 24.646662] (1:p0@Tremblay) p0 compute 5e8 5.097100
-> [ 31.817801] (0:maestro@) Simulation time 31.8178
-> [ 31.817801] (1:p0@Tremblay) p0 allReduce 5e8 5e8 7.171139
-> [ 31.817801] (2:p1@Ruby) p1 allReduce 5e8 5e8 7.171139
-> [ 31.817801] (3:p2@Perl) p2 allReduce 5e8 5e8 7.171139