X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/8e91e3324e3c27ba75a8c781cca897f35220272a..fea2606dff029fec63088d8e3d9f42925a67efea:/examples/msg/actions/actions.tesh

diff --git a/examples/msg/actions/actions.tesh b/examples/msg/actions/actions.tesh
index e1eb299d91..bce14deee9 100644
--- a/examples/msg/actions/actions.tesh
+++ b/examples/msg/actions/actions.tesh
@@ -1,87 +1,42 @@
 # A little tesh file testing most MPI-related actions
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment_split.xml --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [  0.000000] (0:@) Using raw contexts. Because the glibc is just not good enough for us.
-> [500.005200] (1:p0@host0) p0 recv p1 500.005200
-> [500.005200] (2:p1@host1) p1 send p0 1e10 500.005200
-> [500.005201] (1:p0@host0) p0 compute 12 0.000001
-> [512.005200] (0:@) Simulation time 512.005
-> [512.005200] (2:p1@host1) p1 sleep 12 12.000000
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_allReduce.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [  0.000000] (0:@) Using raw contexts. Because the glibc is just not good enough for us.
-> [  0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [100.010400] (1:p0@host0) p0 allReduce 5e8 5e8 100.010400
-> [100.010400] (2:p1@host1) p1 allReduce 5e8 5e8 100.010400
-> [100.010400] (3:p2@host2) p2 allReduce 5e8 5e8 100.010400
-> [150.010400] (0:@) Simulation time 150.01
-> [150.010400] (1:p0@host0) p0 compute 5e8 50.000000
-> [150.010400] (2:p1@host1) p1 compute 5e8 50.000000
-> [150.010400] (3:p2@host2) p2 compute 5e8 50.000000
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_barrier.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [  0.000000] (0:@) Using raw contexts. Because the glibc is just not good enough for us.
-> [  0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [  0.000000] (2:p1@host1) p1 comm_size 3 0.000000
-> [  0.000000] (3:p2@host2) p2 comm_size 3 0.000000
-> [  0.505200] (1:p0@host0) p0 send p1 1E7 0.505200
-> [  0.505200] (2:p1@host1) p1 recv p0 0.505200
-> [  0.905200] (2:p1@host1) p1 compute 4E6 0.400000
-> [  0.905200] (3:p2@host2) p2 compute 4E6 0.400000
-> [  0.955200] (0:@) Simulation time 0.9552
-> [  0.955200] (1:p0@host0) p0 compute 4.5E6 0.450000
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_bcast.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [  0.000000] (0:@) Using raw contexts. Because the glibc is just not good enough for us.
-> [  0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [ 25.005200] (1:p0@host0) p0 bcast 5e8 25.005200
-> [ 25.005200] (2:p1@host1) p1 bcast 5e8 25.005200
-> [ 25.005200] (3:p2@host2) p2 bcast 5e8 25.005200
-> [ 45.005200] (2:p1@host1) p1 compute 2e8 20.000000
-> [ 75.005200] (1:p0@host0) p0 compute 5e8 50.000000
-> [ 75.005200] (3:p2@host2) p2 compute 5e8 50.000000
-> [100.010400] (1:p0@host0) p0 bcast 5e8 25.005200
-> [100.010400] (2:p1@host1) p1 bcast 5e8 55.005200
-> [100.010400] (3:p2@host2) p2 bcast 5e8 25.005200
-> [120.010400] (2:p1@host1) p1 compute 2e8 20.000000
-> [150.010400] (1:p0@host0) p0 compute 5e8 50.000000
-> [150.010400] (3:p2@host2) p2 compute 5e8 50.000000
-> [175.015600] (2:p1@host1) p1 reduce 5e8 5e8 55.005200
-> [175.015600] (3:p2@host2) p2 reduce 5e8 5e8 25.005200
-> [225.015600] (0:@) Simulation time 225.016
-> [225.015600] (1:p0@host0) p0 reduce 5e8 5e8 75.005200
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_reduce.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [  0.000000] (0:@) Using raw contexts. Because the glibc is just not good enough for us.
-> [  0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [ 25.005200] (2:p1@host1) p1 reduce 5e8 5e8 25.005200
-> [ 25.005200] (3:p2@host2) p2 reduce 5e8 5e8 25.005200
-> [ 75.005200] (1:p0@host0) p0 reduce 5e8 5e8 75.005200
-> [ 75.005200] (2:p1@host1) p1 compute 5e8 50.000000
-> [ 75.005200] (3:p2@host2) p2 compute 5e8 50.000000
-> [125.005200] (0:@) Simulation time 125.005
-> [125.005200] (1:p0@host0) p0 compute 5e8 50.000000
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_with_isend.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [  0.000000] (0:@) Using raw contexts. Because the glibc is just not good enough for us.
-> [  0.000000] (2:p1@host1) p1 Irecv p0 0.000000
-> [  0.000000] (3:p2@host2) p2 Irecv p1 0.000000
-> [ 50.000000] (3:p2@host2) p2 compute 5e8 50.000000
-> [ 50.005200] (1:p0@host0) p0 send p1 1e9 50.005200
-> [100.000000] (2:p1@host1) p1 compute 1e9 100.000000
-> [100.000000] (2:p1@host1) p1 wait 0.000000
-> [150.005200] (1:p0@host0) p0 compute 1e9 100.000000
-> [150.005200] (2:p1@host1) p1 send p2 1e9 50.005200
-> [150.005200] (3:p2@host2) p2 wait 100.005200
-> [150.005200] (3:p2@host2) p2 Isend p0 1e9 0.000000
-> [200.005200] (3:p2@host2) p2 compute 5e8 50.000000
-> [200.010400] (0:@) Simulation time 200.01
-> [200.010400] (1:p0@host0) p0 recv p2 50.005200
-
+! output sort 19
+$ ${bindir:=.}/actions --log=actions.thres=verbose ${srcdir:=.}/../../platforms/small_platform_fatpipe.xml deployment_split.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+> WARNING: THIS BINARY IS KINDA DEPRECATED
+> This example is still relevant if you want to learn about MSG-based trace replay, but if you want to simulate MPI-like traces, you should use the newer version that is in the examples/smpi/replay directory instead.
+> [ 20.703314] (1:p0@Tremblay) p0 recv p1 20.703314
+> [ 20.703314] (2:p1@Ruby) p1 send p0 1e10 20.703314
+> [ 20.703314] (1:p0@Tremblay) p0 compute 12 0.000000
+> [ 32.703314] (2:p1@Ruby) p1 sleep 12 12.000000
+> [ 32.703314] (0:maestro@) Simulation time 32.7033
+
+! output sort 19
+$ ${bindir:=.}/actions --log=actions.thres=verbose ${srcdir:=.}/../../platforms/small_platform_fatpipe.xml deployment.xml mpi_actions_shared.txt "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+> WARNING: THIS BINARY IS KINDA DEPRECATED
+> This example is still relevant if you want to learn about MSG-based trace replay, but if you want to simulate MPI-like traces, you should use the newer version that is in the examples/smpi/replay directory instead.
+> [  0.000000] (1:p0@Tremblay) p0 comm_size 3 0.000000
+> [  1.037020] (1:p0@Tremblay) p0 bcast 5e8 1.037020
+> [  1.037020] (2:p1@Ruby) p1 bcast 5e8 1.037020
+> [  1.037020] (3:p2@Perl) p2 bcast 5e8 1.037020
+> [  1.082894] (1:p0@Tremblay) p0 compute 4.5E6 0.045874
+> [  1.123670] (1:p0@Tremblay) p0 compute 4E6 0.040777
+> [  1.149156] (1:p0@Tremblay) p0 compute 2.5E6 0.025485
+> [  1.149156] (2:p1@Ruby) p1 Irecv p0 0.000000
+> [  1.149156] (3:p2@Perl) p2 Irecv p1 0.000000
+> [  3.221244] (1:p0@Tremblay) p0 send p1 1e9 2.072088
+> [  6.246256] (3:p2@Perl) p2 compute 5e8 5.097100
+> [ 11.343355] (2:p1@Ruby) p1 compute 1e9 10.194200
+> [ 11.343355] (2:p1@Ruby) p1 wait 0.000000
+> [ 11.343355] (2:p1@Ruby) p1 Isend p2 1e9 0.000000
+> [ 13.415443] (1:p0@Tremblay) p0 compute 1e9 10.194200
+> [ 13.415443] (3:p2@Perl) p2 wait 7.169187
+> [ 14.452463] (2:p1@Ruby) p1 reduce 5e8 5e8 1.037020
+> [ 14.452463] (3:p2@Perl) p2 reduce 5e8 5e8 1.037020
+> [ 19.549562] (1:p0@Tremblay) p0 reduce 5e8 5e8 6.134119
+> [ 19.549562] (2:p1@Ruby) p1 compute 5e8 5.097100
+> [ 19.549562] (3:p2@Perl) p2 compute 5e8 5.097100
+> [ 24.646662] (1:p0@Tremblay) p0 compute 5e8 5.097100
+> [ 31.817801] (0:maestro@) Simulation time 31.8178
+> [ 31.817801] (1:p0@Tremblay) p0 allReduce 5e8 5e8 7.171139
+> [ 31.817801] (2:p1@Ruby) p1 allReduce 5e8 5e8 7.171139
+> [ 31.817801] (3:p2@Perl) p2 allReduce 5e8 5e8 7.171139
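
# Note on the trace format (not part of the diff above): the action files
# replayed by this test, such as mpi_actions_shared.txt, use one action per
# line: the acting process, the action name, then its arguments. These are
# the same tokens echoed back in the expected log lines above, with the
# measured duration appended by the logger. A minimal sketch of such a trace
# for two processes; the values below are made up for illustration and do
# not come from any file in this test:
#
#   p0 comm_size 2
#   p0 send p1 1e9
#   p0 compute 5e8
#   p1 comm_size 2
#   p1 recv p0
#   p1 compute 5e8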