-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_bcast.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [ 0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [ 25.010400] (2:p1@host1) p1 bcast 5e8 25.010400
-> [ 25.015600] (1:p0@host0) p0 bcast 5e8 25.015600
-> [ 25.015600] (3:p2@host2) p2 bcast 5e8 25.015600
-> [ 45.010400] (2:p1@host1) p1 compute 2e8 20.000000
-> [ 75.015600] (1:p0@host0) p0 compute 5e8 50.000000
-> [ 75.015600] (3:p2@host2) p2 compute 5e8 50.000000
-> [100.026000] (2:p1@host1) p1 bcast 5e8 55.015600
-> [100.031200] (1:p0@host0) p0 bcast 5e8 25.015600
-> [100.031200] (3:p2@host2) p2 bcast 5e8 25.015600
-> [120.026000] (2:p1@host1) p1 compute 2e8 20.000000
-> [150.031200] (1:p0@host0) p0 compute 5e8 50.000000
-> [150.031200] (3:p2@host2) p2 compute 5e8 50.000000
-> [175.036400] (2:p1@host1) p1 reduce 5e8 5e8 55.010400
-> [175.036400] (3:p2@host2) p2 reduce 5e8 5e8 25.005200
-> [225.036712] (0:@) Simulation time 225.037
-> [225.036712] (1:p0@host0) p0 reduce 5e8 5e8 75.005512
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_reduce.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [ 0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [ 25.005200] (2:p1@host1) p1 reduce 5e8 5e8 25.005200
-> [ 25.005200] (3:p2@host2) p2 reduce 5e8 5e8 25.005200
-> [ 75.005200] (2:p1@host1) p1 compute 5e8 50.000000
-> [ 75.005200] (3:p2@host2) p2 compute 5e8 50.000000
-> [ 75.005512] (1:p0@host0) p0 reduce 5e8 5e8 75.005512
-> [125.005512] (0:@) Simulation time 125.006
-> [125.005512] (1:p0@host0) p0 compute 5e8 50.000000
-
-! output sort
-$ ./actions --log=actions.thres=verbose homogeneous_3_hosts.xml deployment.xml actions_with_isend.txt --log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n
-> [ 0.000000] (2:p1@host1) p1 Irecv p0 0.000000
-> [ 0.000000] (3:p2@host2) p2 Irecv p1 0.000000
-> [ 50.000000] (3:p2@host2) p2 compute 5e8 50.000000
-> [ 50.005200] (1:p0@host0) p0 send p1 1e9 50.005200
-> [100.000000] (2:p1@host1) p1 compute 1e9 100.000000
-> [100.000156] (2:p1@host1) p1 wait 0.000156
-> [150.005200] (1:p0@host0) p0 compute 1e9 100.000000
-> [150.005356] (2:p1@host1) p1 send p2 1e9 50.005200
-> [150.005512] (3:p2@host2) p2 wait 100.005512
-> [150.005512] (3:p2@host2) p2 Isend p0 1e9 0.000000
-> [200.005512] (3:p2@host2) p2 compute 5e8 50.000000
-> [200.010712] (0:@) Simulation time 200.011
-> [200.010712] (1:p0@host0) p0 recv p2 50.005512
+! output sort 19
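+# tesh sorts both the expected and the actual output before comparing;
+# the "19" limits the sort key to each line's first 19 characters (the
+# "[%10.6r] " timestamp field), so processes that act at the same
+# simulated time may be logged in any order without failing the test.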
+$ ${bindir:=.}/actions --log=actions.thres=verbose ${srcdir:=.}/../../platforms/small_platform_fatpipe.xml deployment.xml actions_reduce.txt "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+> WARNING: THIS BINARY IS KINDA DEPRECATED
+> This example is still relevant if you want to learn about MSG-based trace replay, but if you want to simulate MPI-like traces, you should use the newer version that is in the examples/smpi/replay directory instead.
+> [ 0.000000] (1:p0@Tremblay) p0 comm_size 3 0.000000
+> [ 1.037020] (2:p1@Ruby) p1 reduce 5e8 5e8 1.037020
+> [ 1.037020] (3:p2@Perl) p2 reduce 5e8 5e8 1.037020
+> [ 6.134119] (2:p1@Ruby) p1 compute 5e8 5.097100
+> [ 6.134119] (1:p0@Tremblay) p0 reduce 5e8 5e8 6.134119
+> [ 6.134119] (3:p2@Perl) p2 compute 5e8 5.097100
+> [ 11.231219] (1:p0@Tremblay) p0 compute 5e8 5.097100
+> [ 11.231219] (0:maestro@) Simulation time 11.2312
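+#
+# For reference, the replayed trace lists one action per line as
+# "<process> <action> <args...>". Judging from the log above,
+# actions_reduce.txt presumably resembles this hypothetical
+# reconstruction (the verbatim file may differ):
+#   p0 comm_size 3
+#   p0 reduce 5e8 5e8
+#   p0 compute 5e8
+#   p1 reduce 5e8 5e8
+#   p1 compute 5e8
+#   p2 reduce 5e8 5e8
+#   p2 compute 5e8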