-> [ 0.000000] (1:p0@host0) p0 comm_size 3 0.000000
-> [ 0.547742] (1:p0@host0) p0 bcast 5e8 0.547742
-> [ 0.547742] (2:p1@host1) p1 bcast 5e8 0.547742
-> [ 0.547742] (3:p2@host2) p2 bcast 5e8 0.547742
-> [ 20.547742] (2:p1@host1) p1 compute 2e8 20.000000
-> [ 50.547742] (1:p0@host0) p0 compute 5e8 50.000000
-> [ 50.547742] (3:p2@host2) p2 compute 5e8 50.000000
-> [ 51.095484] (1:p0@host0) p0 bcast 5e8 0.547742
-> [ 51.095484] (2:p1@host1) p1 bcast 5e8 30.547742
-> [ 51.095484] (3:p2@host2) p2 bcast 5e8 0.547742
-> [ 71.095484] (2:p1@host1) p1 compute 2e8 20.000000
-> [101.095484] (1:p0@host0) p0 compute 5e8 50.000000
-> [101.095484] (3:p2@host2) p2 compute 5e8 50.000000
-> [101.643226] (2:p1@host1) p1 reduce 5e8 5e8 30.547742
-> [101.643226] (3:p2@host2) p2 reduce 5e8 5e8 0.547742
-> [151.643226] (0:@) Simulation time 151.643
-> [151.643226] (1:p0@host0) p0 reduce 5e8 5e8 50.547742
+> This example is still relevant if you want to learn about MSG-based trace replay, but to simulate MPI-like traces you should instead use the newer version in the examples/smpi/replay directory.
+> [ 0.000000] (1:p0@Tremblay) p0 comm_size 3 0.000000
+> [ 1.037020] (2:p1@Ruby) p1 bcast 5e8 1.037020
+> [ 1.037020] (3:p2@Perl) p2 bcast 5e8 1.037020
+> [ 1.037020] (1:p0@Tremblay) p0 bcast 5e8 1.037020
+> [ 3.075860] (2:p1@Ruby) p1 compute 2e8 2.038840
+> [ 6.134119] (1:p0@Tremblay) p0 compute 5e8 5.097100
+> [ 6.134119] (3:p2@Perl) p2 compute 5e8 5.097100
+> [ 7.171139] (2:p1@Ruby) p1 bcast 5e8 4.095279
+> [ 7.171139] (3:p2@Perl) p2 bcast 5e8 1.037020
+> [ 7.171139] (1:p0@Tremblay) p0 bcast 5e8 1.037020
+> [ 9.209979] (2:p1@Ruby) p1 compute 2e8 2.038840
+> [ 12.268239] (1:p0@Tremblay) p0 compute 5e8 5.097100
+> [ 12.268239] (3:p2@Perl) p2 compute 5e8 5.097100
+> [ 13.305258] (2:p1@Ruby) p1 reduce 5e8 5e8 4.095279
+> [ 13.305258] (3:p2@Perl) p2 reduce 5e8 5e8 1.037020
+> [ 18.402358] (1:p0@Tremblay) p0 reduce 5e8 5e8 6.134119
+> [ 18.402358] (0:@) Simulation time 18.4024
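For context, the `comm_size`, `bcast`, `compute`, and `reduce` events in the output above are driven by per-process action trace files that the replay example feeds to the simulator. The following is only a rough sketch of what p0's trace might contain, reconstructed from the log lines above; the actual trace files shipped with the example may differ in name and content:

```
p0 comm_size 3
p0 bcast 5e8
p0 compute 5e8
p0 bcast 5e8
p0 compute 5e8
p0 reduce 5e8 5e8
```

Each line names the acting process, the action, and its size arguments (here, 5e8 bytes or flops), matching the per-action values echoed in the timestamped output.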