! expect return 2
! timeout 30
! output display
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 1 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=model-check/visited:100 --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 1 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=model-check/visited:100 --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
> [ 0.000000] (0:maestro@) Check the liveness property promela_bugged1_liveness
> [ 0.000000] (2:client@Boivin) Ask the request
> [ 0.000000] (3:client@Fafard) Ask the request
! expect return 2
! timeout 20
! output display
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 0 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 0 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
> [ 0.000000] (0:maestro@) Check the liveness property promela_bugged1_liveness
> [ 0.000000] (2:client@Boivin) Ask the request
> [ 0.000000] (3:client@Fafard) Ask the request
> [ 0.000000] (1:server@HostA) OK
> [ 0.000000] (4:client@HostD) Sent!
> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
> [ 0.000000] (3:client@HostC) Sent!
> [ 0.000000] (2:client@HostB) Sent!
> [ 0.000000] (3:client@HostC) Sent!
> [ 0.000000] (1:server@HostA) OK
> [ 0.000000] (4:client@HostD) Sent!
> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
> [ 0.000000] (2:client@HostB) Sent!
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2;1;1;2;4;1;1;3;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 22 unique states visited; 4 backtracks (56 transition replays, 30 states visited overall)
\ No newline at end of file
+> [ 0.000000] (0:maestro@) DFS exploration ended. 59 unique states visited; 14 backtracks (192 transition replays, 119 states visited overall)
\ No newline at end of file
! expect return 1
! timeout 20
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning --cfg=contexts/stack-size:256
-> [ 0.000000] (0:maestro@) Start a DFS exploration. Reduction is: dpor.
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 2
-> [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
-> [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) Counter-example execution trace:
-> [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
-> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
-> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) 3: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
-> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
-> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;3;1;3;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 1006 unique states visited; 350 backtracks (5319 transition replays, 3963 states visited overall)
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml --log=root.thresh:critical --log=mc.thresh:info
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [mc_ModelChecker/INFO] **************************
+> [0.000000] [mc_ModelChecker/INFO] *** PROPERTY NOT VALID ***
+> [0.000000] [mc_ModelChecker/INFO] **************************
+> [0.000000] [mc_ModelChecker/INFO] Counter-example execution trace:
+> [0.000000] [mc_ModelChecker/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 3: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;3;3;1'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 2091 unique states visited; 529 backtracks (8359 transition replays, 5739 states visited overall)
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 15 unique states visited; 5 backtracks (32 transition replays, 13 states visited overall)
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 169 unique states visited; 29 backtracks (261 transition replays, 64 states visited overall)
\ No newline at end of file
! expect return 1
! timeout 20
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning
> [ 0.000000] (0:maestro@) Start a DFS exploration. Reduction is: dpor.
> [ 0.000000] (2:client1@Bourassa) Sent!
> [ 0.000000] (1:server@Boivin) OK
> [ 0.000000] (3:client2@Fafard) Sent!
-> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
> [ 0.000000] (2:client1@Bourassa) Sent!
> [ 0.000000] (1:server@Boivin) OK
> [ 0.000000] (3:client2@Fafard) Sent!
> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) 2: iSend(mbox=0)
> [ 0.000000] (0:maestro@) 1: WaitComm(from 2 to 1, mbox=0, no timeout)
> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;2;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 18 unique states visited; 4 backtracks (36 transition replays, 14 states visited overall)
+> [ 0.000000] (0:maestro@) DFS exploration ended. 29 unique states visited; 5 backtracks (49 transition replays, 15 states visited overall)
\ No newline at end of file
> [Checker] BARRIER_WAIT(barrier: 0) (state=3)
> [Checker] BARRIER_WAIT(barrier: 0) (state=4)
> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=4)
+> [Checker] Dependent Transitions:
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
> [Checker] BARRIER_WAIT(barrier: 0) (state=3)
> [Checker] INDEPENDENT Transitions:
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
-> [Checker] Backtracking from 1;2
-> [Checker] DFS exploration ended. 5 unique states visited; 2 backtracks (7 transition replays, 1 states visited overall)
+> [Checker] Execute 2: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 1, state: 1, 0 interleaves)
+> [Checker] Execute 1: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 2, state: 6, 0 interleaves)
+> [Checker] Execute 1: BARRIER_WAIT(barrier: 0) (stack depth: 3, state: 7, 0 interleaves)
+> [Checker] Execute 2: BARRIER_WAIT(barrier: 0) (stack depth: 4, state: 8, 0 interleaves)
+> [Checker] Execution came to an end at 2;1;1;2;0 (state: 9, depth: 5)
+> [Checker] Backtracking from 2;1;1;2;0
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=7)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=8)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=6)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=8)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=7)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=6)
+> [Checker] DFS exploration ended. 9 unique states visited; 2 backtracks (10 transition replays, 0 states visited overall)
$ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-barrier 3 --log=s4u_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
> [Checker] Start a DFS exploration. Reduction is: dpor.
> [Checker] INDEPENDENT Transitions:
> [Checker] BARRIER_WAIT(barrier: 0) (state=4)
> [Checker] BARRIER_WAIT(barrier: 0) (state=6)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=6)
> [Checker] INDEPENDENT Transitions:
> [Checker] BARRIER_WAIT(barrier: 0) (state=4)
> [Checker] BARRIER_WAIT(barrier: 0) (state=5)
> [Checker] INDEPENDENT Transitions:
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=3)
-> [Checker] Backtracking from 1;2;3
> [Checker] INDEPENDENT Transitions:
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
-> [Checker] DFS exploration ended. 7 unique states visited; 2 backtracks (10 transition replays, 2 states visited overall)
+> [Checker] Execute 3: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 2, state: 2, 0 interleaves)
+> [Checker] Execute 2: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 3, state: 8, 0 interleaves)
+> [Checker] Execute 1: BARRIER_WAIT(barrier: 0) (stack depth: 4, state: 9, 0 interleaves)
+> [Checker] Execute 2: BARRIER_WAIT(barrier: 0) (stack depth: 5, state: 10, 0 interleaves)
+> [Checker] Execute 3: BARRIER_WAIT(barrier: 0) (stack depth: 6, state: 11, 0 interleaves)
+> [Checker] Execution came to an end at 1;3;2;1;2;3;0 (state: 12, depth: 7)
+> [Checker] Backtracking from 1;3;2;1;2;3;0
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] DFS exploration ended. 12 unique states visited; 2 backtracks (14 transition replays, 1 states visited overall)
\ No newline at end of file
> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 6, 0 interleaves)
> [Checker] Execution came to an end at 1;1;1;2;2;2;0 (state: 7, depth: 7)
> [Checker] Backtracking from 1;1;1;2;2;2;0
+> [Checker] Dependent Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
+> [Checker] Dependent Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
> [Checker] INDEPENDENT Transitions:
> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
> [Checker] Dependent Transitions:
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 8, 0 interleaves)
-> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 9, 0 interleaves)
-> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 10, 0 interleaves)
-> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 11, 0 interleaves)
-> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 12, 0 interleaves)
-> [Checker] Execution came to an end at 2;1;2;2;1;1;0 (state: 13, depth: 7)
-> [Checker] Backtracking from 2;1;2;2;1;1;0
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=10)
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=11)
-> [Checker] Backtracking from 2;1;2;2
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=8)
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=9)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=8)
-> [Checker] DFS exploration ended. 13 unique states visited; 3 backtracks (18 transition replays, 3 states visited overall)
-
-$ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:2 --log=s4u_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
-> [App ] Configuration change: Set 'actors' to '2'
-> [Checker] Start a DFS exploration. Reduction is: dpor.
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 2, state: 2, 0 interleaves)
-> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 3, state: 3, 0 interleaves)
-> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 4, state: 4, 0 interleaves)
-> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 5, state: 5, 0 interleaves)
-> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 6, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (stack depth: 7, state: 7, 0 interleaves)
-> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 8, state: 8, 0 interleaves)
-> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 9, state: 9, 0 interleaves)
-> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 10, state: 10, 0 interleaves)
-> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 11, state: 11, 0 interleaves)
-> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 12, 0 interleaves)
-> [Checker] Execution came to an end at 1;1;1;2;2;2;3;3;3;4;4;4;0 (state: 13, depth: 13)
-> [Checker] Backtracking from 1;1;1;2;2;2;3;3;3;4;4;4;0
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 1, owner: -1) (state=9)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=8)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 7, state: 7, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 8, state: 14, 0 interleaves)
-> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 9, state: 15, 0 interleaves)
-> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: 3) (stack depth: 10, state: 16, 0 interleaves)
-> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 11, state: 17, 0 interleaves)
-> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 18, 0 interleaves)
-> [Checker] Execution came to an end at 1;1;1;2;2;2;4;3;4;4;3;3;0 (state: 19, depth: 13)
-> [Checker] Backtracking from 1;1;1;2;2;2;4;3;4;4;3;3;0
+> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (stack depth: 3, state: 3, 0 interleaves)
+> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: 2) (stack depth: 4, state: 8, 0 interleaves)
+> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 5, state: 9, 0 interleaves)
+> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 10, 0 interleaves)
+> [Checker] Execution came to an end at 1;1;2;1;2;2;0 (state: 11, depth: 7)
+> [Checker] Backtracking from 1;1;2;1;2;2;0
> [Checker] Dependent Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 1, owner: 3) (state=16)
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=17)
-> [Checker] Backtracking from 1;1;1;2;2;2;4;3;4;4
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=14)
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 4) (state=15)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=8)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=10)
> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=14)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=8)
+> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=9)
> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=8)
> [Checker] INDEPENDENT Transitions:
> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
> [Checker] Dependent Transitions:
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 20, 0 interleaves)
-> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 21, 0 interleaves)
-> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 22, 0 interleaves)
-> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 23, 0 interleaves)
-> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 24, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (stack depth: 7, state: 25, 0 interleaves)
-> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 8, state: 26, 0 interleaves)
-> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 9, state: 27, 0 interleaves)
-> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 10, state: 28, 0 interleaves)
-> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 11, state: 29, 0 interleaves)
-> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 30, 0 interleaves)
-> [Checker] Execution came to an end at 2;1;2;2;1;1;3;3;3;4;4;4;0 (state: 31, depth: 13)
-> [Checker] Backtracking from 2;1;2;2;1;1;3;3;3;4;4;4;0
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 1, owner: -1) (state=27)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=26)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=24)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25)
-> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 7, state: 25, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 8, state: 32, 0 interleaves)
-> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 9, state: 33, 0 interleaves)
-> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: 3) (stack depth: 10, state: 34, 0 interleaves)
-> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 11, state: 35, 0 interleaves)
-> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 36, 0 interleaves)
-> [Checker] Execution came to an end at 2;1;2;2;1;1;4;3;4;4;3;3;0 (state: 37, depth: 13)
-> [Checker] Backtracking from 2;1;2;2;1;1;4;3;4;4;3;3;0
+> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 12, 0 interleaves)
+> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 13, 0 interleaves)
+> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 14, 0 interleaves)
+> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 15, 0 interleaves)
+> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 16, 0 interleaves)
+> [Checker] Execution came to an end at 2;1;2;2;1;1;0 (state: 17, depth: 7)
+> [Checker] Backtracking from 2;1;2;2;1;1;0
> [Checker] Dependent Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 1, owner: 3) (state=34)
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=35)
-> [Checker] Backtracking from 2;1;2;2;1;1;4;3;4;4
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=32)
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 4) (state=33)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=16)
> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=32)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14)
+> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=15)
> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=24)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14)
> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22)
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23)
-> [Checker] Backtracking from 2;1;2;2
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20)
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12)
+> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=13)
> [Checker] Dependent Transitions:
> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20)
-> [Checker] DFS exploration ended. 37 unique states visited; 7 backtracks (76 transition replays, 33 states visited overall)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12)
+> [Checker] DFS exploration ended. 17 unique states visited; 3 backtracks (21 transition replays, 2 states visited overall)
+
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:2 --log=s4u_test.thres:critical
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '2'
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 128 unique states visited; 26 backtracks (296 transition replays, 143 states visited overall)
-$ ${bindir:=.}/../../../bin/simgrid-mc -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical
+> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '3'
> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 85 unique states visited; 15 backtracks (240 transition replays, 141 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 4645 unique states visited; 1082 backtracks (18004 transition replays, 12278 states visited overall)
#!/usr/bin/env tesh
$ ${bindir:=.}/s4u-synchro-mutex
-> [Jupiter:worker:(1) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock
+> [Jupiter:worker:(1) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(1) 0.000000] [s4u_test/INFO] I'm done, good bye
-> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock
+> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] I'm done, good bye
-> [Jupiter:worker:(5) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock
+> [Jupiter:worker:(5) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(5) 0.000000] [s4u_test/INFO] I'm done, good bye
-> [Jupiter:worker:(7) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock
+> [Jupiter:worker:(7) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(7) 0.000000] [s4u_test/INFO] I'm done, good bye
-> [Jupiter:worker:(9) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock
+> [Jupiter:worker:(9) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(9) 0.000000] [s4u_test/INFO] I'm done, good bye
-> [Jupiter:worker:(11) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock
+> [Jupiter:worker:(11) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(11) 0.000000] [s4u_test/INFO] I'm done, good bye
> [Tremblay:worker:(2) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a regular lock
> [Tremblay:worker:(2) 0.000000] [s4u_test/INFO] I'm done, good bye
#!/usr/bin/env tesh
-$ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-semaphore --log=sem_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true --log=mc_dfs.thres:info --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-semaphore --log=sem_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
+> [Checker] Configuration change: Set 'model-check/sleep-set' to 'true'
> [Checker] Start a DFS exploration. Reduction is: dpor.
-> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 2, state: 2, 0 interleaves)
-> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 3, state: 3, 0 interleaves)
-> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 4, state: 4, 0 interleaves)
-> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 5, state: 5, 0 interleaves)
-> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 6, state: 6, 0 interleaves)
-> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 7, state: 7, 0 interleaves)
-> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 8, state: 8, 0 interleaves)
-> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 9, state: 9, 0 interleaves)
-> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 10, state: 10, 0 interleaves)
-> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 11, state: 11, 0 interleaves)
-> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 12, state: 12, 0 interleaves)
-> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 13, state: 13, 0 interleaves)
-> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 14, state: 14, 0 interleaves)
-> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 15, state: 15, 0 interleaves)
-> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 16, state: 16, 0 interleaves)
-> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 17, state: 17, 0 interleaves)
-> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 18, state: 18, 0 interleaves)
-> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 19, state: 19, 0 interleaves)
-> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 20, state: 20, 0 interleaves)
-> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 21, state: 21, 0 interleaves)
-> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 22, state: 22, 0 interleaves)
-> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 23, state: 23, 0 interleaves)
-> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 24, state: 24, 0 interleaves)
-> [Checker] Execution came to an end at 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2;1;1;2;2;2;0 (state: 25, depth: 25)
-> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2;1;1;2;2;2;0
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 1) (state=21)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=22)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=20)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=22)
-> [Checker] Dependent Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 0) (state=19)
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=20)
-> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=16)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 1) (state=15)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=14)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17)
-> [Checker] Dependent Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 0) (state=13)
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=14)
-> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=10)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 1) (state=9)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=8)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11)
-> [Checker] Dependent Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 0) (state=7)
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=8)
-> [Checker] Backtracking from 1;1;1;1;2;2;2
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=4)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_UNLOCK(semaphore: 1) (state=3)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=2)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=1)
-> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5)
-> [Checker] DFS exploration ended. 25 unique states visited; 4 backtracks (64 transition replays, 36 states visited overall)
+> [Checker] DFS exploration ended. 33 unique states visited; 9 backtracks (125 transition replays, 84 states visited overall)
> [0.000000] [mc_comm_determinism/INFO] The recv communications pattern of the actor 0 is different! Different source for communication #1
> [0.000000] [mc_comm_determinism/INFO] Send-deterministic : Yes
> [0.000000] [mc_comm_determinism/INFO] Recv-deterministic : No
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 36 unique states visited; 13 backtracks (97 transition replays, 49 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 242 unique states visited; 68 backtracks (612 transition replays, 303 states visited overall)
\ No newline at end of file
> rank 0 recv the data
> rank 1 recv the data
> Sent 0 to rank 1
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 7 unique states visited; 2 backtracks (10 transition replays, 2 states visited overall)
+> Sent 1 to rank 0
+> rank 0 recv the data
+> rank 1 recv the data
+> Sent 0 to rank 1
+> Sent 1 to rank 0
+> rank 1 recv the data
+> rank 0 recv the data
+> Sent 1 to rank 0
+> Sent 0 to rank 1
+> rank 0 recv the data
+> rank 1 recv the data
+> Sent 1 to rank 0
+> Sent 0 to rank 1
+> rank 0 recv the data
+> rank 1 recv the data
+> Sent 1 to rank 0
+> Sent 0 to rank 1
+> rank 1 recv the data
+> rank 0 recv the data
+> Sent 1 to rank 0
+> Sent 0 to rank 1
+> rank 0 recv the data
+> rank 1 recv the data
+> Sent 1 to rank 0
+> Sent 0 to rank 1
+> rank 1 recv the data
+> rank 0 recv the data
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 30 unique states visited; 8 backtracks (56 transition replays, 19 states visited overall)
p Testing the paranoid model
! timeout 60
> [0.000000] [mc_global/INFO] Counter-example execution trace:
> [0.000000] [mc_global/INFO] 1: iSend(mbox=2)
> [0.000000] [mc_global/INFO] 2: iSend(mbox=0)
-> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2'
+> [0.000000] [mc_global/INFO] 0:
+> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2;0'
> [0.000000] [mc_dfs/INFO] DFS exploration ended. 3 unique states visited; 1 backtracks (3 transition replays, 0 states visited overall)
> Execution failed with code 3.
> The thread 0 is terminating.
> The thread 1 is terminating.
> User's main is terminating.
+> The thread 0 is terminating.
+> The thread 1 is terminating.
+> User's main is terminating.
> The thread 1 is terminating.
> The thread 0 is terminating.
> User's main is terminating.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 17 unique states visited; 3 backtracks (22 transition replays, 3 states visited overall)
-
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 23 unique states visited; 3 backtracks (27 transition replays, 2 states visited overall)
> The thread 0 is terminating.
> The thread 1 is terminating.
> User's main is terminating.
+> The thread 0 is terminating.
+> The thread 1 is terminating.
+> User's main is terminating.
+> The thread 0 is terminating.
+> The thread 1 is terminating.
+> User's main is terminating.
> [0.000000] [mc_global/INFO] **************************
> [0.000000] [mc_global/INFO] *** DEADLOCK DETECTED ***
> [0.000000] [mc_global/INFO] **************************
> [0.000000] [mc_global/INFO] 2: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3)
> [0.000000] [mc_global/INFO] 3: MUTEX_WAIT(mutex: 1, owner: 3)
> [0.000000] [mc_global/INFO] 3: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2)
-> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;2;3;2;3;3'
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 19 unique states visited; 2 backtracks (22 transition replays, 2 states visited overall)
+> [0.000000] [mc_global/INFO] 0:
+> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;2;3;2;3;3;0'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 38 unique states visited; 4 backtracks (52 transition replays, 11 states visited overall)
\ No newline at end of file
unsigned int do_consider()
{
if (max_consider_ <= times_considered_ + 1)
- set_done();
+ mark_done();
return times_considered_++;
}
unsigned int get_times_considered() const { return times_considered_; }
this->state_ = InterleavingType::todo;
this->times_considered_ = 0;
}
- void set_done() { this->state_ = InterleavingType::done; }
+ void mark_done() { this->state_ = InterleavingType::done; }
inline Transition* get_transition(unsigned times_considered)
{
}
}
+State::State(const RemoteApp& remote_app, const State* previous_state)
+ : default_transition_(std::make_unique<Transition>()), num_(++expended_states_)
+{
+
+ remote_app.get_actors_status(actors_to_run_);
+
+ transition_ = default_transition_.get();
+
+ /* Stateful model checking */
+ if ((_sg_mc_checkpoint > 0 && (num_ % _sg_mc_checkpoint == 0)) || _sg_mc_termination) {
+ system_state_ = std::make_shared<simgrid::mc::Snapshot>(num_);
+ }
+
+  /* For each actor in the previous state's sleep set, keep it if it does not depend on the current
+   * transition. If we keep it and the actor is enabled in this state, mark the actor as already done
+   * so that it is not explored again. */
+  for (auto& [aid, transition] : previous_state->get_sleep_set()) {
+    if (not previous_state->get_transition()->depends(&transition)) {
+      sleep_set_.emplace(aid, transition);
+      if (actors_to_run_.count(aid) != 0) {
+        XBT_DEBUG("Actor %ld will not be explored, for it is in the sleep set", aid);
+        actors_to_run_.at(aid).mark_done();
+      }
+    } else
+      XBT_DEBUG("Transition >>%s<< removed from the sleep set because it was dependent with >>%s<<",
+                transition.to_string().c_str(), previous_state->get_transition()->to_string().c_str());
+  }
+}
+
std::size_t State::count_todo() const
{
return boost::range::count_if(this->actors_to_run_, [](auto& pair) { return pair.second.is_todo(); });
void State::mark_all_enabled_todo()
{
for (auto const& [aid, _] : this->get_actors_list()) {
- if (this->is_actor_enabled(aid)) {
+ if (this->is_actor_enabled(aid) and not is_actor_done(aid)) {
this->mark_todo(aid);
}
}
XBT_DEBUG("Search for an actor to run. %zu actors to consider", actors_to_run_.size());
for (auto const& [aid, actor] : actors_to_run_) {
/* Only consider actors (1) marked as interleaving by the checker and (2) currently enabled in the application */
- if (not actor.is_todo() || not actor.is_enabled())
+    if (not actor.is_todo() || not actor.is_enabled() || actor.is_done()) {
+      if (not actor.is_todo())
+        XBT_DEBUG("Can't run actor %ld because it is not todo", aid);
+      if (not actor.is_enabled())
+        XBT_DEBUG("Can't run actor %ld because it is not enabled", aid);
+      if (actor.is_done())
+        XBT_DEBUG("Can't run actor %ld because it has already been done", aid);
continue;
+ }
return aid;
}
/** Snapshot of system state (if needed) */
std::shared_ptr<Snapshot> system_state_;
+  /* The sleep set maps each actor to the transition that caused it to be added. With this
+   * information, we can check whether the actor should be removed from the set when a new
+   * transition is explored */
+ std::map<aid_t, Transition> sleep_set_;
+
public:
explicit State(const RemoteApp& remote_app);
-
+ explicit State(const RemoteApp& remote_app, const State* previous_state);
/* Returns a positive number if there is another transition to pick, or -1 if not */
aid_t next_transition() const;
std::size_t count_todo() const;
void mark_todo(aid_t actor) { actors_to_run_.at(actor).mark_todo(); }
void mark_all_enabled_todo();
- bool is_done(aid_t actor) const { return actors_to_run_.at(actor).is_done(); }
+ bool is_actor_done(aid_t actor) const { return actors_to_run_.at(actor).is_done(); }
Transition* get_transition() const;
void set_transition(Transition* t) { transition_ = t; }
std::map<aid_t, ActorState> const& get_actors_list() const { return actors_to_run_; }
Snapshot* get_system_state() const { return system_state_.get(); }
void set_system_state(std::shared_ptr<Snapshot> state) { system_state_ = std::move(state); }
+ std::map<aid_t, Transition> const& get_sleep_set() const { return sleep_set_; }
+  void add_sleep_set(Transition* t) { sleep_set_.insert_or_assign(t->aid_, Transition(t->type_, t->aid_, t->times_considered_)); }
+
/* Returns the total amount of states created so far (for statistics) */
static long get_expanded_states() { return expended_states_; }
};
if (next < 0) { // If there is no more transition in the current state, backtrack.
XBT_DEBUG("There remains %lu actors, but none to interleave (depth %zu).", state->get_actor_count(),
stack_.size() + 1);
-
+
if (state->get_actor_count() == 0) {
mc_model_checker->finalize_app();
XBT_VERB("Execution came to an end at %s (state: %ld, depth: %zu)", get_record_trace().to_string().c_str(),
state->get_num(), stack_.size());
+
}
+
this->backtrack();
continue;
}
+    if (_sg_mc_sleep_set) {
+      XBT_VERB("Sleep set currently contains:");
+      for (auto& [aid, transition] : state->get_sleep_set())
+        XBT_VERB("  <%ld,%s>", aid, transition.to_string().c_str());
+    }
/* Actually answer the request: let's execute the selected request (MCed does one step) */
state->execute_next(next);
on_transition_execute_signal(state->get_transition(), get_remote_app());
state->get_transition()->to_string().c_str(), stack_.size(), state->get_num(), state->count_todo());
/* Create the new expanded state (copy the state of MCed into our MCer data) */
- auto next_state = std::make_unique<State>(get_remote_app());
+ std::unique_ptr<State> next_state;
+
+  /* If we want sleep-set reduction, pass the old state to the new state so that it can
+   * both copy the sleep set and possibly remove transitions from it locally */
+ if (_sg_mc_sleep_set)
+ next_state = std::make_unique<State>(get_remote_app(), state);
+ else
+ next_state = std::make_unique<State>(get_remote_app());
+
on_state_creation_signal(next_state.get(), get_remote_app());
+
if (_sg_mc_termination)
this->check_non_termination(next_state.get());
if (visited_state_ == nullptr) {
/* Get an enabled process and insert it in the interleave set of the next state */
for (auto const& [aid, _] : next_state->get_actors_list()) {
- if (next_state->is_actor_enabled(aid)) {
+ if (next_state->is_actor_enabled(aid) and not next_state->is_actor_done(aid)) {
next_state->mark_todo(aid);
if (reduction_mode_ == ReductionMode::dpor)
break; // With DPOR, we take the first enabled transition
backtrack_count_++;
XBT_VERB("Backtracking from %s", get_record_trace().to_string().c_str());
on_backtracking_signal(get_remote_app());
- stack_.pop_back();
-
get_remote_app().check_deadlock();
+  /* We may backtrack either because we reached a leaf, or because every enabled process is in the done/sleep set.
+   * In the first case, we need to remove the last transition, which corresponds to the Finalize */
+ if (stack_.back()->get_transition()->aid_ == 0)
+ stack_.pop_back();
+
/* Traverse the stack backwards until a state with a non empty interleave set is found, deleting all the states that
* have it empty in the way. For each deleted state, check if the request that has generated it (from its
- * predecessor state), depends on any other previous request executed before it. If it does then add it to the
- * interleave set of the state that executed that previous request. */
+   * predecessor state) depends on any other previous request executed before it by another process. If one exists,
+   * find the most recent such request and add its process to the interleave set. If that process is not enabled at
+   * this point, then add every enabled process to the interleave set. */
bool found_backtracking_point = false;
while (not stack_.empty() && not found_backtracking_point) {
std::unique_ptr<State> state = std::move(stack_.back());
    stack_.pop_back();
+    XBT_DEBUG("Marking transition >>%s<< of process %ld done and adding it to the sleep set",
+              state->get_transition()->to_string().c_str(), state->get_transition()->aid_);
+    state->add_sleep_set(state->get_transition()); // Actors are marked done when they are considered in ActorState
+
if (reduction_mode_ == ReductionMode::dpor) {
aid_t issuer_id = state->get_transition()->aid_;
for (auto i = stack_.rbegin(); i != stack_.rend(); ++i) {
if (state->get_transition()->aid_ == prev_state->get_transition()->aid_) {
XBT_DEBUG("Simcall >>%s<< and >>%s<< with same issuer %ld", state->get_transition()->to_string().c_str(),
prev_state->get_transition()->to_string().c_str(), issuer_id);
- break;
+ continue;
} else if (prev_state->get_transition()->depends(state->get_transition())) {
XBT_VERB("Dependent Transitions:");
XBT_VERB(" %s (state=%ld)", prev_state->get_transition()->to_string().c_str(), prev_state->get_num());
XBT_VERB(" %s (state=%ld)", state->get_transition()->to_string().c_str(), state->get_num());
- if (not prev_state->is_done(issuer_id))
- prev_state->mark_todo(issuer_id);
- else
- XBT_DEBUG("Actor %ld is in done set", issuer_id);
+ if (prev_state->is_actor_enabled(issuer_id)){
+ if (not prev_state->is_actor_done(issuer_id))
+ prev_state->mark_todo(issuer_id);
+ else
+ XBT_DEBUG("Actor %ld is already in done set: no need to explore it again", issuer_id);
+ } else {
+ XBT_DEBUG("Actor %ld is not enabled: DPOR may be failing. To stay sound, we are marking every enabled transition as todo", issuer_id);
+ prev_state->mark_all_enabled_todo();
+ }
break;
} else {
XBT_VERB("INDEPENDENT Transitions:");
}
}
- if (state->count_todo() == 0) { // Empty interleaving set
+ if (state->count_todo() == 0) { // Empty interleaving set: exploration at this level is over
XBT_DEBUG("Delete state %ld at depth %zu", state->get_num(), stack_.size() + 1);
} else {
- XBT_DEBUG("Back-tracking to state %ld at depth %zu", state->get_num(), stack_.size() + 1);
- stack_.push_back(std::move(state)); // Put it back on the stack
+ XBT_DEBUG("Back-tracking to state %ld at depth %zu: %lu transitions left to be explored", state->get_num(),
+ stack_.size() + 1, state->count_todo());
+ stack_.push_back(std::move(state)); // Put it back on the stack so we can explore the next transition of the interleave
found_backtracking_point = true;
}
}
xbt_die("configuration option 'model-check/reduction' can only take 'none' or 'dpor' as a value");
}};
+simgrid::config::Flag<bool> _sg_mc_sleep_set{
+ "model-check/sleep-set", "Whether to enable the use of sleep-set in the reduction algorithm", false,
+ [](bool) { _mc_cfg_cb_check("value to enable/disable the use of sleep-set in the reduction algorithm"); }};
+
simgrid::config::Flag<int> _sg_mc_checkpoint{
"model-check/checkpoint", "Specify the amount of steps between checkpoints during stateful model-checking "
"(default: 0 => stateless verification). If value=1, one checkpoint is saved for each "
extern "C" XBT_PUBLIC int _sg_mc_max_visited_states;
extern XBT_PRIVATE simgrid::config::Flag<std::string> _sg_mc_dot_output_file;
extern XBT_PRIVATE simgrid::config::Flag<bool> _sg_mc_termination;
+extern XBT_PUBLIC simgrid::config::Flag<bool> _sg_mc_sleep_set;
#endif
}
bool CommWaitTransition::depends(const Transition* other) const
{
- if (aid_ == other->aid_)
- return false;
-
if (other->type_ < type_)
return other->depends(this);
}
bool CommTestTransition::depends(const Transition* other) const
{
- if (aid_ == other->aid_)
- return false;
if (other->type_ < type_)
return other->depends(this);
}
bool CommRecvTransition::depends(const Transition* other) const
{
- if (aid_ == other->aid_)
- return false;
-
if (other->type_ < type_)
return other->depends(this);
bool CommSendTransition::depends(const Transition* other) const
{
- if (aid_ == other->aid_)
- return false;
-
if (other->type_ < type_)
return other->depends(this);
public:
std::string to_string(bool verbose) const override;
RandomTransition(aid_t issuer, int times_considered, std::stringstream& stream);
- bool depends(const Transition* other) const override { return false; } // Independent with any other transition
+  bool depends(const Transition* other) const override { return aid_ == other->aid_; } // Only dependent on transitions of the same actor
};
} // namespace simgrid::mc
> [0.000000]: [ker_lmm/DEBUG] '92'(1.000000) : 0.373508
> [0.000000]: [ker_lmm/DEBUG] '91'(1.000000) : 0.240523
> [0.000000]: [ker_lmm/DEBUG] '100'(0.000000) : 0.000000
-> 10x One shot execution time for a total of 10 constraints, 10 variables with 4 active constraint each, concurrency in [2,6] and max concurrency share 2
+> 10x One shot execution time for a total of 10 constraints, 10 variables with 4 active constraint each, concurrency in [2,6] and max concurrency share 2
\ No newline at end of file
# Smpi Allreduce collectives tests
p Test allreduce
-$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --log=smpi_pmpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc
+$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml --cfg=model-check/sleep-set:true -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc
> [0.000000] [smpi/INFO] [rank 0] -> Tremblay
> [0.000000] [smpi/INFO] [rank 1] -> Tremblay
> [0.000000] [smpi/INFO] [rank 2] -> Tremblay
> If this is too much, consider sharing allocations for computation buffers.
> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
>
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 33 unique states visited; 7 backtracks (140 transition replays, 101 states visited overall)
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 616 unique states visited; 167 backtracks (3773 transition replays, 2991 states visited overall)