From: mlaurent
Date: Wed, 22 Feb 2023 15:34:19 +0000 (+0100)
Subject: fix a few tests with dpor
X-Git-Tag: v3.34~436^2~4
X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/commitdiff_plain/3c356a0c1805437c7d7bfa8890f257c1e5926161

fix a few tests with dpor
---

diff --git a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh
index 4e13f97131..cffdd1cda7 100644
--- a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh
+++ b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh
@@ -3,7 +3,7 @@
 ! expect return 2
 ! timeout 30
 ! output display
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 1 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=model-check/visited:100 --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 1 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=model-check/visited:100 --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
 > [ 0.000000] (0:maestro@) Check the liveness property promela_bugged1_liveness
 > [ 0.000000] (2:client@Boivin) Ask the request
 > [ 0.000000] (3:client@Fafard) Ask the request
diff --git a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh
index 658e032b3e..14da430e3d 100644
--- a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh
+++ b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh
@@ -3,7 +3,7 @@
 ! expect return 2
 ! timeout 20
 ! output display
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 0 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 0 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
 > [ 0.000000] (0:maestro@) Check the liveness property promela_bugged1_liveness
 > [ 0.000000] (2:client@Boivin) Ask the request
 > [ 0.000000] (3:client@Fafard) Ask the request
diff --git a/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh b/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh
index 97ec9d5b62..adbd4527fa 100644
--- a/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh
+++ b/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh
@@ -9,13 +9,57 @@ $ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1 ${platfdir:=.
 > [ 0.000000] (1:server@HostA) OK
 > [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (3:client@HostC) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
 > [ 0.000000] (3:client@HostC) Sent!
 > [ 0.000000] (1:server@HostA) OK
 > [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
 > [ 0.000000] (0:maestro@) **************************
 > [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
 > [ 0.000000] (0:maestro@) **************************
@@ -32,4 +76,4 @@ $ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1 ${platfdir:=.
 > [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
 > [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
 > [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2;1;1;2;4;1;1;3;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 22 unique states visited; 4 backtracks (56 transition replays, 30 states visited overall)
\ No newline at end of file
+> [ 0.000000] (0:maestro@) DFS exploration ended. 59 unique states visited; 14 backtracks (192 transition replays, 119 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh b/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
index e8b5ceb102..d64aed948f 100644
--- a/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
+++ b/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
@@ -2,554 +2,18 @@
 ! expect return 1
 ! timeout 20
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning --cfg=contexts/stack-size:256
-> [ 0.000000] (0:maestro@) Start a DFS exploration. Reduction is: dpor.
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 2
-> [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
-> [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) Counter-example execution trace:
-> [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
-> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
-> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) 3: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
-> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
-> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;3;1;3;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 1006 unique states visited; 350 backtracks (5319 transition replays, 3963 states visited overall)
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml --log=root.thresh:critical --log=mc.thresh:info
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [mc_ModelChecker/INFO] **************************
+> [0.000000] [mc_ModelChecker/INFO] *** PROPERTY NOT VALID ***
+> [0.000000] [mc_ModelChecker/INFO] **************************
+> [0.000000] [mc_ModelChecker/INFO] Counter-example execution trace:
+> [0.000000] [mc_ModelChecker/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 3: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] 3: iSend(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;3;3;1'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 2091 unique states visited; 529 backtracks (8359 transition replays, 5739 states visited overall)
diff --git a/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh b/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh
index e974265688..a68ddf0a0a 100644
--- a/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh
+++ b/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh
@@ -1,12 +1,91 @@
 #!/usr/bin/env tesh
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-electric-fence ${platfdir}/model_checker_platform.xml
-> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-electric-fence ${platfdir}/model_checker_platform.xml
 > [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
 > [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
 > [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
 > [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
 > [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
 > [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
 > [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 15 unique states visited; 5 backtracks (32 transition replays, 13 states visited overall)
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 169 unique states visited; 29 backtracks (261 transition replays, 64 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh b/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh
index fc687a94cc..fa84836348 100644
--- a/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh
+++ b/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh
@@ -2,25 +2,59 @@
 ! expect return 1
 ! timeout 20
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning
 > [ 0.000000] (0:maestro@) Start a DFS exploration. Reduction is: dpor.
 > [ 0.000000] (2:client1@Bourassa) Sent!
 > [ 0.000000] (1:server@Boivin) OK
 > [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
 > [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (1:server@Boivin) OK
 > [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (0:maestro@) **************************
 > [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
 > [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) Counter-example execution trace:
+> [ 0.000000] (0:maestro@) Counter-example execution trace:
 > [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
 > [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
 > [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
 > [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
 > [ 0.000000] (0:maestro@) 2: iSend(mbox=0)
 > [ 0.000000] (0:maestro@) 1: WaitComm(from 2 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;2;1'
+> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;2;1'
 > [ 0.000000] (0:maestro@) DFS exploration ended. 18 unique states visited; 4 backtracks (36 transition replays, 14 states visited overall)
diff --git a/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh b/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh
index 0ce52a1c98..51eeae219a 100644
--- a/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh
+++ b/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh
@@ -20,13 +20,33 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] BARRIER_WAIT(barrier: 0) (state=3)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=4)
 > [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=4)
+> [Checker] Dependent Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=3)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
-> [Checker] Backtracking from 1;2
-> [Checker] DFS exploration ended. 5 unique states visited; 2 backtracks (7 transition replays, 1 states visited overall)
+> [Checker] Execute 2: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 1, state: 1, 0 interleaves)
+> [Checker] Execute 1: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 2, state: 6, 0 interleaves)
+> [Checker] Execute 1: BARRIER_WAIT(barrier: 0) (stack depth: 3, state: 7, 0 interleaves)
+> [Checker] Execute 2: BARRIER_WAIT(barrier: 0) (stack depth: 4, state: 8, 0 interleaves)
+> [Checker] Execution came to an end at 2;1;1;2;0 (state: 9, depth: 5)
+> [Checker] Backtracking from 2;1;1;2;0
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=7)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=8)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=6)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=8)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=7)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=6)
+> [Checker] DFS exploration ended. 9 unique states visited; 2 backtracks (10 transition replays, 0 states visited overall)
 $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-barrier 3 --log=s4u_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
 > [Checker] Start a DFS exploration. Reduction is: dpor.
@@ -44,6 +64,9 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_WAIT(barrier: 0) (state=4)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=6)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=6)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_WAIT(barrier: 0) (state=4)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=5)
@@ -59,8 +82,41 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=3)
-> [Checker] Backtracking from 1;2;3
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
-> [Checker] DFS exploration ended. 7 unique states visited; 2 backtracks (10 transition replays, 2 states visited overall)
+> [Checker] Execute 3: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 2, state: 2, 0 interleaves)
+> [Checker] Execute 2: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 3, state: 8, 0 interleaves)
+> [Checker] Execute 1: BARRIER_WAIT(barrier: 0) (stack depth: 4, state: 9, 0 interleaves)
+> [Checker] Execute 2: BARRIER_WAIT(barrier: 0) (stack depth: 5, state: 10, 0 interleaves)
+> [Checker] Execute 3: BARRIER_WAIT(barrier: 0) (stack depth: 6, state: 11, 0 interleaves)
+> [Checker] Execution came to an end at 1;3;2;1;2;3;0 (state: 12, depth: 7)
+> [Checker] Backtracking from 1;3;2;1;2;3;0
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] DFS exploration ended. 12 unique states visited; 2 backtracks (14 transition replays, 1 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
index 2db23f3b98..ea81b940a2 100644
--- a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
+++ b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
@@ -13,6 +13,12 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 6, 0 interleaves)
 > [Checker] Execution came to an end at 1;1;1;2;2;2;0 (state: 7, depth: 7)
 > [Checker] Backtracking from 1;1;1;2;2;2;0
+> [Checker] Dependent Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
+> [Checker] Dependent Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
@@ -22,205 +28,60 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] Dependent Transitions:
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 8, 0 interleaves)
-> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 9, 0 interleaves)
-> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 10, 0 interleaves)
-> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 11, 0 interleaves)
-> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 12, 0 interleaves)
-> [Checker] Execution came to an end at 2;1;2;2;1;1;0 (state: 13, depth: 7)
-> [Checker] Backtracking from 2;1;2;2;1;1;0
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=10)
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=11)
-> [Checker] Backtracking from 2;1;2;2
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=8)
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=9)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=8)
-> [Checker] DFS exploration ended. 13 unique states visited; 3 backtracks (18 transition replays, 3 states visited overall)
-
-$ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:2 --log=s4u_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
-> [App ] Configuration change: Set 'actors' to '2'
-> [Checker] Start a DFS exploration. Reduction is: dpor.
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (stack depth: 1, state: 1, 0 interleaves) -> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 2, state: 2, 0 interleaves) -> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 3, state: 3, 0 interleaves) -> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 4, state: 4, 0 interleaves) -> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 5, state: 5, 0 interleaves) -> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 6, 0 interleaves) -> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (stack depth: 7, state: 7, 0 interleaves) -> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 8, state: 8, 0 interleaves) -> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 9, state: 9, 0 interleaves) -> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 10, state: 10, 0 interleaves) -> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 11, state: 11, 0 interleaves) -> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 12, 0 interleaves) -> [Checker] Execution came to an end at 1;1;1;2;2;2;3;3;3;4;4;4;0 (state: 13, depth: 13) -> [Checker] Backtracking from 1;1;1;2;2;2;3;3;3;4;4;4;0 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 1, owner: -1) (state=9) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=8) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10) -> [Checker] Dependent Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7) -> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 7, state: 7, 0 interleaves) -> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 8, state: 14, 0 interleaves) -> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 9, state: 15, 0 interleaves) -> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: 3) (stack depth: 10, state: 16, 0 interleaves) -> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 11, state: 17, 0 interleaves) -> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 18, 0 interleaves) -> [Checker] Execution came to an end at 1;1;1;2;2;2;4;3;4;4;3;3;0 (state: 19, depth: 13) -> [Checker] Backtracking from 1;1;1;2;2;2;4;3;4;4;3;3;0 +> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, 
owner: 1) (stack depth: 3, state: 3, 0 interleaves) +> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: 2) (stack depth: 4, state: 8, 0 interleaves) +> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 5, state: 9, 0 interleaves) +> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 10, 0 interleaves) +> [Checker] Execution came to an end at 1;1;2;1;2;2;0 (state: 11, depth: 7) +> [Checker] Backtracking from 1;1;2;1;2;2;0 > [Checker] Dependent Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 1, owner: 3) (state=16) -> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=17) -> [Checker] Backtracking from 1;1;1;2;2;2;4;3;4;4 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=14) -> [Checker] MUTEX_WAIT(mutex: 1, owner: 4) (state=15) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=8) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=10) > [Checker] Dependent Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=14) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=8) +> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=9) > [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 2) (state=8) > [Checker] INDEPENDENT Transitions: > [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3) > [Checker] Dependent Transitions: > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3) > [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 1, state: 1, 0 interleaves) -> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 20, 0 interleaves) -> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 21, 0 interleaves) -> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 22, 0 interleaves) -> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 23, 0 interleaves) -> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 24, 0 interleaves) -> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (stack depth: 7, state: 25, 0 interleaves) -> [Checker] Execute 
3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 8, state: 26, 0 interleaves) -> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 9, state: 27, 0 interleaves) -> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 10, state: 28, 0 interleaves) -> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 11, state: 29, 0 interleaves) -> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 30, 0 interleaves) -> [Checker] Execution came to an end at 2;1;2;2;1;1;3;3;3;4;4;4;0 (state: 31, depth: 13) -> [Checker] Backtracking from 2;1;2;2;1;1;3;3;3;4;4;4;0 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 1, owner: -1) (state=27) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=26) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28) -> [Checker] Dependent Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=24) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 7, state: 25, 0 interleaves) -> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 8, state: 32, 0 interleaves) -> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 9, state: 33, 0 interleaves) -> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: 3) (stack depth: 10, state: 34, 0 interleaves) -> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 11, state: 35, 0 interleaves) -> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 36, 0 interleaves) -> [Checker] Execution came to an end at 2;1;2;2;1;1;4;3;4;4;3;3;0 (state: 37, depth: 13) -> [Checker] Backtracking from 2;1;2;2;1;1;4;3;4;4;3;3;0 +> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 12, 0 interleaves) +> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 13, 0 interleaves) +> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 14, 0 interleaves) +> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 15, 0 interleaves) +> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 16, 0 interleaves) +> [Checker] Execution came to an end at 2;1;2;2;1;1;0 (state: 17, depth: 7) +> [Checker] Backtracking from 2;1;2;2;1;1;0 > [Checker] Dependent Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 1, owner: 3) (state=34) -> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) 
(state=35) -> [Checker] Backtracking from 2;1;2;2;1;1;4;3;4;4 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=32) -> [Checker] MUTEX_WAIT(mutex: 1, owner: 4) (state=33) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=16) > [Checker] Dependent Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=32) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14) +> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=15) > [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=24) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14) > [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] Dependent Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22) -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23) -> [Checker] Backtracking from 2;1;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12) +> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=13) > [Checker] Dependent Transitions: > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] DFS exploration ended. 37 unique states visited; 7 backtracks (76 transition replays, 33 states visited overall) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12) +> [Checker] DFS exploration ended. 17 unique states visited; 3 backtracks (21 transition replays, 2 states visited overall) + +$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:2 --log=s4u_test.thres:critical +> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true' +> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '2' +> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor. +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 
128 unique states visited; 26 backtracks (296 transition replays, 143 states visited overall) -$ ${bindir:=.}/../../../bin/simgrid-mc -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical +$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical +> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true' > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '3' > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor. -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 85 unique states visited; 15 backtracks (240 transition replays, 141 states visited overall) +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 4645 unique states visited; 1082 backtracks (18004 transition replays, 12278 states visited overall) diff --git a/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh b/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh index 1fe9bddd3f..54e641e529 100644 --- a/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh +++ b/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh @@ -1,79 +1,6 @@ #!/usr/bin/env tesh -$ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-semaphore --log=sem_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n" +$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true --log=mc_dfs.thres:info --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-semaphore --log=sem_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n" +> [Checker] Configuration change: Set 'model-check/sleep-set' to 'true' > [Checker] Start a DFS exploration. Reduction is: dpor. 
-> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 1, state: 1, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 2, state: 2, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 3, state: 3, 0 interleaves) -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 4, state: 4, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 5, state: 5, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 6, state: 6, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 7, state: 7, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 8, state: 8, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 9, state: 9, 0 interleaves) -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 10, state: 10, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 11, state: 11, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 12, state: 12, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 13, state: 13, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 14, state: 14, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 15, state: 15, 0 interleaves) -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 16, state: 16, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 17, state: 17, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 18, state: 18, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 19, state: 19, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 20, state: 20, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 21, state: 21, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 22, state: 22, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 23, state: 23, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 24, state: 24, 0 interleaves) -> [Checker] Execution came to an end at 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2;1;1;2;2;2;0 (state: 25, depth: 25) -> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2;1;1;2;2;2;0 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=21) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=22) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=20) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=22) -> [Checker] Dependent Transitions: -> [Checker] SEM_UNLOCK(semaphore: 0) (state=19) -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=20) -> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=16) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=15) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=14) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17) -> [Checker] Dependent Transitions: -> [Checker] SEM_UNLOCK(semaphore: 0) 
(state=13) -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=14) -> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=10) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=9) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=8) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11) -> [Checker] Dependent Transitions: -> [Checker] SEM_UNLOCK(semaphore: 0) (state=7) -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=8) -> [Checker] Backtracking from 1;1;1;1;2;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=4) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=3) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=2) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=1) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] DFS exploration ended. 25 unique states visited; 4 backtracks (64 transition replays, 36 states visited overall) +> [Checker] DFS exploration ended. 33 unique states visited; 9 backtracks (125 transition replays, 84 states visited overall) diff --git a/examples/smpi/mc/only_send_deterministic.tesh b/examples/smpi/mc/only_send_deterministic.tesh index 957f042544..447496d1ce 100644 --- a/examples/smpi/mc/only_send_deterministic.tesh +++ b/examples/smpi/mc/only_send_deterministic.tesh @@ -10,4 +10,4 @@ $ ../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-m > [0.000000] [mc_comm_determinism/INFO] The recv communications pattern of the actor 0 is different! Different source for communication #1 > [0.000000] [mc_comm_determinism/INFO] Send-deterministic : Yes > [0.000000] [mc_comm_determinism/INFO] Recv-deterministic : No -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 36 unique states visited; 13 backtracks (97 transition replays, 49 states visited overall) \ No newline at end of file +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 242 unique states visited; 68 backtracks (612 transition replays, 303 states visited overall) \ No newline at end of file diff --git a/examples/smpi/mc/sendsend.tesh b/examples/smpi/mc/sendsend.tesh index 4f73c5d3bc..251b6bd193 100644 --- a/examples/smpi/mc/sendsend.tesh +++ b/examples/smpi/mc/sendsend.tesh @@ -26,6 +26,7 @@ $ ../../../smpi_script/bin/smpirun -quiet -wrapper "${bindir:=.}/../../../bin/si > [0.000000] [mc_global/INFO] Counter-example execution trace: > [0.000000] [mc_global/INFO] 1: iSend(mbox=2) > [0.000000] [mc_global/INFO] 2: iSend(mbox=0) -> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2' +> [0.000000] [mc_global/INFO] 0: +> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2;0' > [0.000000] [mc_dfs/INFO] DFS exploration ended. 3 unique states visited; 1 backtracks (3 transition replays, 0 states visited overall) > Execution failed with code 3. 
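
The INDEPENDENT/Dependent lines in the checker traces above come from DPOR's pairwise dependency check: two transitions only force a backtracking point when they can interfere, i.e. when reordering them could reach a different state. Below is a minimal sketch of such a predicate, assuming toy Kind/Transition types; the rules are simplified readings of these traces, not SimGrid's actual Transition::depends() implementation.

// Illustrative only: a toy dependency predicate in the spirit of the
// "INDEPENDENT Transitions" / "Dependent Transitions" lines above.
enum class Kind { MutexAsyncLock, MutexWait, MutexUnlock };

struct Transition {
  Kind kind;
  int object; // id of the mutex this transition acts on
};

bool depends(const Transition& a, const Transition& b)
{
  if (a.object != b.object)
    return false; // operations on different mutexes always commute
  if (a.kind == Kind::MutexAsyncLock && b.kind == Kind::MutexAsyncLock)
    return true; // two lock requests race for the same ownership slot
  if (a.kind == Kind::MutexUnlock && b.kind == Kind::MutexUnlock)
    return true; // the traces above also flag UNLOCK/UNLOCK pairs
  if ((a.kind == Kind::MutexUnlock && b.kind == Kind::MutexWait) ||
      (a.kind == Kind::MutexWait && b.kind == Kind::MutexUnlock))
    return true; // the unlock decides whether a pending wait can fire
  return false; // e.g. UNLOCK vs ASYNC_LOCK is reported INDEPENDENT above
}

Under this reading, two MUTEX_ASYNC_LOCK operations on the same mutex always race for ownership, which is why the explorer keeps scheduling backtracking points for such pairs in the runs above.
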
diff --git a/examples/sthread/pthread-mc-mutex-simple.tesh b/examples/sthread/pthread-mc-mutex-simple.tesh index 1aba081db7..6c9b91c68b 100644 --- a/examples/sthread/pthread-mc-mutex-simple.tesh +++ b/examples/sthread/pthread-mc-mutex-simple.tesh @@ -9,8 +9,10 @@ $ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/setenv:LD_PRELOAD=${libdir > The thread 0 is terminating. > The thread 1 is terminating. > User's main is terminating. +> The thread 0 is terminating. +> The thread 1 is terminating. +> User's main is terminating. > The thread 1 is terminating. > The thread 0 is terminating. > User's main is terminating. -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 17 unique states visited; 3 backtracks (22 transition replays, 3 states visited overall) - +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 23 unique states visited; 3 backtracks (27 transition replays, 2 states visited overall) diff --git a/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh b/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh index 72324f2a60..c633f469dd 100644 --- a/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh +++ b/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh @@ -12,6 +12,12 @@ $ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/setenv:LD_PRELOAD=${libdir > The thread 0 is terminating. > The thread 1 is terminating. > User's main is terminating. +> The thread 0 is terminating. +> The thread 1 is terminating. +> User's main is terminating. +> The thread 0 is terminating. +> The thread 1 is terminating. +> User's main is terminating. > [0.000000] [mc_global/INFO] ************************** > [0.000000] [mc_global/INFO] *** DEADLOCK DETECTED *** > [0.000000] [mc_global/INFO] ************************** @@ -27,5 +33,6 @@ $ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/setenv:LD_PRELOAD=${libdir > [0.000000] [mc_global/INFO] 2: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) > [0.000000] [mc_global/INFO] 3: MUTEX_WAIT(mutex: 1, owner: 3) > [0.000000] [mc_global/INFO] 3: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) -> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;2;3;2;3;3' -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 19 unique states visited; 2 backtracks (22 transition replays, 2 states visited overall) +> [0.000000] [mc_global/INFO] 0: +> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;2;3;2;3;3;0' +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 
38 unique states visited; 4 backtracks (52 transition replays, 11 states visited overall)
\ No newline at end of file
diff --git a/src/mc/api/State.cpp b/src/mc/api/State.cpp
index 98c229f7ad..a9d97afd1b 100644
--- a/src/mc/api/State.cpp
+++ b/src/mc/api/State.cpp
@@ -25,12 +25,12 @@ State::State(const RemoteApp& remote_app) : num_(++expended_states_)
 }
 State::State(const RemoteApp& remote_app, const State* previous_state)
-    : default_transition(std::make_unique<Transition>()), num_(++expended_states_)
+    : default_transition_(std::make_unique<Transition>()), num_(++expended_states_)
 {
   remote_app.get_actors_status(actors_to_run_);
-  transition_ = default_transition.get();
+  transition_ = default_transition_.get();
   /* Stateful model checking */
   if ((_sg_mc_checkpoint > 0 && (num_ % _sg_mc_checkpoint == 0)) || _sg_mc_termination) {
@@ -40,39 +40,36 @@ State::State(const RemoteApp& remote_app, const State* previous_state)
   /* For each actor in the previous sleep set, keep it if it is not dependent on the current transition.
    * And if we kept it and the actor is enabled in this state, mark the actor as already done, so that
    * it is not explored */
-  for (auto & [aid, transition] : previous_state->get_sleep_set()) {
-
-    if (not previous_state->get_transition()->depends(&transition)) {
-
-      sleep_set_.emplace(aid, transition);
-      if (actors_to_run_.count(aid) != 0) {
-        XBT_DEBUG("Actor %ld will not be explored, for it is in the sleep set", aid);
-
-        actors_to_run_.at(aid).mark_done();
-      }
+  for (auto& [aid, transition] : previous_state->get_sleep_set()) {
+
+    if (not previous_state->get_transition()->depends(&transition)) {
+
+      sleep_set_.emplace(aid, transition);
+      if (actors_to_run_.count(aid) != 0) {
+        XBT_DEBUG("Actor %ld will not be explored, for it is in the sleep set", aid);
+
+        actors_to_run_.at(aid).mark_done();
       }
-    else
-      XBT_DEBUG("Transition >>%s<< removed from the sleep set because it was dependent on >>%s<<", transition.to_string().c_str(), previous_state->get_transition()->to_string().c_str());
-
+    } else
+      XBT_DEBUG("Transition >>%s<< removed from the sleep set because it was dependent on >>%s<<",
+                transition.to_string().c_str(), previous_state->get_transition()->to_string().c_str());
   }
-  }
-
+
 std::size_t State::count_todo() const
 {
   return boost::range::count_if(this->actors_to_run_, [](auto& pair) { return pair.second.is_todo(); });
 }
-void State::mark_all_todo()
+void State::mark_all_todo()
 {
-  for (auto & [aid, actor] : actors_to_run_) {
+  for (auto& [aid, actor] : actors_to_run_) {
-    if (actor.is_enabled() and not actor.is_done() and not actor.is_todo())
-      actor.mark_todo();
-
-  }
+    if (actor.is_enabled() and not actor.is_done() and not actor.is_todo())
+      actor.mark_todo();
+  }
 }
-
+
 Transition* State::get_transition() const
 {
   return transition_;
@@ -83,22 +80,19 @@ aid_t State::next_transition() const
   XBT_DEBUG("Search for an actor to run.
%zu actors to consider", actors_to_run_.size());
   for (auto const& [aid, actor] : actors_to_run_) {
     /* Only consider actors (1) marked as interleaving by the checker and (2) currently enabled in the application */
-    if (not actor.is_todo() || not actor.is_enabled() || actor.is_done()) {
-
-      if (not actor.is_todo())
-        XBT_DEBUG("Can't run actor %ld because it is not todo", aid);
-
-      if (not actor.is_enabled())
-        XBT_DEBUG("Can't run actor %ld because it is not enabled", aid);
+    if (not actor.is_todo() || not actor.is_enabled() || actor.is_done()) {
-      if (actor.is_done())
-        XBT_DEBUG("Can't run actor %ld because it has already been done", aid);
+      if (not actor.is_todo())
+        XBT_DEBUG("Can't run actor %ld because it is not todo", aid);
-
-      continue;
+      if (not actor.is_enabled())
+        XBT_DEBUG("Can't run actor %ld because it is not enabled", aid);
+      if (actor.is_done())
+        XBT_DEBUG("Can't run actor %ld because it has already been done", aid);
-    }
+      continue;
+    }
     return aid;
   }
diff --git a/src/mc/explo/DFSExplorer.cpp b/src/mc/explo/DFSExplorer.cpp
index a3e0c38a51..e27cc33ac5 100644
--- a/src/mc/explo/DFSExplorer.cpp
+++ b/src/mc/explo/DFSExplorer.cpp
@@ -143,13 +143,14 @@ void DFSExplorer::run()
       continue;
     }
-    XBT_VERB("Sleep set currently contains:");
-    for (auto & [aid, transition] : state->get_sleep_set()) {
-
-      XBT_VERB(" <%ld,%s>", aid, transition.to_string().c_str());
+    if (_sg_mc_sleep_set) {
+      XBT_VERB("Sleep set currently contains:");
+      for (auto & [aid, transition] : state->get_sleep_set()) {
+
+        XBT_VERB(" <%ld,%s>", aid, transition.to_string().c_str());
+      }
     }
-
     /* Actually answer the request: let's execute the selected request (MCed does one step) */
     state->execute_next(next);
     on_transition_execute_signal(state->get_transition(), get_remote_app());
@@ -164,7 +165,7 @@ void DFSExplorer::run()
     /* If we want sleep set reduction, pass the old state to the new state so it can
      * both copy the sleep set and possibly remove things from it locally */
-    if (sleep_set_reduction_)
+    if (_sg_mc_sleep_set)
       next_state = std::make_unique<State>(get_remote_app(), state);
     else
       next_state = std::make_unique<State>(get_remote_app());
@@ -305,8 +306,6 @@ DFSExplorer::DFSExplorer(const std::vector<char*>& args, bool with_dpor) : Explo
   else
     reduction_mode_ = ReductionMode::none;
-  sleep_set_reduction_ = _sg_mc_sleep_set;
-
   if (_sg_mc_termination) {
     if (with_dpor) {
       XBT_INFO("Check non progressive cycles (turning DPOR off)");
diff --git a/src/mc/explo/DFSExplorer.hpp b/src/mc/explo/DFSExplorer.hpp
index ad79c59ee6..0a0d51e53d 100644
--- a/src/mc/explo/DFSExplorer.hpp
+++ b/src/mc/explo/DFSExplorer.hpp
@@ -83,7 +83,6 @@ public:
 private:
   void check_non_termination(const State* current_state);
   void backtrack();
-  bool sleep_set_reduction_;
   /** Stack representing the position in the exploration graph */
   std::list<std::unique_ptr<State>> stack_;
diff --git a/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh b/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh
index 70b2a2c39b..28d690fde0 100644
--- a/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh
+++ b/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh
@@ -767,4 +767,4 @@ $ ${bindir:=.}/maxmin_bench small 10 test
 > [0.000000]: [ker_lmm/DEBUG] '92'(1.000000) : 0.373508
 > [0.000000]: [ker_lmm/DEBUG] '91'(1.000000) : 0.240523
 > [0.000000]: [ker_lmm/DEBUG] '100'(0.000000) : 0.000000
-> 10x One shot execution time for a total of 10 constraints, 10 variables with 4 active constraint each, concurrency in [2,6] and max concurrency share 2
+> 10x One shot execution time for a total of 10
constraints, 10 variables with 4 active constraint each, concurrency in [2,6] and max concurrency share 2 \ No newline at end of file diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh index 2010817a05..9fba18a6e3 100644 --- a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh +++ b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh @@ -1,32 +1,380 @@ # Smpi Allreduce collectives tests p Test allreduce -$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --log=smpi_pmpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc -> [0.000000] [smpi/INFO] [rank 0] -> Tremblay -> [0.000000] [smpi/INFO] [rank 1] -> Tremblay -> [0.000000] [smpi/INFO] [rank 2] -> Tremblay -> [0.000000] [smpi/INFO] [rank 3] -> Tremblay -> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor. +$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun + +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. 
+> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. 
+> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. 
+> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. 
+> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. 
+> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. 
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting
+>
> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> Largest allocation at once from a single process was 28 bytes, at coll
> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+> This can be done automatically by setting
>
> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
-> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with
> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> Largest allocation at once from a single process was 28 bytes, at coll
> If this is too much, consider sharing allocations for computation buffers.
-> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+> This can be done automatically by setting
>
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 33 unique states visited; 7 backtracks (140 transition replays, 101 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 616 unique states visited; 167 backtracks (3773 transition replays, 2991 states visited overall)