From: mlaurent
Date: Fri, 24 Feb 2023 14:42:51 +0000 (+0100)
Subject: Merge branch 'master' of https://framagit.org/simgrid/simgrid
X-Git-Tag: v3.34~436^2
X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/commitdiff_plain/3203afd846219ef8b41cadda945ea0a98103c46f?hp=0321eed9a43eb7c7c7ce8ce0274963c5a4ade6ad

Merge branch 'master' of https://framagit.org/simgrid/simgrid
---

diff --git a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh
index 4e13f97131..cffdd1cda7 100644
--- a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh
+++ b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness-visited.tesh
@@ -3,7 +3,7 @@
 ! expect return 2
 ! timeout 30
 ! output display
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 1 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=model-check/visited:100 --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 1 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=model-check/visited:100 --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
 > [ 0.000000] (0:maestro@) Check the liveness property promela_bugged1_liveness
 > [ 0.000000] (2:client@Boivin) Ask the request
 > [ 0.000000] (3:client@Fafard) Ask the request
diff --git a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh
index 658e032b3e..14da430e3d 100644
--- a/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh
+++ b/examples/cpp/mc-bugged1-liveness/s4u-mc-bugged1-liveness.tesh
@@ -3,7 +3,7 @@
 ! expect return 2
 ! timeout 20
 ! output display
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 0 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged1-liveness ${platfdir:=.}/small_platform.xml 0 --log=xbt_cfg.thresh:warning "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --cfg=contexts/factory:ucontext --cfg=contexts/stack-size:256 --cfg=model-check/property:promela_bugged1_liveness
 > [ 0.000000] (0:maestro@) Check the liveness property promela_bugged1_liveness
 > [ 0.000000] (2:client@Boivin) Ask the request
 > [ 0.000000] (3:client@Fafard) Ask the request
diff --git a/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh b/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh
index 97ec9d5b62..adbd4527fa 100644
--- a/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh
+++ b/examples/cpp/mc-bugged1/s4u-mc-bugged1.tesh
@@ -9,13 +9,57 @@ $ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1 ${platfdir:=.
 > [ 0.000000] (1:server@HostA) OK
 > [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (3:client@HostC) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
 > [ 0.000000] (3:client@HostC) Sent!
 > [ 0.000000] (1:server@HostA) OK
 > [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (2:client@HostB) Sent!
+> [ 0.000000] (3:client@HostC) Sent!
+> [ 0.000000] (1:server@HostA) OK
+> [ 0.000000] (4:client@HostD) Sent!
 > [ 0.000000] (2:client@HostB) Sent!
 > [ 0.000000] (0:maestro@) **************************
 > [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
@@ -32,4 +76,4 @@ $ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged1 ${platfdir:=.
 > [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
 > [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
 > [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2;1;1;2;4;1;1;3;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 22 unique states visited; 4 backtracks (56 transition replays, 30 states visited overall)
\ No newline at end of file
+> [ 0.000000] (0:maestro@) DFS exploration ended. 59 unique states visited; 14 backtracks (192 transition replays, 119 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh b/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
index e8b5ceb102..d64aed948f 100644
--- a/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
+++ b/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
@@ -2,554 +2,18 @@
 ! expect return 1
 ! timeout 20

-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning --cfg=contexts/stack-size:256
-> [ 0.000000] (0:maestro@) Start a DFS exploration. Reduction is: dpor.
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 1 2
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) Second pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 1
-> [ 0.000000] (1:server@HostA) First pair received: 2 2
-> [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
-> [ 0.000000] (0:maestro@) **************************
-> [ 0.000000] (0:maestro@) Counter-example execution trace:
-> [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
-> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
-> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) 3: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) 1: iRecv(mbox=0)
-> [ 0.000000] (0:maestro@) 3: iSend(mbox=0)
-> [ 0.000000] (0:maestro@) 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;3;1;3;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 1006 unique states visited; 350 backtracks (5319 transition replays, 3963 states visited overall)
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-bugged2 ${platfdir:=.}/model_checker_platform.xml --log=root.thresh:critical --log=mc.thresh:info
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [mc_ModelChecker/INFO] **************************
+> [0.000000] [mc_ModelChecker/INFO] *** PROPERTY NOT VALID ***
+> [0.000000] [mc_ModelChecker/INFO] **************************
+> [0.000000] [mc_ModelChecker/INFO] Counter-example execution trace:
+> [0.000000] [mc_ModelChecker/INFO]   1: iRecv(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO]   3: iSend(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO]   1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO]   1: iRecv(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO]   3: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO]   3: iSend(mbox=0)
+> [0.000000] [mc_ModelChecker/INFO]   1: WaitComm(from 3 to 1, mbox=0, no timeout)
+> [0.000000] [mc_ModelChecker/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;3;3;1'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 2091 unique states visited; 529 backtracks (8359 transition replays, 5739 states visited overall)
diff --git a/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh b/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh
index e974265688..ba390d6eb9 100644
--- a/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh
+++ b/examples/cpp/mc-electric-fence/s4u-mc-electric-fence.tesh
@@ -5,8 +5,88 @@ $ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-electric-fence ${plat
 > [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
 > [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
 > [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
 > [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
 > [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
 > [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
 > [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 15 unique states visited; 5 backtracks (32 transition replays, 13 states visited overall)
+> [HostC:client:(3) 0.000000] [electric_fence/INFO] Sent!
+> [HostA:server:(1) 0.000000] [electric_fence/INFO] OK
+> [HostB:client:(2) 0.000000] [electric_fence/INFO] Sent!
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 169 unique states visited; 29 backtracks (261 transition replays, 64 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh b/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh
index fc687a94cc..6d4d6e5782 100644
--- a/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh
+++ b/examples/cpp/mc-failing-assert/s4u-mc-failing-assert.tesh
@@ -2,16 +2,21 @@
 ! expect return 1
 ! timeout 20

-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning
+$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%a@%h)%e%m%n" --log=xbt_cfg.thresh:warning
 > [ 0.000000] (0:maestro@) Start a DFS exploration. Reduction is: dpor.
 > [ 0.000000] (2:client1@Bourassa) Sent!
 > [ 0.000000] (1:server@Boivin) OK
 > [ 0.000000] (3:client2@Fafard) Sent!
-> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (2:client1@Bourassa) Sent!
 > [ 0.000000] (2:client1@Bourassa) Sent!
 > [ 0.000000] (1:server@Boivin) OK
 > [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
 > [ 0.000000] (0:maestro@) **************************
 > [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
 > [ 0.000000] (0:maestro@) **************************
@@ -23,4 +28,4 @@ $ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-failing-assert ${plat
 > [ 0.000000] (0:maestro@) 2: iSend(mbox=0)
 > [ 0.000000] (0:maestro@) 1: WaitComm(from 2 to 1, mbox=0, no timeout)
 > [ 0.000000] (0:maestro@) You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;2;1'
-> [ 0.000000] (0:maestro@) DFS exploration ended. 18 unique states visited; 4 backtracks (36 transition replays, 14 states visited overall)
+> [ 0.000000] (0:maestro@) DFS exploration ended. 29 unique states visited; 5 backtracks (49 transition replays, 15 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh b/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh
index 0ce52a1c98..51eeae219a 100644
--- a/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh
+++ b/examples/cpp/synchro-barrier/s4u-mc-synchro-barrier.tesh
@@ -20,13 +20,33 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] BARRIER_WAIT(barrier: 0) (state=3)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=4)
 > [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=4)
+> [Checker] Dependent Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=3)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
-> [Checker] Backtracking from 1;2
-> [Checker] DFS exploration ended. 5 unique states visited; 2 backtracks (7 transition replays, 1 states visited overall)
+> [Checker] Execute 2: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 1, state: 1, 0 interleaves)
+> [Checker] Execute 1: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 2, state: 6, 0 interleaves)
+> [Checker] Execute 1: BARRIER_WAIT(barrier: 0) (stack depth: 3, state: 7, 0 interleaves)
+> [Checker] Execute 2: BARRIER_WAIT(barrier: 0) (stack depth: 4, state: 8, 0 interleaves)
+> [Checker] Execution came to an end at 2;1;1;2;0 (state: 9, depth: 5)
+> [Checker] Backtracking from 2;1;1;2;0
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=7)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=8)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=6)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=8)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=7)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=6)
+> [Checker] DFS exploration ended. 9 unique states visited; 2 backtracks (10 transition replays, 0 states visited overall)

 $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-barrier 3 --log=s4u_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n"
 > [Checker] Start a DFS exploration. Reduction is: dpor.
@@ -44,6 +64,9 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_WAIT(barrier: 0) (state=4)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=6)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=6)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_WAIT(barrier: 0) (state=4)
 > [Checker] BARRIER_WAIT(barrier: 0) (state=5)
@@ -59,8 +82,41 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=3)
-> [Checker] Backtracking from 1;2;3
 > [Checker] INDEPENDENT Transitions:
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
 > [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
-> [Checker] DFS exploration ended. 7 unique states visited; 2 backtracks (10 transition replays, 2 states visited overall)
+> [Checker] Execute 3: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 2, state: 2, 0 interleaves)
+> [Checker] Execute 2: BARRIER_ASYNC_LOCK(barrier: 0) (stack depth: 3, state: 8, 0 interleaves)
+> [Checker] Execute 1: BARRIER_WAIT(barrier: 0) (stack depth: 4, state: 9, 0 interleaves)
+> [Checker] Execute 2: BARRIER_WAIT(barrier: 0) (stack depth: 5, state: 10, 0 interleaves)
+> [Checker] Execute 3: BARRIER_WAIT(barrier: 0) (stack depth: 6, state: 11, 0 interleaves)
+> [Checker] Execution came to an end at 1;3;2;1;2;3;0 (state: 12, depth: 7)
+> [Checker] Backtracking from 1;3;2;1;2;3;0
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=11)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=10)
+> [Checker] Dependent Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] BARRIER_WAIT(barrier: 0) (state=9)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=8)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=1)
+> [Checker] BARRIER_ASYNC_LOCK(barrier: 0) (state=2)
+> [Checker] DFS exploration ended. 12 unique states visited; 2 backtracks (14 transition replays, 1 states visited overall)
\ No newline at end of file
diff --git a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
index 2db23f3b98..ea81b940a2 100644
--- a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
+++ b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
@@ -13,6 +13,12 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 6, 0 interleaves)
 > [Checker] Execution came to an end at 1;1;1;2;2;2;0 (state: 7, depth: 7)
 > [Checker] Backtracking from 1;1;1;2;2;2;0
+> [Checker] Dependent Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
+> [Checker] Dependent Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
@@ -22,205 +28,60 @@ $ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt
 > [Checker] Dependent Transitions:
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 2, state: 2, 0 interleaves)
-> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 3, state: 3, 0 interleaves)
-> [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 4, state: 4, 0 interleaves)
-> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 5, state: 5, 0 interleaves)
-> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 6, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (stack depth: 7, state: 7, 0 interleaves)
-> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 8, state: 8, 0 interleaves)
-> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 9, state: 9, 0 interleaves)
-> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 10, state: 10, 0 interleaves)
-> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 11, state: 11, 0 interleaves)
-> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 12, 0 interleaves)
-> [Checker] Execution came to an end at 1;1;1;2;2;2;3;3;3;4;4;4;0 (state: 13, depth: 13)
-> [Checker] Backtracking from 1;1;1;2;2;2;3;3;3;4;4;4;0
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 1, owner: -1) (state=9)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=8)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=10)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=7)
-> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 7, state: 7, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 8, state: 14, 0 interleaves)
-> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 9, state: 15, 0 interleaves)
-> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: 3) (stack depth: 10, state: 16, 0 interleaves)
-> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 11, state: 17, 0 interleaves)
-> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 18, 0 interleaves)
-> [Checker] Execution came to an end at 1;1;1;2;2;2;4;3;4;4;3;3;0 (state: 19, depth: 13)
-> [Checker] Backtracking from 1;1;1;2;2;2;4;3;4;4;3;3;0
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 1, owner: 3) (state=16)
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=17)
-> [Checker] Backtracking from 1;1;1;2;2;2;4;3;4;4
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=14)
-> [Checker] MUTEX_WAIT(mutex: 1, owner: 4) (state=15)
-> [Checker] Dependent Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=14)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=6)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=5)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
-> [Checker] INDEPENDENT Transitions:
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=7)
+> [Checker] INDEPENDENT Transitions:
+> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=3)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
 > [Checker] INDEPENDENT Transitions:
 > [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=2)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
 > [Checker] Dependent Transitions:
 > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=1)
-> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=4)
+> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 1) (state=3)
 > [Checker] Execute 2: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 1, state: 1, 0 interleaves)
-> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 20, 0 interleaves)
-> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 21, 0 interleaves)
-> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 22, 0 interleaves)
-> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 23, 0 interleaves)
-> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 24, 0 interleaves)
-> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (stack depth: 7, state: 25, 0 interleaves)
-> [Checker] Execute
3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 8, state: 26, 0 interleaves) -> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 9, state: 27, 0 interleaves) -> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 10, state: 28, 0 interleaves) -> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 11, state: 29, 0 interleaves) -> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 30, 0 interleaves) -> [Checker] Execution came to an end at 2;1;2;2;1;1;3;3;3;4;4;4;0 (state: 31, depth: 13) -> [Checker] Backtracking from 2;1;2;2;1;1;3;3;3;4;4;4;0 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 1, owner: -1) (state=27) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) (state=26) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28) -> [Checker] Dependent Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=28) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=24) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 3) (state=25) -> [Checker] Execute 4: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 7, state: 25, 0 interleaves) -> [Checker] Execute 3: MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (stack depth: 8, state: 32, 0 interleaves) -> [Checker] Execute 4: MUTEX_WAIT(mutex: 1, owner: 4) (stack depth: 9, state: 33, 0 interleaves) -> [Checker] Execute 4: MUTEX_UNLOCK(mutex: 1, owner: 3) (stack depth: 10, state: 34, 0 interleaves) -> [Checker] Execute 3: MUTEX_WAIT(mutex: 1, owner: 3) (stack depth: 11, state: 35, 0 interleaves) -> [Checker] Execute 3: MUTEX_UNLOCK(mutex: 1, owner: -1) (stack depth: 12, state: 36, 0 interleaves) -> [Checker] Execution came to an end at 2;1;2;2;1;1;4;3;4;4;3;3;0 (state: 37, depth: 13) -> [Checker] Backtracking from 2;1;2;2;1;1;4;3;4;4;3;3;0 +> [Checker] Execute 1: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (stack depth: 2, state: 12, 0 interleaves) +> [Checker] Execute 2: MUTEX_WAIT(mutex: 0, owner: 2) (stack depth: 3, state: 13, 0 interleaves) +> [Checker] Execute 2: MUTEX_UNLOCK(mutex: 0, owner: 1) (stack depth: 4, state: 14, 0 interleaves) +> [Checker] Execute 1: MUTEX_WAIT(mutex: 0, owner: 1) (stack depth: 5, state: 15, 0 interleaves) +> [Checker] Execute 1: MUTEX_UNLOCK(mutex: 0, owner: -1) (stack depth: 6, state: 16, 0 interleaves) +> [Checker] Execution came to an end at 2;1;2;2;1;1;0 (state: 17, depth: 7) +> [Checker] Backtracking from 2;1;2;2;1;1;0 > [Checker] Dependent Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 1, owner: 3) (state=34) -> [Checker] MUTEX_WAIT(mutex: 1, owner: 3) 
(state=35) -> [Checker] Backtracking from 2;1;2;2;1;1;4;3;4;4 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=32) -> [Checker] MUTEX_WAIT(mutex: 1, owner: 4) (state=33) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=16) > [Checker] Dependent Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=32) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14) +> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=15) > [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: -1) (state=24) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12) +> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=14) > [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 1, owner: 4) (state=25) -> [Checker] Dependent Transitions: -> [Checker] MUTEX_UNLOCK(mutex: 0, owner: 1) (state=22) -> [Checker] MUTEX_WAIT(mutex: 0, owner: 1) (state=23) -> [Checker] Backtracking from 2;1;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=21) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12) +> [Checker] MUTEX_WAIT(mutex: 0, owner: 2) (state=13) > [Checker] Dependent Transitions: > [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=1) -> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=20) -> [Checker] DFS exploration ended. 37 unique states visited; 7 backtracks (76 transition replays, 33 states visited overall) +> [Checker] MUTEX_ASYNC_LOCK(mutex: 0, owner: 2) (state=12) +> [Checker] DFS exploration ended. 17 unique states visited; 3 backtracks (21 transition replays, 2 states visited overall) + +$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:2 --log=s4u_test.thres:critical +> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true' +> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '2' +> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor. +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 
128 unique states visited; 26 backtracks (296 transition replays, 143 states visited overall) -$ ${bindir:=.}/../../../bin/simgrid-mc -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical +$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical +> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true' > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '3' > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor. -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 85 unique states visited; 15 backtracks (240 transition replays, 141 states visited overall) +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 4645 unique states visited; 1082 backtracks (18004 transition replays, 12278 states visited overall) diff --git a/examples/cpp/synchro-mutex/s4u-synchro-mutex.tesh b/examples/cpp/synchro-mutex/s4u-synchro-mutex.tesh index ca6b4894e8..fbec803310 100644 --- a/examples/cpp/synchro-mutex/s4u-synchro-mutex.tesh +++ b/examples/cpp/synchro-mutex/s4u-synchro-mutex.tesh @@ -1,17 +1,17 @@ #!/usr/bin/env tesh $ ${bindir:=.}/s4u-synchro-mutex -> [Jupiter:worker:(1) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock +> [Jupiter:worker:(1) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard > [Jupiter:worker:(1) 0.000000] [s4u_test/INFO] I'm done, good bye -> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock +> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard > [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] I'm done, good bye -> [Jupiter:worker:(5) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock +> [Jupiter:worker:(5) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard > [Jupiter:worker:(5) 0.000000] [s4u_test/INFO] I'm done, good bye -> [Jupiter:worker:(7) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock +> [Jupiter:worker:(7) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard > [Jupiter:worker:(7) 0.000000] [s4u_test/INFO] I'm done, good bye -> [Jupiter:worker:(9) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock +> [Jupiter:worker:(9) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard > [Jupiter:worker:(9) 0.000000] [s4u_test/INFO] I'm done, good bye -> [Jupiter:worker:(11) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a scoped_lock +> [Jupiter:worker:(11) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard > [Jupiter:worker:(11) 0.000000] [s4u_test/INFO] I'm done, good bye > [Tremblay:worker:(2) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a regular lock > [Tremblay:worker:(2) 0.000000] [s4u_test/INFO] I'm done, good bye diff --git a/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh b/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh index 1fe9bddd3f..54e641e529 100644 --- a/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh +++ b/examples/cpp/synchro-semaphore/s4u-mc-synchro-semaphore.tesh @@ -1,79 +1,6 @@ #!/usr/bin/env tesh -$ ${bindir:=.}/../../../bin/simgrid-mc --log=mc_dfs.thres:verbose --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-semaphore --log=sem_test.thres:critical 
--log=root.fmt="[App%e%e%e%e]%e%m%n" +$ ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true --log=mc_dfs.thres:info --log=root.fmt="[Checker]%e%m%n" -- ${bindir:=.}/s4u-synchro-semaphore --log=sem_test.thres:critical --log=root.fmt="[App%e%e%e%e]%e%m%n" +> [Checker] Configuration change: Set 'model-check/sleep-set' to 'true' > [Checker] Start a DFS exploration. Reduction is: dpor. -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 1, state: 1, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 2, state: 2, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 3, state: 3, 0 interleaves) -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 4, state: 4, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 5, state: 5, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 6, state: 6, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 7, state: 7, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 8, state: 8, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 9, state: 9, 0 interleaves) -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 10, state: 10, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 11, state: 11, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 12, state: 12, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 13, state: 13, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 14, state: 14, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 15, state: 15, 0 interleaves) -> [Checker] Execute 1: SEM_ASYNC_LOCK(semaphore: 0) (stack depth: 16, state: 16, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 17, state: 17, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 18, state: 18, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 19, state: 19, 0 interleaves) -> [Checker] Execute 1: SEM_WAIT(semaphore: 0, granted: yes) (stack depth: 20, state: 20, 0 interleaves) -> [Checker] Execute 1: SEM_UNLOCK(semaphore: 1) (stack depth: 21, state: 21, 0 interleaves) -> [Checker] Execute 2: SEM_ASYNC_LOCK(semaphore: 1) (stack depth: 22, state: 22, 0 interleaves) -> [Checker] Execute 2: SEM_WAIT(semaphore: 1, granted: yes) (stack depth: 23, state: 23, 0 interleaves) -> [Checker] Execute 2: SEM_UNLOCK(semaphore: 0) (stack depth: 24, state: 24, 0 interleaves) -> [Checker] Execution came to an end at 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2;1;1;2;2;2;0 (state: 25, depth: 25) -> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2;1;1;2;2;2;0 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=21) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=22) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=20) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=22) -> [Checker] Dependent Transitions: -> [Checker] SEM_UNLOCK(semaphore: 0) (state=19) -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=20) -> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2;1;1;1;2;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=16) -> [Checker] 
SEM_ASYNC_LOCK(semaphore: 1) (state=17) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=15) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=14) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=17) -> [Checker] Dependent Transitions: -> [Checker] SEM_UNLOCK(semaphore: 0) (state=13) -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=14) -> [Checker] Backtracking from 1;1;1;1;2;2;2;1;1;1;2;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=10) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=9) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=8) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=11) -> [Checker] Dependent Transitions: -> [Checker] SEM_UNLOCK(semaphore: 0) (state=7) -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=8) -> [Checker] Backtracking from 1;1;1;1;2;2;2 -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=4) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_UNLOCK(semaphore: 1) (state=3) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_WAIT(semaphore: 0, granted: yes) (state=2) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] INDEPENDENT Transitions: -> [Checker] SEM_ASYNC_LOCK(semaphore: 0) (state=1) -> [Checker] SEM_ASYNC_LOCK(semaphore: 1) (state=5) -> [Checker] DFS exploration ended. 25 unique states visited; 4 backtracks (64 transition replays, 36 states visited overall) +> [Checker] DFS exploration ended. 33 unique states visited; 9 backtracks (125 transition replays, 84 states visited overall) diff --git a/examples/smpi/mc/only_send_deterministic.tesh b/examples/smpi/mc/only_send_deterministic.tesh index 957f042544..447496d1ce 100644 --- a/examples/smpi/mc/only_send_deterministic.tesh +++ b/examples/smpi/mc/only_send_deterministic.tesh @@ -10,4 +10,4 @@ $ ../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-m > [0.000000] [mc_comm_determinism/INFO] The recv communications pattern of the actor 0 is different! Different source for communication #1 > [0.000000] [mc_comm_determinism/INFO] Send-deterministic : Yes > [0.000000] [mc_comm_determinism/INFO] Recv-deterministic : No -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 36 unique states visited; 13 backtracks (97 transition replays, 49 states visited overall) \ No newline at end of file +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 242 unique states visited; 68 backtracks (612 transition replays, 303 states visited overall) \ No newline at end of file diff --git a/examples/smpi/mc/sendsend.tesh b/examples/smpi/mc/sendsend.tesh index 4f73c5d3bc..e719e919b5 100644 --- a/examples/smpi/mc/sendsend.tesh +++ b/examples/smpi/mc/sendsend.tesh @@ -9,7 +9,34 @@ $ ../../../smpi_script/bin/smpirun -quiet -wrapper "${bindir:=.}/../../../bin/si > rank 0 recv the data > rank 1 recv the data > Sent 0 to rank 1 -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 
7 unique states visited; 2 backtracks (10 transition replays, 2 states visited overall) +> Sent 1 to rank 0 +> rank 0 recv the data +> rank 1 recv the data +> Sent 0 to rank 1 +> Sent 1 to rank 0 +> rank 1 recv the data +> rank 0 recv the data +> Sent 1 to rank 0 +> Sent 0 to rank 1 +> rank 0 recv the data +> rank 1 recv the data +> Sent 1 to rank 0 +> Sent 0 to rank 1 +> rank 0 recv the data +> rank 1 recv the data +> Sent 1 to rank 0 +> Sent 0 to rank 1 +> rank 1 recv the data +> rank 0 recv the data +> Sent 1 to rank 0 +> Sent 0 to rank 1 +> rank 0 recv the data +> rank 1 recv the data +> Sent 1 to rank 0 +> Sent 0 to rank 1 +> rank 1 recv the data +> rank 0 recv the data +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 30 unique states visited; 8 backtracks (56 transition replays, 19 states visited overall) p Testing the paranoid model ! timeout 60 @@ -26,6 +53,7 @@ $ ../../../smpi_script/bin/smpirun -quiet -wrapper "${bindir:=.}/../../../bin/si > [0.000000] [mc_global/INFO] Counter-example execution trace: > [0.000000] [mc_global/INFO] 1: iSend(mbox=2) > [0.000000] [mc_global/INFO] 2: iSend(mbox=0) -> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2' +> [0.000000] [mc_global/INFO] 0: +> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;2;0' > [0.000000] [mc_dfs/INFO] DFS exploration ended. 3 unique states visited; 1 backtracks (3 transition replays, 0 states visited overall) > Execution failed with code 3. diff --git a/examples/sthread/pthread-mc-mutex-simple.tesh b/examples/sthread/pthread-mc-mutex-simple.tesh index 1aba081db7..6c9b91c68b 100644 --- a/examples/sthread/pthread-mc-mutex-simple.tesh +++ b/examples/sthread/pthread-mc-mutex-simple.tesh @@ -9,8 +9,10 @@ $ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/setenv:LD_PRELOAD=${libdir > The thread 0 is terminating. > The thread 1 is terminating. > User's main is terminating. +> The thread 0 is terminating. +> The thread 1 is terminating. +> User's main is terminating. > The thread 1 is terminating. > The thread 0 is terminating. > User's main is terminating. -> [0.000000] [mc_dfs/INFO] DFS exploration ended. 17 unique states visited; 3 backtracks (22 transition replays, 3 states visited overall) - +> [0.000000] [mc_dfs/INFO] DFS exploration ended. 23 unique states visited; 3 backtracks (27 transition replays, 2 states visited overall) diff --git a/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh b/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh index 72324f2a60..c633f469dd 100644 --- a/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh +++ b/examples/sthread/pthread-mc-mutex-simpledeadlock.tesh @@ -12,6 +12,12 @@ $ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/setenv:LD_PRELOAD=${libdir > The thread 0 is terminating. > The thread 1 is terminating. > User's main is terminating. +> The thread 0 is terminating. +> The thread 1 is terminating. +> User's main is terminating. +> The thread 0 is terminating. +> The thread 1 is terminating. +> User's main is terminating. 
> [0.000000] [mc_global/INFO] **************************
> [0.000000] [mc_global/INFO] *** DEADLOCK DETECTED ***
> [0.000000] [mc_global/INFO] **************************
@@ -27,5 +33,6 @@ $ ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/setenv:LD_PRELOAD=${libdir
> [0.000000] [mc_global/INFO] 2: MUTEX_ASYNC_LOCK(mutex: 1, owner: 3)
> [0.000000] [mc_global/INFO] 3: MUTEX_WAIT(mutex: 1, owner: 3)
> [0.000000] [mc_global/INFO] 3: MUTEX_ASYNC_LOCK(mutex: 0, owner: 2)
-> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;2;3;2;3;3'
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 19 unique states visited; 2 backtracks (22 transition replays, 2 states visited overall)
+> [0.000000] [mc_global/INFO] 0:
+> [0.000000] [mc_Session/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'2;2;3;2;3;3;0'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 38 unique states visited; 4 backtracks (52 transition replays, 11 states visited overall)
\ No newline at end of file
diff --git a/src/mc/api/ActorState.hpp b/src/mc/api/ActorState.hpp
index 9f29307935..eda280894b 100644
--- a/src/mc/api/ActorState.hpp
+++ b/src/mc/api/ActorState.hpp
@@ -94,7 +94,7 @@ public:
   unsigned int do_consider()
   {
     if (max_consider_ <= times_considered_ + 1)
-      set_done();
+      mark_done();
     return times_considered_++;
   }
   unsigned int get_times_considered() const { return times_considered_; }
@@ -112,7 +112,7 @@ public:
     this->state_            = InterleavingType::todo;
     this->times_considered_ = 0;
   }
-  void set_done() { this->state_ = InterleavingType::done; }
+  void mark_done() { this->state_ = InterleavingType::done; }
 
   inline Transition* get_transition(unsigned times_considered)
   {
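
The do_consider()/mark_done() pair above drives the interleaving life cycle of an actor: each call hands out one variant of the actor's pending simcall, and once the last variant is taken the actor is retired. A minimal stand-alone sketch of that life cycle (simplified names; this is an illustration, not the real simgrid::mc::ActorState, which also tracks enabledness and the pending Transition objects):

#include <iostream>

enum class InterleavingType { disabled, todo, done };

struct ActorStateSketch {
  InterleavingType state_  = InterleavingType::disabled;
  unsigned times_considered_ = 0;
  unsigned max_consider_     = 1; // how many variants of the pending simcall exist

  void mark_todo() { state_ = InterleavingType::todo; times_considered_ = 0; }
  void mark_done() { state_ = InterleavingType::done; }

  // Consume one variant; once the last one is handed out, the actor is done.
  unsigned do_consider() {
    if (max_consider_ <= times_considered_ + 1)
      mark_done();
    return times_considered_++;
  }
};

int main() {
  ActorStateSketch a;
  a.max_consider_ = 2; // e.g. a random draw with two possible outcomes
  a.mark_todo();
  a.do_consider();     // first variant: the actor stays todo
  a.do_consider();     // second variant: the actor becomes done
  std::cout << (a.state_ == InterleavingType::done) << '\n'; // prints 1
}
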
diff --git a/src/mc/api/State.cpp b/src/mc/api/State.cpp
index e93885701f..14f0208c38 100644
--- a/src/mc/api/State.cpp
+++ b/src/mc/api/State.cpp
@@ -24,6 +24,38 @@ State::State(const RemoteApp& remote_app) : num_(++expended_states_)
   }
 }
 
+State::State(const RemoteApp& remote_app, const State* previous_state)
+    : default_transition_(std::make_unique<Transition>()), num_(++expended_states_)
+{
+
+  remote_app.get_actors_status(actors_to_run_);
+
+  transition_ = default_transition_.get();
+
+  /* Stateful model checking */
+  if ((_sg_mc_checkpoint > 0 && (num_ % _sg_mc_checkpoint == 0)) || _sg_mc_termination) {
+    system_state_ = std::make_shared<Snapshot>(num_);
+  }
+
+  /* For each actor in the previous sleep set, keep it if it is not dependent on the current transition.
+   * If we keep it and the actor is enabled in this state, mark the actor as done, so that
+   * it is not explored again */
+  for (auto& [aid, transition] : previous_state->get_sleep_set()) {
+
+    if (not previous_state->get_transition()->depends(&transition)) {
+
+      sleep_set_.emplace(aid, transition);
+      if (actors_to_run_.count(aid) != 0) {
+        XBT_DEBUG("Actor %ld will not be explored, for it is in the sleep set", aid);
+
+        actors_to_run_.at(aid).mark_done();
+      }
+    } else
+      XBT_DEBUG("Transition >>%s<< removed from the sleep set because it is dependent on >>%s<<",
+                transition.to_string().c_str(), previous_state->get_transition()->to_string().c_str());
+  }
+}
+
 std::size_t State::count_todo() const
 {
   return boost::range::count_if(this->actors_to_run_, [](auto& pair) { return pair.second.is_todo(); });
@@ -32,7 +64,7 @@ std::size_t State::count_todo() const
 void State::mark_all_enabled_todo()
 {
   for (auto const& [aid, _] : this->get_actors_list()) {
-    if (this->is_actor_enabled(aid)) {
+    if (this->is_actor_enabled(aid) and not is_actor_done(aid)) {
       this->mark_todo(aid);
     }
   }
@@ -48,8 +80,19 @@ aid_t State::next_transition() const
   XBT_DEBUG("Search for an actor to run. %zu actors to consider", actors_to_run_.size());
   for (auto const& [aid, actor] : actors_to_run_) {
     /* Only consider actors (1) marked as interleaving by the checker and (2) currently enabled in the application */
-    if (not actor.is_todo() || not actor.is_enabled())
+    if (not actor.is_todo() || not actor.is_enabled() || actor.is_done()) {
+
+      if (not actor.is_todo())
+        XBT_DEBUG("Can't run actor %ld because it is not todo", aid);
+
+      if (not actor.is_enabled())
+        XBT_DEBUG("Can't run actor %ld because it is not enabled", aid);
+
+      if (actor.is_done())
+        XBT_DEBUG("Can't run actor %ld because it is already done", aid);
+
       continue;
+    }
     return aid;
   }
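
The new State constructor above implements the classical sleep-set inheritance rule: a child state starts with every sleeping transition of its parent that is independent of the transition just executed, while dependent entries are woken up (dropped). A self-contained sketch of the rule, with Transition and the dependency test stubbed out as toy types (the real check is Transition::depends()):

#include <cstdio>
#include <map>

using aid_t = long;

struct Transition {
  aid_t aid;
  int mutex; // toy model: two transitions are dependent iff they touch the same mutex
  bool depends(const Transition& other) const { return mutex == other.mutex; }
};

// sleep(child) = { t in sleep(parent) | t independent of the transition just executed }
std::map<aid_t, Transition> inherit_sleep_set(const std::map<aid_t, Transition>& parent_sleep,
                                              const Transition& executed) {
  std::map<aid_t, Transition> child_sleep;
  for (auto const& [aid, t] : parent_sleep)
    if (not executed.depends(t))
      child_sleep.emplace(aid, t); // still redundant to explore: keep it asleep
    // otherwise the executed move may enable new behavior for this actor: wake it up
  return child_sleep;
}

int main() {
  std::map<aid_t, Transition> sleep = {{1, {1, /*mutex*/ 0}}, {2, {2, /*mutex*/ 1}}};
  Transition executed{3, /*mutex*/ 1};  // actor 3 just touched mutex 1
  auto child = inherit_sleep_set(sleep, executed);
  std::printf("%zu\n", child.size());   // prints 1: actor 2 was woken up, actor 1 sleeps on
}

Actors kept asleep are additionally marked done in the child state, so next_transition() will never pick them.
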
diff --git a/src/mc/api/State.hpp b/src/mc/api/State.hpp
index 7303b64db2..f96d540aef 100644
--- a/src/mc/api/State.hpp
+++ b/src/mc/api/State.hpp
@@ -42,9 +42,14 @@ class XBT_PRIVATE State : public xbt::Extendable<State> {
   /** Snapshot of system state (if needed) */
   std::shared_ptr<Snapshot> system_state_;
 
+  /* Sleep sets are composed of the actor and the corresponding transition that caused it to be added to the
+   * sleep set. With this information, we can check whether an entry should be removed from the set or not when
+   * exploring a new transition */
+  std::map<aid_t, Transition> sleep_set_;
+
 public:
   explicit State(const RemoteApp& remote_app);
-
+  explicit State(const RemoteApp& remote_app, const State* previous_state);
   /* Returns a positive number if there is another transition to pick, or -1 if not */
   aid_t next_transition() const;
 
@@ -55,7 +60,7 @@ public:
   std::size_t count_todo() const;
   void mark_todo(aid_t actor) { actors_to_run_.at(actor).mark_todo(); }
   void mark_all_enabled_todo();
-  bool is_done(aid_t actor) const { return actors_to_run_.at(actor).is_done(); }
+  bool is_actor_done(aid_t actor) const { return actors_to_run_.at(actor).is_done(); }
   Transition* get_transition() const;
   void set_transition(Transition* t) { transition_ = t; }
   std::map<aid_t, ActorState> const& get_actors_list() const { return actors_to_run_; }
@@ -66,6 +71,9 @@ public:
   Snapshot* get_system_state() const { return system_state_.get(); }
   void set_system_state(std::shared_ptr<Snapshot> state) { system_state_ = std::move(state); }
 
+  std::map<aid_t, Transition> const& get_sleep_set() const { return sleep_set_; }
+  void add_sleep_set(Transition* t) { sleep_set_.insert_or_assign(t->aid_, Transition(t->type_, t->aid_, t->times_considered_)); }
+
   /* Returns the total amount of states created so far (for statistics) */
   static long get_expanded_states() { return expended_states_; }
 };
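
One detail of add_sleep_set() above: the sleep set is keyed by actor id and uses insert_or_assign, so re-adding an actor presumably refreshes the stored transition to the actor's most recent move rather than keeping a stale one; emplace would silently keep the old entry. A toy demonstration with the same map shape:

#include <cassert>
#include <map>
#include <string>

int main() {
  std::map<long, std::string> sleep_set;
  sleep_set.insert_or_assign(2, "MUTEX_ASYNC_LOCK");
  sleep_set.insert_or_assign(2, "MUTEX_WAIT"); // overwrites; emplace would not
  assert(sleep_set.at(2) == "MUTEX_WAIT");
}
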
diff --git a/src/mc/explo/DFSExplorer.cpp b/src/mc/explo/DFSExplorer.cpp
index 57b2202482..43dd7ce29b 100644
--- a/src/mc/explo/DFSExplorer.cpp
+++ b/src/mc/explo/DFSExplorer.cpp
@@ -131,16 +131,26 @@ void DFSExplorer::run()
     if (next < 0) { // If there is no more transition in the current state, backtrack.
       XBT_DEBUG("There remains %lu actors, but none to interleave (depth %zu).", state->get_actor_count(),
                 stack_.size() + 1);
-
+
       if (state->get_actor_count() == 0) {
         mc_model_checker->finalize_app();
         XBT_VERB("Execution came to an end at %s (state: %ld, depth: %zu)", get_record_trace().to_string().c_str(),
                  state->get_num(), stack_.size());
+      }
+
       this->backtrack();
       continue;
     }
 
+    if (_sg_mc_sleep_set) {
+      XBT_VERB("Sleep set currently contains:");
+      for (auto& [aid, transition] : state->get_sleep_set()) {
+
+        XBT_VERB("  <%ld,%s>", aid, transition.to_string().c_str());
+
+      }
+    }
     /* Actually answer the request: let's execute the selected request (MCed does one step) */
     state->execute_next(next);
     on_transition_execute_signal(state->get_transition(), get_remote_app());
@@ -151,9 +161,18 @@
               state->get_transition()->to_string().c_str(), stack_.size(), state->get_num(), state->count_todo());
 
     /* Create the new expanded state (copy the state of MCed into our MCer data) */
-    auto next_state = std::make_unique<State>(get_remote_app());
+    std::unique_ptr<State> next_state;
+
+    /* If we want sleep-set reduction, pass the old state to the new one so that it can
+     * both copy the sleep set and possibly remove transitions from it locally */
+    if (_sg_mc_sleep_set)
+      next_state = std::make_unique<State>(get_remote_app(), state);
+    else
+      next_state = std::make_unique<State>(get_remote_app());
+
     on_state_creation_signal(next_state.get(), get_remote_app());
 
+
     if (_sg_mc_termination)
       this->check_non_termination(next_state.get());
 
@@ -165,7 +184,7 @@
     if (visited_state_ == nullptr) {
       /* Get an enabled process and insert it in the interleave set of the next state */
       for (auto const& [aid, _] : next_state->get_actors_list()) {
-        if (next_state->is_actor_enabled(aid)) {
+        if (next_state->is_actor_enabled(aid) and not next_state->is_actor_done(aid)) {
           next_state->mark_todo(aid);
           if (reduction_mode_ == ReductionMode::dpor)
             break; // With DPOR, we take the first enabled transition
@@ -191,18 +210,27 @@ void DFSExplorer::backtrack()
   backtrack_count_++;
   XBT_VERB("Backtracking from %s", get_record_trace().to_string().c_str());
   on_backtracking_signal(get_remote_app());
-  stack_.pop_back();
-
   get_remote_app().check_deadlock();
 
+  /* We may backtrack either because we reached a leaf, or because every enabled process is in the done/sleep set.
+   * In the first case, we need to remove the last transition, which corresponds to the Finalize */
+  if (stack_.back()->get_transition()->aid_ == 0)
+    stack_.pop_back();
+
   /* Traverse the stack backwards until a state with a non empty interleave set is found, deleting all the states that
    * have it empty in the way. For each deleted state, check if the request that has generated it (from its
-   * predecessor state), depends on any other previous request executed before it. If it does then add it to the
-   * interleave set of the state that executed that previous request. */
+   * predecessor state) depends on any other previous request executed before it by another process. If such a
+   * request exists, find the most recent one and add its process to the interleave set. If that process is not
+   * enabled at this point, then add every enabled process to the interleave set instead */
   bool found_backtracking_point = false;
   while (not stack_.empty() && not found_backtracking_point) {
     std::unique_ptr<State> state = std::move(stack_.back());
+
     stack_.pop_back();
+
+    XBT_DEBUG("Marking Transition >>%s<< of process %ld done and adding it to the sleep set",
+              state->get_transition()->to_string().c_str(), state->get_transition()->aid_);
+    state->add_sleep_set(state->get_transition()); // Actors are marked done when they are considered in ActorState
+
     if (reduction_mode_ == ReductionMode::dpor) {
       aid_t issuer_id = state->get_transition()->aid_;
       for (auto i = stack_.rbegin(); i != stack_.rend(); ++i) {
@@ -210,16 +238,21 @@
         if (state->get_transition()->aid_ == prev_state->get_transition()->aid_) {
           XBT_DEBUG("Simcall >>%s<< and >>%s<< with same issuer %ld", state->get_transition()->to_string().c_str(),
                     prev_state->get_transition()->to_string().c_str(), issuer_id);
-          break;
+          continue;
         } else if (prev_state->get_transition()->depends(state->get_transition())) {
           XBT_VERB("Dependent Transitions:");
           XBT_VERB("  %s (state=%ld)", prev_state->get_transition()->to_string().c_str(), prev_state->get_num());
           XBT_VERB("  %s (state=%ld)", state->get_transition()->to_string().c_str(), state->get_num());
 
-          if (not prev_state->is_done(issuer_id))
-            prev_state->mark_todo(issuer_id);
-          else
-            XBT_DEBUG("Actor %ld is in done set", issuer_id);
+          if (prev_state->is_actor_enabled(issuer_id)) {
+            if (not prev_state->is_actor_done(issuer_id))
+              prev_state->mark_todo(issuer_id);
+            else
+              XBT_DEBUG("Actor %ld is already in done set: no need to explore it again", issuer_id);
+          } else {
+            XBT_DEBUG("Actor %ld is not enabled: DPOR may be failing. To stay sound, we are marking every enabled "
+                      "transition as todo", issuer_id);
+            prev_state->mark_all_enabled_todo();
+          }
           break;
         } else {
           XBT_VERB("INDEPENDENT Transitions:");
@@ -229,12 +262,13 @@
       }
     }
 
-    if (state->count_todo() == 0) { // Empty interleaving set
+    if (state->count_todo() == 0) { // Empty interleaving set: exploration at this level is over
       XBT_DEBUG("Delete state %ld at depth %zu", state->get_num(), stack_.size() + 1);
 
     } else {
-      XBT_DEBUG("Back-tracking to state %ld at depth %zu", state->get_num(), stack_.size() + 1);
-      stack_.push_back(std::move(state)); // Put it back on the stack
+      XBT_DEBUG("Back-tracking to state %ld at depth %zu: %lu transitions left to be explored", state->get_num(),
+                stack_.size() + 1, state->count_todo());
+      stack_.push_back(std::move(state)); // Put it back on the stack so we can explore the next transition of the interleave set
      found_backtracking_point = true;
     }
   }
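
Two details of the backtrack() rewrite above deserve a closer look: predecessors issued by the same actor are now skipped with continue rather than break (program order is not a race, and stopping the scan there could hide an earlier genuine race), and when the issuer is not enabled at the race point, every enabled actor is marked todo to keep the reduction sound. A compact sketch of the resulting race-reversal walk, using a toy Step type instead of SimGrid's State:

#include <cstdio>
#include <vector>

struct Step {
  long aid;          // actor that moved out of this state
  int object;        // toy dependency: same object => dependent transitions
  bool todo = false; // scheduled for re-exploration
};

void backtrack_from(std::vector<Step>& stack) {
  Step last = stack.back();
  stack.pop_back();
  for (auto it = stack.rbegin(); it != stack.rend(); ++it) {
    if (it->aid == last.aid)
      continue;                      // same issuer: not a race, keep scanning
    if (it->object == last.object) { // dependent transitions: a race to reverse
      it->todo = true;               // re-explore this state with last.aid first
      break;                         // only the most recent race matters
    }
  }
}

int main() {
  std::vector<Step> stack = {{1, 0}, {2, 1}, {1, 1}};
  backtrack_from(stack);              // the last step {1,1} races with {2,1}
  std::printf("%d\n", stack[1].todo); // prints 1
}
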
diff --git a/src/mc/mc_config.cpp b/src/mc/mc_config.cpp
index 681137ef33..b4900fe947 100644
--- a/src/mc/mc_config.cpp
+++ b/src/mc/mc_config.cpp
@@ -48,6 +48,10 @@ static simgrid::config::Flag<std::string> cfg_mc_reduction{
       xbt_die("configuration option 'model-check/reduction' can only take 'none' or 'dpor' as a value");
     }};
 
+simgrid::config::Flag<bool> _sg_mc_sleep_set{
+    "model-check/sleep-set", "Whether to enable the use of sleep-set in the reduction algorithm", false,
+    [](bool) { _mc_cfg_cb_check("value to enable/disable the use of sleep-set in the reduction algorithm"); }};
+
 simgrid::config::Flag<int> _sg_mc_checkpoint{
     "model-check/checkpoint", "Specify the amount of steps between checkpoints during stateful model-checking "
                               "(default: 0 => stateless verification). If value=1, one checkpoint is saved for each "
diff --git a/src/mc/mc_config.hpp b/src/mc/mc_config.hpp
index 1f91c222d0..eaf2e20984 100644
--- a/src/mc/mc_config.hpp
+++ b/src/mc/mc_config.hpp
@@ -25,5 +25,6 @@ extern XBT_PRIVATE simgrid::config::Flag<int> _sg_mc_max_depth;
 extern "C" XBT_PUBLIC int _sg_mc_max_visited_states;
 extern XBT_PRIVATE simgrid::config::Flag<std::string> _sg_mc_dot_output_file;
 extern XBT_PRIVATE simgrid::config::Flag<bool> _sg_mc_termination;
+extern XBT_PUBLIC simgrid::config::Flag<bool> _sg_mc_sleep_set;
 
 #endif
diff --git a/src/mc/transition/TransitionComm.cpp b/src/mc/transition/TransitionComm.cpp
index dd0cb88666..512519bb1b 100644
--- a/src/mc/transition/TransitionComm.cpp
+++ b/src/mc/transition/TransitionComm.cpp
@@ -41,9 +41,6 @@ std::string CommWaitTransition::to_string(bool verbose) const
 }
 bool CommWaitTransition::depends(const Transition* other) const
 {
-  if (aid_ == other->aid_)
-    return false;
-
   if (other->type_ < type_)
     return other->depends(this);
 
@@ -82,8 +79,6 @@ std::string CommTestTransition::to_string(bool verbose) const
 }
 bool CommTestTransition::depends(const Transition* other) const
 {
-  if (aid_ == other->aid_)
-    return false;
 
   if (other->type_ < type_)
     return other->depends(this);
 
@@ -117,9 +112,6 @@ std::string CommRecvTransition::to_string(bool verbose) const
 }
 bool CommRecvTransition::depends(const Transition* other) const
 {
-  if (aid_ == other->aid_)
-    return false;
-
   if (other->type_ < type_)
     return other->depends(this);
 
@@ -172,9 +164,6 @@ std::string CommSendTransition::to_string(bool verbose = false) const
 
 bool CommSendTransition::depends(const Transition* other) const
 {
-  if (aid_ == other->aid_)
-    return false;
-
   if (other->type_ < type_)
     return other->depends(this);
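
All four deletions above drop the early 'return false' for transitions of the same actor; together with the TransitionRandom change below, the dependency relation now treats two transitions of one and the same actor as dependent. Sleep sets require this: if an actor's moves were declared independent of each other, the actor could wrongly be put to sleep across its own later transitions. A sketch of that convention (an illustration of the idea, not SimGrid's actual class hierarchy):

#include <cstdio>

struct Transition {
  long aid_;
  virtual ~Transition() = default;
  virtual bool depends(const Transition* other) const {
    return aid_ == other->aid_; // same actor: always dependent (program order)
  }
};

struct RandomTransition : Transition {
  // A random draw shares no object with anyone, so the program-order rule is
  // the only source of dependency, matching the hunk below.
  bool depends(const Transition* other) const override { return aid_ == other->aid_; }
};

int main() {
  RandomTransition a; a.aid_ = 1;
  RandomTransition b; b.aid_ = 2;
  std::printf("%d %d\n", a.depends(&a), a.depends(&b)); // prints 1 0
}
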
diff --git a/src/mc/transition/TransitionRandom.hpp b/src/mc/transition/TransitionRandom.hpp
index b62e8c4b66..ba00821584 100644
--- a/src/mc/transition/TransitionRandom.hpp
+++ b/src/mc/transition/TransitionRandom.hpp
@@ -17,7 +17,7 @@ class RandomTransition : public Transition {
 public:
   std::string to_string(bool verbose) const override;
   RandomTransition(aid_t issuer, int times_considered, std::stringstream& stream);
-  bool depends(const Transition* other) const override { return false; } // Independent with any other transition
+  bool depends(const Transition* other) const override { return aid_ == other->aid_; } // Dependent only with transitions of the same actor
 };
 } // namespace simgrid::mc
diff --git a/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh b/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh
index 70b2a2c39b..28d690fde0 100644
--- a/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh
+++ b/teshsuite/models/maxmin_bench/maxmin_bench_small.tesh
@@ -767,4 +767,4 @@ $ ${bindir:=.}/maxmin_bench small 10 test
> [0.000000]: [ker_lmm/DEBUG] '92'(1.000000) : 0.373508
> [0.000000]: [ker_lmm/DEBUG] '91'(1.000000) : 0.240523
> [0.000000]: [ker_lmm/DEBUG] '100'(0.000000) : 0.000000
-> 10x One shot execution time for a total of 10 constraints, 10 variables with 4 active constraint each, concurrency in [2,6] and max concurrency share 2
+> 10x One shot execution time for a total of 10 constraints, 10 variables with 4 active constraint each, concurrency in [2,6] and max concurrency share 2
\ No newline at end of file
diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
index 2010817a05..002689b454 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
@@ -1,7 +1,7 @@
 # Smpi Allreduce collectives tests
 p Test allreduce
-$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --log=smpi_pmpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc
+$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml --cfg=model-check/sleep-set:true -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc
> [0.000000] [smpi/INFO] [rank 0] -> Tremblay
> [0.000000] [smpi/INFO] [rank 1] -> Tremblay
> [0.000000] [smpi/INFO] [rank 2] -> Tremblay
@@ -29,4 +29,378 @@ $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper
> If this is too much, consider sharing allocations for computation buffers.
> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
>
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 
33 unique states visited; 7 backtracks (140 transition replays, 101 states visited overall) +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. 
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. 
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. +> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary) +> +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles: +> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90 +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm +> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group +> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers: +> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28 +> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls. +> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation. +> If this is too much, consider sharing allocations for computation buffers. 
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 616 unique states visited; 167 backtracks (3773 transition replays, 2991 states visited overall)