From: Maxwell Pirtle
Date: Tue, 30 May 2023 13:49:26 +0000 (+0200)
Subject: Adjust tesh tests according to changes in deps.
X-Git-Tag: v3.34~68^2~7
X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/commitdiff_plain/ce0494def6fe3da07d54a5994e43ca532e1ed34e

Adjust tesh tests according to changes in deps.

A number of tests were affected by the previous additions, which label
transitions executed by the same actor as dependent. The expected outputs
were updated accordingly, and a "sanity" check was performed before
committing them. (A short sketch illustrating the dependency rule follows
the diff.)
---

diff --git a/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh b/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
index 653cbf94e8..56813fda7e 100644
--- a/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
+++ b/examples/cpp/mc-bugged2/s4u-mc-bugged2.tesh
@@ -16,7 +16,7 @@ $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-c
 > [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
 > [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
 > [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;3;3;1'
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 551 unique states visited; 137 backtracks (2239 transition replays, 1551 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1070 unique states visited; 252 backtracks (4082 transition replays, 2760 states visited overall)
 
 ! expect return 1
 ! timeout 20
@@ -26,12 +26,13 @@ $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-c
 > [0.000000] [mc_explo/INFO] *** PROPERTY NOT VALID ***
 > [0.000000] [mc_explo/INFO] **************************
 > [0.000000] [mc_explo/INFO] Counter-example execution trace:
-> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
 > [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
+> [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
 > [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
 > [0.000000] [mc_explo/INFO] 1: iRecv(mbox=0)
 > [0.000000] [mc_explo/INFO] 3: WaitComm(from 3 to 1, mbox=0, no timeout)
 > [0.000000] [mc_explo/INFO] 3: iSend(mbox=0)
 > [0.000000] [mc_explo/INFO] 1: WaitComm(from 3 to 1, mbox=0, no timeout)
-> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'3;1;1;1;3;3;1'
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1161 unique states visited; 282 backtracks (4556 transition replays, 3113 states visited overall)
+> [0.000000] [mc_explo/INFO] You can debug the problem (and see the whole details) by rerunning out of simgrid-mc with --cfg=model-check/replay:'1;3;1;1;3;3;1'
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 995 unique states visited; 253 backtracks (4006 transition replays, 2758 states visited overall)
+
diff --git a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex-stateful.tesh b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex-stateful.tesh
index 3d1e75d99b..8ca73b7c62 100644
--- a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex-stateful.tesh
+++ b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex-stateful.tesh
@@ -7,7 +7,7 @@ $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-c
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '2'
 > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 130 unique states visited; 27 backtracks (209 transition replays, 52 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 66 unique states visited; 11 backtracks (97 transition replays, 20 states visited overall)
 
 p The stats without checkpoints is: 130 unique states visited; 27 backtracks (308 transition replays, 151 states visited overall)
 p But it runs much faster (0.6 sec vs. 1.6 sec), damn slow checkpointing code.
diff --git a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
index 3ef36cc3a2..4de47b1d5c 100644
--- a/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
+++ b/examples/cpp/synchro-mutex/s4u-mc-synchro-mutex.tesh
@@ -81,11 +81,11 @@ $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-c
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '2'
 > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 130 unique states visited; 27 backtracks (308 transition replays, 151 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 66 unique states visited; 11 backtracks (126 transition replays, 49 states visited overall)
 
 $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../../bin/simgrid-mc --cfg=model-check/sleep-set:true --cfg=model-check/strategy:nb_wait -- ${bindir:=.}/s4u-synchro-mutex --cfg=actors:3 --log=s4u_test.thres:critical
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/strategy' to 'nb_wait'
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'actors' to '3'
 > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 3492 unique states visited; 743 backtracks (12498 transition replays, 8263 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 296 unique states visited; 52 backtracks (765 transition replays, 417 states visited overall)
\ No newline at end of file
diff --git a/examples/sthread/pthread-mc-producer-consumer.tesh b/examples/sthread/pthread-mc-producer-consumer.tesh
index fd44bea235..89be82586b 100644
--- a/examples/sthread/pthread-mc-producer-consumer.tesh
+++ b/examples/sthread/pthread-mc-producer-consumer.tesh
@@ -5,11 +5,11 @@ $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../bin/simgrid-mc --cfg=model-chec
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
 > [0.000000] [sthread/INFO] Starting the simulation.
 > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1101 unique states visited; 136 backtracks (2950 transition replays, 1713 states visited overall)
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 106 unique states visited; 17 backtracks (295 transition replays, 172 states visited overall)
 
 $ $VALGRIND_NO_TRACE_CHILDREN ${bindir:=.}/../../bin/simgrid-mc --cfg=model-check/sleep-set:true --cfg=model-check/strategy:nb_wait --cfg=model-check/setenv:LD_PRELOAD=${libdir:=.}/libsgmalloc.so:${libdir:=.}/libsthread.so ${bindir:=.}/pthread-producer-consumer -q -c 2 -C 1 -p 2 -P 1
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/sleep-set' to 'true'
 > [0.000000] [xbt_cfg/INFO] Configuration change: Set 'model-check/strategy' to 'nb_wait'
 > [0.000000] [sthread/INFO] Starting the simulation.
 > [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1004 unique states visited; 115 backtracks (2593 transition replays, 1474 states visited overall)
\ No newline at end of file
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 107 unique states visited; 18 backtracks (300 transition replays, 175 states visited overall)
\ No newline at end of file
diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
index 96f475d52f..7eb120f359 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
@@ -403,4 +403,246 @@ $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
 >
-> [0.000000] [mc_dfs/INFO] DFS exploration ended. 635 unique states visited; 173 backtracks (3896 transition replays, 3088 states visited overall)
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
+> [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
+> If this is too much, consider sharing allocations for computation buffers.
+> This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
+>
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 1005 unique states visited; 276 backtracks (6559 transition replays, 5278 states visited overall)
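
For context, the dependency rule behind these updated expectations can be
illustrated with a short sketch. This is an illustration only: the
Transition struct and the depends() function below are hypothetical
stand-ins rather than SimGrid's actual model-checker classes, and the
mailbox comparison is a simplified placeholder for the real resource-based
dependency analysis.

// Hypothetical sketch of a DPOR dependency relation in which two
// transitions are dependent whenever the same actor executes both,
// on top of the usual resource-based condition (modeled here as
// touching the same mailbox).
#include <iostream>

struct Transition {
  long actor_id; // actor executing this transition
  int mailbox;   // mailbox touched by the transition, or -1 for none
};

bool depends(const Transition& a, const Transition& b)
{
  // The rule these tests were adjusted for: transitions of a single
  // actor follow its program order, hence are always dependent.
  if (a.actor_id == b.actor_id)
    return true;
  // Otherwise fall back to a resource-based check.
  return a.mailbox != -1 && a.mailbox == b.mailbox;
}

int main()
{
  const Transition recv{1, 0};  // actor 1: iRecv(mbox=0)
  const Transition wait{1, -1}; // actor 1: WaitComm
  const Transition send{3, 0};  // actor 3: iSend(mbox=0)

  std::cout << std::boolalpha
            << depends(recv, wait) << '\n'  // true: same actor
            << depends(wait, send) << '\n'  // false: different actors, no shared mailbox
            << depends(recv, send) << '\n'; // true: both touch mailbox 0
}

Marking same-actor transitions as dependent changes which interleavings the
DPOR reduction treats as equivalent, which is why the expected state,
backtrack, and replay counts in the tesh files above had to change.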