X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/0787238b4678dec7f6a702c29077536f560ca7b5..3e0155bf0ef9c98134733f39347c504b16ed0a09:/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh

diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
index de00827f29..2010817a05 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
@@ -2,42 +2,31 @@
 p Test allreduce
 $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml -np 4 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --log=smpi_pmpi.thres:error --cfg=smpi/list-leaks:10 --log=no_loc
-> [rank 0] -> Tremblay
-> [rank 1] -> Tremblay
-> [rank 2] -> Tremblay
-> [rank 3] -> Tremblay
-> [0.000000] [mc_safety/INFO] Check a safety property. Reduction is: dpor.
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles :
-> [0.000000] [smpi_utils/INFO] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi/INFO] [rank 0] -> Tremblay
+> [0.000000] [smpi/INFO] [rank 1] -> Tremblay
+> [0.000000] [smpi/INFO] [rank 2] -> Tremblay
+> [0.000000] [smpi/INFO] [rank 3] -> Tremblay
+> [0.000000] [mc_dfs/INFO] Start a DFS exploration. Reduction is: dpor.
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 4 unfreed buffers : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 128 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 16 bytes, at coll-allreduce-with-leaks.c:27. It was called 4 times during the whole simulation.
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
 >
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles :
-> [0.000000] [smpi_utils/INFO] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles:
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 4 unfreed buffers : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 128 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 16 bytes, at coll-allreduce-with-leaks.c:27. It was called 4 times during the whole simulation.
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers:
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
 >
-> [0.000000] [mc_safety/INFO] No property violation found.
-> [0.000000] [mc_safety/INFO] Expanded states = 63
-> [0.000000] [mc_safety/INFO] Visited states = 500
-> [0.000000] [mc_safety/INFO] Executed transitions = 484
+> [0.000000] [mc_dfs/INFO] DFS exploration ended. 33 unique states visited; 7 backtracks (140 transition replays, 101 states visited overall)
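
For context, the expected output above comes from a deliberately leaky MPI program run under SMPI's allocation tracking (--cfg=smpi/display-allocs:yes). The following is a minimal sketch of that kind of program, assuming the usual pattern of an unfreed duplicated communicator plus malloc'ed buffers that are never released; it is illustrative only and is not the actual coll-allreduce-with-leaks.c source (the rank-dependent buffer size in particular is an assumption made to mirror the size spread in the report).

/* sketch.c -- hypothetical illustration, not the real test source */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
  int rank;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* Leaked handle: no matching MPI_Comm_free(). One MPI_Comm and its
   * MPI_Group per rank would account for the "4 leaked handles of type
   * MPI_Comm" and "... MPI_Group" lines with -np 4. */
  MPI_Comm dup;
  MPI_Comm_dup(MPI_COMM_WORLD, &dup);

  /* Leaked buffers: no matching free(). Two allocations per rank give
   * 8 unfreed buffers across 4 ranks; the rank-dependent size is an
   * assumption to mimic the reported 16..28 byte spread. */
  int* sendbuf = malloc(16 + rank * sizeof(int));
  int* recvbuf = malloc(16 + rank * sizeof(int));
  sendbuf[0] = rank;

  MPI_Allreduce(sendbuf, recvbuf, 1, MPI_INT, MPI_SUM, dup);

  MPI_Finalize(); /* SMPI reports the still-allocated handles/buffers here */
  return 0;
}

With such a program, each branch explored by simgrid-mc replays the run to completion, which is why the leak report appears once per explored execution path in the expected output.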