From: Augustin Degomme
Date: Mon, 19 Apr 2021 14:07:52 +0000 (+0200)
Subject: Adapt leak example to new display.
X-Git-Tag: v3.28~450
X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/commitdiff_plain/d04db6482a085ddc6a514f34e61cbc8d0da0c1a0

Adapt leak example to new display.

Add another leak and a size change between processes to see what it shows.
---

diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.c b/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.c
index c67de826f2..73b6583a33 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.c
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.c
@@ -25,7 +25,7 @@ int main(int argc, char *argv[])
   MPI_Comm_set_errhandler(dup, MPI_ERRORS_RETURN);
 
   int* sb = (int*)calloc(size * maxlen, sizeof(int));
-  int* rb = (int*)calloc(size * maxlen, sizeof(int));
+  int* rb = (int*)calloc(size * maxlen+rank, sizeof(int));
 
   for (int i = 0; i < size * maxlen; ++i) {
     sb[i] = rank*size + i;
@@ -38,8 +38,7 @@ int main(int argc, char *argv[])
     printf("all_to_all returned %d\n", status);
     fflush(stdout);
   }
-  //Do not free dup and rb
-  free(sb);
+  //Do not free dup and rb/sb
   MPI_Finalize();
   return (EXIT_SUCCESS);
 }
diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.tesh
index b2b0595c24..82bba27f12 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.tesh
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/coll-allreduce-with-leaks.tesh
@@ -1,5 +1,4 @@
 # Smpi Allreduce collectives tests
-! output sort
 
 p Test allreduce
 $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -map -hostfile ../hostfile_coll -platform ${platfdir:=.}/small_platform.xml -np 16 --log=xbt_cfg.thres:critical ${bindir:=.}/coll-allreduce-with-leaks --log=smpi_config.thres:warning --cfg=smpi/display-allocs:yes --cfg=smpi/simulate-computation:no --log=smpi_coll.thres:error --log=smpi_mpi.thres:error --log=smpi_pmpi.thres:error --cfg=smpi/list-leaks:10
@@ -22,20 +21,10 @@ $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -map -ho
 > [0.023768] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 32 unfreed MPI handles :
 > [0.023768] [smpi_utils/INFO] 16 leaked handles of type MPI_Comm at coll-allreduce-with-leaks.c:23
 > [0.023768] [smpi_utils/INFO] 16 leaked handles of type MPI_Group at coll-allreduce-with-leaks.c:23
-> [0.023768] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 16 unfreed buffers : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] Leaked buffer of size 64, allocated in file coll-allreduce-with-leaks.c at line 28
-> [0.023768] [smpi_utils/INFO] (more buffer leaks hidden as you wanted to see only 10 of them)
-> [0.023768] [smpi_utils/INFO] Memory Usage: Simulated application allocated 2048 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 64 bytes, at coll-allreduce-with-leaks.c:27. It was called 16 times during the whole simulation.
+> [0.023768] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 32 unfreed buffers :
+> [0.023768] [smpi_utils/INFO] coll-allreduce-with-leaks.c:28 : leaked allocations of total size 1504, called 16 times, with minimum size 64 and maximum size 124
+> [0.023768] [smpi_utils/INFO] coll-allreduce-with-leaks.c:27 : leaked allocations of total size 1024, called 16 times, each with size 64
+> [0.023768] [smpi_utils/INFO] Memory Usage: Simulated application allocated 2528 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 124 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
diff --git a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
index de00827f29..796eb1b2ab 100644
--- a/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
+++ b/teshsuite/smpi/coll-allreduce-with-leaks/mc-coll-allreduce-with-leaks.tesh
@@ -8,32 +8,24 @@ $ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/../../../smpi_script/bin/smpirun -wrapper
 > [rank 3] -> Tremblay
 > [0.000000] [mc_safety/INFO] Check a safety property. Reduction is: dpor.
 > [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles :
-> [0.000000] [smpi_utils/INFO] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 4 unfreed buffers : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 128 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 16 bytes, at coll-allreduce-with-leaks.c:27. It was called 4 times during the whole simulation.
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers :
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
 >
 > [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed MPI handles :
-> [0.000000] [smpi_utils/INFO] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
+> [0.000000] [smpi_utils/WARNING] To get more information (location of allocations), compile your code with -trace-call-location flag of smpicc/f90
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Comm
 > [0.000000] [smpi_utils/INFO] 4 leaked handles of type MPI_Group
-> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 4 unfreed buffers : display types and addresses (n max) with --cfg=smpi/list-leaks:n.
-> Running smpirun with -wrapper "valgrind --leak-check=full" can provide more information
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Leaked buffer of size 16
-> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 128 bytes during its lifetime through malloc/calloc calls.
-> Largest allocation at once from a single process was 16 bytes, at coll-allreduce-with-leaks.c:27. It was called 4 times during the whole simulation.
+> [0.000000] [smpi_utils/INFO] Probable memory leaks in your code: SMPI detected 8 unfreed buffers :
+> [0.000000] [smpi_utils/INFO] leaked allocations of total size 152, called 8 times, with minimum size 16 and maximum size 28
+> [0.000000] [smpi_utils/INFO] Memory Usage: Simulated application allocated 152 bytes during its lifetime through malloc/calloc calls.
+> Largest allocation at once from a single process was 28 bytes, at coll-allreduce-with-leaks.c:28. It was called 1 times during the whole simulation.
 > If this is too much, consider sharing allocations for computation buffers.
 > This can be done automatically by setting --cfg=smpi/auto-shared-malloc-thresh to the minimum size wanted size (this can alter execution if data content is necessary)
 >
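
Not part of the patch: for context, a minimal standalone sketch of the leak pattern the updated test exercises (a duplicated communicator that is never freed, plus two calloc'ed buffers, one with a rank-dependent size, that are never released before MPI_Finalize). The maxlen value, the collective used and the printf message below are illustrative assumptions, and line numbers will not match the ones reported in the tesh output above.

/* Illustrative sketch of the leak pattern flagged by --cfg=smpi/display-allocs and --cfg=smpi/list-leaks. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  MPI_Comm dup;
  MPI_Comm_dup(MPI_COMM_WORLD, &dup); /* leaked handle: no matching MPI_Comm_free(&dup) */

  int maxlen = 1; /* assumption for illustration */
  int* sb = (int*)calloc(size * maxlen, sizeof(int));        /* leaked buffer, same size on every rank */
  int* rb = (int*)calloc(size * maxlen + rank, sizeof(int)); /* leaked buffer, size varies with rank */

  for (int i = 0; i < size * maxlen; ++i)
    sb[i] = rank * size + i;

  MPI_Allreduce(sb, rb, size * maxlen, MPI_INT, MPI_SUM, dup);
  printf("rank %d done\n", rank);

  /* Intentionally no free(sb), free(rb) or MPI_Comm_free(&dup). */
  MPI_Finalize();
  return EXIT_SUCCESS;
}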