From 78fc5a7da3ddc9fcf9efea99ebb60e3dee401d67 Mon Sep 17 00:00:00 2001
From: Augustin Degomme
Date: Thu, 27 May 2021 16:47:48 +0200
Subject: [PATCH] document new option

---
 ChangeLog                           |  4 +++-
 docs/source/Configuring_SimGrid.rst | 17 +++++++++++++++++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/ChangeLog b/ChangeLog
index 44d6cd4fcb..08ed0cc07e 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -15,7 +15,9 @@ SMPI:
  - The default SMPI compiler flags are no more taken from the environment.
    They can be explicitly set through cmake parameters SMPI_C_FLAGS,
    SMPI_CXX_FLAGS, or SMPI_Fortran_FLAGS.
-
+ - New option: --cfg=smpi/finalization-barrier, which can be used to add
+   a barrier inside MPI_Finalize. This helps codes that clean up data
+   attached to a process while it is still used by other SMPI processes.
 
 LUA:
  - Lua platform files are deprecated. Their support will be dropped after v3.31.
diff --git a/docs/source/Configuring_SimGrid.rst b/docs/source/Configuring_SimGrid.rst
index a5497a827d..b59b43e505 100644
--- a/docs/source/Configuring_SimGrid.rst
+++ b/docs/source/Configuring_SimGrid.rst
@@ -150,6 +150,7 @@ Existing Configuration Items
 - **smpi/cpu-threshold:** :ref:`cfg=smpi/cpu-threshold`
 - **smpi/display-allocs:** :ref:`cfg=smpi/display-allocs`
 - **smpi/display-timing:** :ref:`cfg=smpi/display-timing`
+- **smpi/finalization-barrier:** :ref:`cfg=smpi/finalization-barrier`
 - **smpi/grow-injected-times:** :ref:`cfg=smpi/grow-injected-times`
 - **smpi/host-speed:** :ref:`cfg=smpi/host-speed`
 - **smpi/IB-penalty-factors:** :ref:`cfg=smpi/IB-penalty-factors`
@@ -1308,6 +1309,22 @@ Each collective operation can be manually selected with a
 
 .. TODO:: All available collective algorithms will be made available via the ``smpirun --help-coll`` command.
 
+.. _cfg=smpi/finalization-barrier:
+
+Add a barrier in MPI_Finalize
+.............................
+
+**Option** ``smpi/finalization-barrier`` **default:** off
+
+By default, SMPI processes are destroyed as soon as their code ends, that
+is, right after a successful call to MPI_Finalize returns. In some rare
+cases, data attached by the finished process to MPI objects that are still
+in use by the remaining processes may thus be destroyed too early.
+If your code misbehaves at finalization, e.g. with a segmentation fault,
+enabling this option adds an explicit MPI_Barrier(MPI_COMM_WORLD) inside
+MPI_Finalize, so that all processes terminate at almost the same point.
+This may increase the total simulated time by the cost of one barrier.
+
 .. _cfg=smpi/iprobe:
 
 Inject constant times for MPI_Iprobe
-- 
2.20.1
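
For readers of the new documentation section, here is an illustrative C sketch that is not part of the patch: it performs by hand what ``smpi/finalization-barrier`` is described as doing automatically, namely synchronizing all ranks before any of them tears down data that others might still use. The ``scratch`` buffer and the cleanup pattern are hypothetical.

.. code-block:: c

   /* Illustrative sketch only: the manual equivalent of the
    * smpi/finalization-barrier option.  The `scratch` buffer and the
    * cleanup pattern are hypothetical. */
   #include <mpi.h>
   #include <stdlib.h>

   int main(int argc, char *argv[])
   {
     MPI_Init(&argc, &argv);

     int rank;
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);

     /* Hypothetical buffer used during the computation. */
     double *scratch = malloc(1024 * sizeof(double));

     /* ... computation ... */

     /* Manual equivalent of --cfg=smpi/finalization-barrier:on: make sure
      * every rank has reached finalization before any of them releases
      * data that slower ranks might still be using. */
     MPI_Barrier(MPI_COMM_WORLD);

     MPI_Finalize();
     free(scratch);  /* no rank is still computing at this point */
     return 0;
   }

At simulation time, the option achieves the same effect without touching the application code, e.g. ``smpirun --cfg=smpi/finalization-barrier:on ...`` using SimGrid's usual boolean value syntax.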