-@subsubsection contributing_todo_smpi SMPI
-
-#### Process-based privatization
-
-Currently, all the simulated processes live in the same process as the SimGrid
-simulator. The benefit is that we don't have to do context switches and IPC
-between the simulator and the processes.
-
-Since they share the same address space, a memory corruption in one simulated
-process can propagate to the other simulated processes and to the SimGrid
-simulator itself.
-
-Moreover, the current design for SMPI applications is to compile the MPI code
-normally and execute it once per simulated process in the same system process:
-this means that all the simulated MPI processes share the same virtual
-address space and, by default, the same global variables. This is not
-correct, as each MPI process is expected to use its own address space and have
-its own global variables. In order to fix this problem, we have an optional
-SMPI privatization feature which creates one instantiation of the executable's
-data segment per MPI process and maps the correct one (using `mmap`) at each
-context switch.
-
-This approach has many problems:
-
- 1. It is not completely safe. We only handle SMPI privatization for the global
-    variables in the executable's data segment. Shared objects are ignored, but
-    some of them contain global variables which would need to be privatized:
-
- - libsimgrid for example must not be privatized because it contains
- shared state for the simulator;
-
-  - libc must not be privatized for the same reason (even though some global
-    variables in the libc would arguably need to be privatized);
-
-  - if the executable uses a global variable defined in some shared object,
-    this global variable is instantiated in the executable (because of copy
-    relocation) and is privatized even when it should not be.
-
- 2. We cannot execute the MPI processes in parallel: only one of them can run
-    at a time, because a single privatization segment can be mapped at any
-    given moment.
-
-In order to fix this, the standard solution is to move each MPI process into
-its own system process and use IPC to communicate with the simulator. One
-concern would be the impact on performance and memory consumption:
-
- - It would introduce a lot of context switches and IPC communications between
-   the MPI processes and the SimGrid simulator. However, every context
-   switch currently needs a `mmap` for SMPI privatization, which is costly
-   as well (it triggers a TLB flush).
-
- - Instantiating a lot of processes might consume more memory, which might be
-   a problem if we want to simulate many MPI processes. Compiling MPI programs
-   as static executables with a lightweight libc might help, and we might want
-   to support that. The SMPI processes should probably not embed the whole
-   SimGrid simulator and its dependencies, the C++ runtime, etc.
-
-We would need to modify the model-checker as well, as it currently can only
-manage one model-checked process. For the model-checker, we can expect some
-benefits from this approach: if a process did not execute, we know that its
-state did not change, so we do not need to snapshot and compare it.
-
-Other solutions for this might include:
-
- - Mapping each MPI process inside the simulator process but in a different
-   symbol namespace (see `dlmopen`): each process would get its own
-   instantiation of its libraries and would not share them.
-
- - Instantiating each MPI process in a separate lightweight VM (for example,
-   based on WebAssembly) inside the simulator process.
-
-@subsubsection contributing_todo_mc Model-checker
-
-#### Overhaul the state comparison code