+ 2. We cannot execute the MPI processes in parallel: only one of them can run
+ at any given time, because only one privatization segment can be mapped at
+ a time.
+
+In order to fix this, the standard solution is to move each MPI process into
+its own system process and use IPC to communicate with the simulator. One
+concern is the impact on performance and memory consumption:
+
+ - It would introduce a lot of context switches and IPC communications between
+ the MPI processes and the SimGrid simulator. However, each context switch
+ currently needs a `mmap` for SMPI privatization, which is costly as well
+ (it triggers a TLB flush).
+
+ - Instantiating a lot of system processes would consume more memory, which
+ could be a problem if we want to simulate a lot of MPI processes. Compiling
+ the MPI programs as static executables with a lightweight libc might help,
+ and we might want to support that. The SMPI processes should probably not
+ embed the whole SimGrid simulator and its dependencies (the C++ runtime,
+ etc.).
+
+We would need to modify the model-checker as well, as it can currently only
+manage one model-checked process. For the model-checker we can expect some
+benefits from this approach: if a process did not execute, we know its state
+did not change, so we do not need to snapshot it and compare its state again.
+
+Other solutions for this might include:
+
+ - Mapping each MPI process into the simulator process but in a different
+ symbol namespace (see `dlmopen`). Each process would have its own separate
+ instantiation of its libraries and would not share them.
+
+ - Instantiating each MPI process in a separate lightweight VM (for example
+ based on WebAssembly) inside the simulator process.