if(enable_smpi)
SET(HAVE_SMPI 1)
- if("${CMAKE_SYSTEM}" MATCHES "Darwin|FreeBSD|Linux")
+ if("${CMAKE_SYSTEM}" MATCHES "Darwin|FreeBSD|Linux|SunOS")
SET(HAVE_PRIVATIZATION 1)
else()
message (STATUS "Warning: no support for SMPI automatic privatization on this platform")
- MPI/IO is now supported over the Storage API (no files are written or read; storage is simulated). All synchronous calls are supported.
- MPI interface is now const correct for input parameters
+Model-checker:
+ - Remove option 'model-check/record': Paths are now always recorded.
+
Fixed bugs (GH=GitHub; FG=FramaGit):
- FG#10: Can not use MSG_process_set_data from SMPI any more
- FG#11: Auto-restart actors forget their on_exit behavior
type the following on the command-line:
.. code-block:: shell
-
+
my_simulator --cfg=Item:Value (other arguments)
Several ``--cfg`` command line arguments can naturally be used. If you
file:
.. code-block:: xml
-
+
<config>
<prop id="Item" value="Value"/>
</config>
with :cpp:func:`simgrid::s4u::Engine::set_config` or :cpp:func:`MSG_config`.
.. code-block:: cpp
-
+
#include <simgrid/s4u.hpp>
int main(int argc, char *argv[]) {
simgrid::s4u::Engine e(&argc, argv);
-
+
e.set_config("Item:Value");
-
+
// Rest of your code
}
.. _options_list:
-
+
Existing Configuration Items
----------------------------
- **model-check/hash:** :ref:`cfg=model-checker/hash`
- **model-check/max-depth:** :ref:`cfg=model-check/max-depth`
- **model-check/property:** :ref:`cfg=model-check/property`
-- **model-check/record:** :ref:`cfg=model-check/record`
- **model-check/reduction:** :ref:`cfg=model-check/reduction`
- **model-check/replay:** :ref:`cfg=model-check/replay`
- **model-check/send-determinism:** :ref:`cfg=model-check/send-determinism`
models for all existing resources.
- ``network/model``: specify the used network model. Possible values:
-
+
- **LV08 (default one):** Realistic network analytic model
(slow-start modeled by multiplying latency by 13.01, bandwidth by
0.97; bottleneck sharing uses a payload of S=20537 for evaluating
RTT). Described in `Accuracy Study and Improvement of Network
Simulation in the SimGrid Framework
- <http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_.
+ <http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf>`_.
- **Constant:** Simplistic network model where all communication
takes a constant time (one second). This model provides the lowest
realism, but is (marginally) faster.
<ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/RR/RR2002/RR2002-40.ps.gz>`_.
- **Reno/Reno2/Vegas:** Models from Steven H. Low using lagrange_solve instead of
lmm_solve (experts only; check the code for more info).
- - **NS3** (only available if you compiled SimGrid accordingly):
+ - **NS3** (only available if you compiled SimGrid accordingly):
Use the packet-level network
simulators as network models (see :ref:`pls_ns3`).
This model can be :ref:`further configured <options_pls>`.
-
+
- ``cpu/model``: specify the used CPU model. We have only one model
for now:
allow parallel tasks because these beasts need some collaboration
between the network and CPU model. That is why ptask_07 is used by
default when using SimDag.
-
+
- **default:** Default host model. Currently, CPU:Cas01 and
network:LV08 (with cross traffic enabled)
- **compound:** Host model that is automatically chosen if
configurations.
- items ``network/optim`` and ``cpu/optim`` (both default to 'Lazy'):
-
+
- **Lazy:** Lazy action management (partial invalidation in lmm +
heap in action remaining).
- **TI:** Trace integration. Highly optimized mode when using
now).
- **Full:** Full update of remaining and variables. Slow but may be
useful when debugging.
-
+
- items ``network/maxmin-selective-update`` and
``cpu/maxmin-selective-update``: configure whether the underlying
should be lazily updated or not. It should have no impact on the
and you should use the last one, which is the maximal size.
.. code-block:: shell
-
+
cat /proc/sys/net/ipv4/tcp_rmem # gives the sender window
cat /proc/sys/net/ipv4/tcp_wmem # gives the receiver window
.. _cfg=network/bandwidth-factor:
.. _cfg=network/latency-factor:
.. _cfg=network/weight-S:
-
+
Correcting Important Network Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
computations. More details in @ref plugin_energy.
- **link_energy:** keeps track of the energy dissipated by
communications. More details in @ref SURF_plugin_energy.
- - **host_load:** keeps track of the computational load.
+ - **host_load:** keeps track of the computational load.
More details in @ref plugin_load.
.. _options_modelchecking:
-
+
Configuring the Model-Checking
------------------------------
be executed using the simgrid-mc wrapper:
.. code-block:: shell
-
+
simgrid-mc ./my_program
Safety properties are expressed as assertions using the function
:cpp:func:`void MC_assert(int prop)`.
.. _cfg=model-check/property:
-
+
Specifying a liveness property
..............................
.. code-block:: shell
-
+
simgrid-mc ./my_program --cfg=model-check/property:<filename>
.. _cfg=model-check/checkpoint:
-
+
Going for Stateful Verification
...............................
implementation was found to be buggy and this option is not as useful as
it could be. For this reason, it is currently disabled by default.
-.. _cfg=model-check/record:
.. _cfg=model-check/replay:
-Record/Replay of Verification
-.............................
+Replaying buggy execution paths out of the model-checker
+........................................................
-As the model-checker keeps jumping at different places in the execution graph,
-it is difficult to understand what happens when trying to debug an application
-under the model-checker. Event the output of the program is difficult to
-interpret. Moreover, the model-checker does not behave nicely with advanced
-debugging tools such as valgrind. For those reason, to identify a trajectory
-in the execution graph with the model-checker and replay this trajcetory and
-without the model-checker black-magic but with more standard tools
-(such as a debugger, valgrind, etc.). For this reason, Simgrid implements an
-experimental record/replay functionnality in order to record a trajectory with
-the model-checker and replay it without the model-checker.
+Debugging the problems reported by the model-checker is challenging: First, the
+application under verification cannot be debugged with gdb because the
+model-checker already traces it. Then, the model-checker may explore several
+execution paths before encountering the issue, making it very difficult to
+understand the outputs. Fortunately, SimGrid provides the execution path leading
+to any reported issue so that you can replay this path out of the model-checker,
+enabling the use of classical debugging tools.
When the model-checker finds an interesting path in the application
execution graph (where a safety or liveness property is violated), it
-can generate an identifier for this path. To enable this behavious the
-``model-check/record`` must be set to **yes**, which is not the case
-by default.
-
-Here is an example of output:
+generates an identifier for this path. Here is an example of output:
.. code-block:: shell
[ 0.000000] (0:@) *** PROPERTY NOT VALID ***
[ 0.000000] (0:@) **************************
[ 0.000000] (0:@) Counter-example execution trace:
+ [ 0.000000] (0:@) [(1)Tremblay (app)] MC_RANDOM(3)
+ [ 0.000000] (0:@) [(1)Tremblay (app)] MC_RANDOM(4)
[ 0.000000] (0:@) Path = 1/3;1/4
- [ 0.000000] (0:@) [(1)Tremblay (app)] MC_RANDOM(3)
- [ 0.000000] (0:@) [(1)Tremblay (app)] MC_RANDOM(4)
[ 0.000000] (0:@) Expanded states = 27
[ 0.000000] (0:@) Visited states = 68
[ 0.000000] (0:@) Executed transitions = 46
-This path can then be replayed outside of the model-checker (and even
-in non-MC build of simgrid) by setting the ``model-check/replay`` item
-to the given path. The other options should be the same (but the
-model-checker should be disabled).
-
-The format and meaning of the path may change between different
-releases so the same release of Simgrid should be used for the record
-phase and the replay phase.
+The interesting line is ``Path = 1/3;1/4``, which means that you should use
+``--cfg=model-check/replay:1/3;1/4`` to replay your application on the buggy
+execution path. The other options should be the same (but the model-checker
+should be disabled). Note that the format and meaning of the path may change
+between different releases.
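+
+For example, with the counter-example above, the replay would be launched as
+follows (reusing the ``my_simulator`` placeholder from the beginning of this
+page; note that the semicolon must be quoted so that your shell does not
+split the command):
+
+.. code-block:: shell
+
+   my_simulator --cfg=model-check/replay:'1/3;1/4' (other arguments)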
Configuring the User Code Virtualization
----------------------------------------
raw implementation.
|br| Install the relevant library (e.g. with the
libboost-contexts-dev package on Debian/Ubuntu) and recompile
- SimGrid.
+ SimGrid.
- **raw:** amazingly fast factory using a context switching mechanism
of our own, directly implemented in assembly (only available for x86
and amd64 platforms for now) and without any unneeded system call.
.. _cfg=contexts/nthreads:
.. _cfg=contexts/parallel-threshold:
.. _cfg=contexts/synchro:
-
+
Running User Code in Parallel
.............................
your machine for no good reason. You probably prefer the other less
eager schemes.
-
Configuring the Tracing
-----------------------
- SMPI simulator and traces for a space/time view:
.. code-block:: shell
-
+
smpirun -trace ...
The ``-trace`` parameter for the smpirun script runs the simulation
- Add the contents of a textual file on top of the trace file as comment:
.. code-block:: shell
-
+
--cfg=tracing/comment-file:my_file_with_additional_information.txt
Please use these two parameters (for comments) to make reproducible
application, the variable ``smpi/simulate-computation`` should be set
to no. This option just ignores the timings in your simulation; it
still executes the computations itself. If you want to stop SMPI from
-doing that, you should check the SMPI_SAMPLE macros, documented in
+doing that, you should check the SMPI_SAMPLE macros, documented in
Section :ref:`SMPI_adapting_speed`.
+------------------------------------+-------------------------+-----------------------------+
| Solution | Computations executed? | Computations simulated? |
-+====================================+=========================+=============================+
++====================================+=========================+=============================+
| --cfg=smpi/simulate-computation:no | Yes | Never |
+------------------------------------+-------------------------+-----------------------------+
| --cfg=smpi/cpu-threshold:42 | Yes, in all cases | If it lasts over 42 seconds |
http://simgrid.gforge.inria.fr/contrib/smpi-saturation-doc.html
.. _cfg=smpi/display-timing:
-
+
Reporting Simulation Time
.........................
increase the latency, i.e., values larger than or equal to 1 are valid here.
.. _cfg=smpi/papi-events:
-
+
Trace hardware counters with PAPI
.................................
files (See Section :ref:`tracing_tracing_options`).
.. warning::
-
+
This feature currently requires superuser privileges, as registers
are queried. Only use this feature with code you trust! Call
smpirun for instance via ``smpirun -wrapper "sudo "
use. Example:
.. code-block:: shell
-
+
ldd allpairf90
...
libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007fbb4d91b000)
Then, it can be deallocated by calling SMPI_SHARED_FREE(mem).
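A minimal sketch of this allocation pattern (the buffer size and variable
names are only illustrative, and the ``smpi/smpi.h`` include path is an
assumption that may differ on your installation):

.. code-block:: cpp

   #include <smpi/smpi.h>

   static void allocate_shared_matrix() {
     /* All ranks transparently share the same physical pages for this
        buffer, which keeps the simulated memory footprint small. */
     double* mat = (double*)SMPI_SHARED_MALLOC(1000 * 1000 * sizeof(double));
     /* ... computations whose numerical results do not matter ... */
     SMPI_SHARED_FREE(mat);
   }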
When smpi/shared-malloc:global is used, the memory consumption problem
-is solved, but it may induce too much load on the kernel's pages table.
+is solved, but it may induce too much load on the kernel's page table.
In this case, you should use huge pages so that we create only one
entry per MB of malloced data instead of one entry per 4 kB.
To activate this, you must mount a hugetlbfs on your system and allocate
at least one huge page:
.. code-block:: shell
-
+
mkdir /home/huge
sudo mount none /home/huge -t hugetlbfs -o rw,mode=0777
sudo sh -c 'echo 1 > /proc/sys/vm/nr_hugepages' # echo more if you need more
to issue if your application contains such a loop:
.. code-block:: cpp
-
+
while(MPI_Wtime() < some_time_bound) {
/* some tests, with no communication nor computation */
}
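One way out (assuming the ``smpi/wtime`` configuration item documented in
this section; the value shown is only an example) is to keep a small non-null
value, so that each call to MPI_Wtime() advances the simulated clock and the
loop above eventually terminates:

.. code-block:: shell

   my_simulator --cfg=smpi/wtime:10ns (other arguments)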
set variable simgrid::simix::breakpoint = 3.1416
.. _cfg=verbose-exit:
-
+
Behavior on Ctrl-C
..................
runtime. However, it obviously becomes impossible to get any debug
info from SimGrid if something goes wrong.
-enable_documentation (ON/off)
- Generates the documentation pages.
+enable_documentation (on/OFF)
+ Generates the documentation pages. Building the documentation is not
+ as easy as it used to be, and you should probably use the online
+ version for now.
enable_java (on/OFF)
Generates the java bindings of SimGrid.
optional in this tutorial, it is not installed to reduce the image
size.
-The code template is available under ``/source/simgrid-template-s4u.git``
+The code template is available under ``/source/simgrid-template-s4u.git``
in the image. You should copy it to your working directory and
-recompile it when you first log in:
+recompile it when you first log in:
.. code-block:: shell
sudo apt install simgrid pajeng cmake g++ vite
-For R analysis of the produced traces, you may want to install R,
-and the `pajengr<https://github.com/schnorr/pajengr#installation/>`_ package.
+For R analysis of the produced traces, you may want to install R,
+and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package.
.. code-block:: shell
# (exporting SimGrid_PATH is only needed if SimGrid is installed in a non-standard path)
export SimGrid_PATH=/where/to/simgrid
-
+
git clone https://framagit.org/simgrid/simgrid-template-s4u.git
cd simgrid-template-s4u/
- cmake .
+ cmake .
make
If you struggle with the compilation, then you should double check
This can be done with the following platform file, which considers the
simulated platform as a graph of hosts and network links.
-
+
.. literalinclude:: /tuto_smpi/3hosts.xml
:language: xml
sudo apt install simgrid pajeng make gcc g++ gfortran vite
-For R analysis of the produced traces, you may want to install R,
-and the `pajengr<https://github.com/schnorr/pajengr#installation/>`_ package.
+For R analysis of the produced traces, you may want to install R,
+and the `pajengr <https://github.com/schnorr/pajengr#installation/>`_ package.
.. code-block:: shell
the documentation is up-to-date.
Lab 3: Execution Sampling on Matrix Multiplication example
--------------------------------
+----------------------------------------------------------
The second method to speed up simulations is to sample the computation
parts in the code. This means that the person doing the simulation
.. literalinclude:: /tuto_smpi/gemm_mpi.cpp
:language: c
:lines: 4-19
-
.. code-block:: shell
$ smpicc -O3 gemm_mpi.cpp -o gemm
$ time smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display-timing:yes --cfg=smpi/running-power:1000000000 ./gemm
-
+
This should end quite quickly, as the size of each matrix is only 1000x1000.
But what happens if we want to simulate larger runs?
Replace the size by 2000, 3000, and try again.
Lab 4: Memory folding on large allocations
--------------------------------
+------------------------------------------
Another issue that can be encountered when simulating with SMPI is the lack of memory.
Indeed, we execute all MPI processes on a single node, which can lead to crashes.
In this mode, your application is actually executed. Every computation
occurs for real while every communication is simulated. In addition,
the executions are automatically benchmarked so that their timings can
-be applied within the simulator.
+be applied within the simulator.
SMPI can also go offline by replaying a trace. :ref:`Trace replay
<SMPI_offline>` is usually way faster than online simulation (because
.. _SMPI_use_colls:
-................................
+................................
Simulating Collective Operations
................................
- **ompi:** default selection logic of OpenMPI (version 3.1.2)
- **mpich**: default selection logic of MPICH (version 3.3b)
- **mvapich2**: selection logic of MVAPICH2 (version 1.9) tuned
- on the Stampede cluster
+ on the Stampede cluster
- **impi**: preliminary version of an Intel MPI selector (version
4.1.3, also tuned for the Stampede cluster). Due to the closed-source
nature of Intel MPI, some of the algorithms described in the
- documentation are not available, and are replaced by mvapich ones.
+ documentation are not available, and are replaced by mvapich ones.
- **default**: legacy algorithms used in the earlier days of
SimGrid. Do not use for serious performance studies.
-.. todo:: default should not even exist.
+.. todo:: default should not even exist.
....................
Available Algorithms
MPI_Alltoall
^^^^^^^^^^^^
-Most of these are best described in `STAR-MPI <http://www.cs.arizona.edu/~dkl/research/papers/ics06.pdf>`_.
+Most of these are best described in `STAR-MPI's white paper <http://www.cs.fsu.edu/~xyuan/paper/06ics.pdf>`_.
- default: naive one, by default
- ompi: use openmpi selector for the alltoall operations
- mpich: use mpich selector for the alltoall operations
- mvapich2: use mvapich2 selector for the alltoall operations
- impi: use intel mpi selector for the alltoall operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: Described by Bruck et al. in `this paper <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- - 2dmesh: organizes the nodes as a two dimensional mesh, and perform allgather
+ - 2dmesh: organizes the nodes as a two-dimensional mesh, and performs allgather
along the dimensions
- 3dmesh: adds a third dimension to the previous algorithm
- - rdb: recursive doubling: extends the mesh to a nth dimension, each one
+ - rdb: recursive doubling: extends the mesh to an nth dimension, each one
containing two nodes
- pair: pairwise exchange, only works for power of 2 procs, size-1 steps,
each process sends and receives from the same process at each step
- mpich: use mpich selector for the alltoallv operations
- mvapich2: use mvapich2 selector for the alltoallv operations
- impi: use intel mpi selector for the alltoallv operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- bruck: same as alltoall
- pair: same as alltoall
- pair_light_barrier: same as alltoall
- mpich: use mpich selector for the barrier operations
- mvapich2: use mvapich2 selector for the barrier operations
- impi: use intel mpi selector for the barrier operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_linear: all processes send to root
- ompi_two_procs: special case for two processes
- ompi_bruck: nsteps = sqrt(size), at each step, exchange data with rank-2^k and rank+2^k
- mpich: use mpich selector for the scatter operations
- mvapich2: use mvapich2 selector for the scatter operations
- impi: use intel mpi selector for the scatter operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
- - ompi_basic_linear: basic linear scatter
+ - automatic (experimental): use an automatic self-benchmarking algorithm
+ - ompi_basic_linear: basic linear scatter
- ompi_binomial: binomial tree scatter
- mvapich2_two_level_direct: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a basic linear inter node stage. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
- mvapich2_two_level_binomial: SMP aware algorithm, with an intra-node stage (default set to mpich selector), and then a binomial phase. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
- mpich: use mpich selector for the reduce operations
- mvapich2: use mvapich2 selector for the reduce operations
- impi: use intel mpi selector for the reduce operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- binomial: uses a binomial tree
- flat_tree: uses a flat tree
- - NTSL: Non-topology-specific pipelined linear-bcast function
+ - NTSL: Non-topology-specific pipelined linear-bcast function
0->1, 1->2, 2->3, ..., ->last node: in a pipeline fashion, with segments
of 8192 bytes
- scatter_gather: scatter then gather
- ompi_chain: openmpi reduce algorithms are built on the same basis, but the
topology is generated differently for each flavor
- chain = chain with spacing of size/2, and segment size of 64KB
- - ompi_pipeline: same with pipeline (chain with spacing of 1), segment size
+ chain = chain with spacing of size/2, and segment size of 64KB
+ - ompi_pipeline: same with pipeline (chain with spacing of 1), segment size
depends on the communicator size and the message size
- ompi_binary: same with binary tree, segment size of 32KB
- - ompi_in_order_binary: same with binary tree, enforcing order on the
+ - ompi_in_order_binary: same with binary tree, enforcing order on the
operations
- - ompi_binomial: same with binomial algo (redundant with default binomial
+ - ompi_binomial: same with binomial algo (redundant with default binomial
one in most cases)
- ompi_basic_linear: basic algorithm, each process sends to root
- mvapich2_knomial: k-nomial algorithm. Default factor is 4 (mvapich2 selector adapts it through tuning)
- mvapich2_two_level: SMP-aware reduce, with default set to mpich both for intra and inter communicators. Use mvapich2 selector to change these to tuned algorithms for Stampede cluster.
- - rab: `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_'s reduce algorithm
+ - rab: `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_'s reduce algorithm
MPI_Allreduce
^^^^^^^^^^^^^
- mpich: use mpich selector for the allreduce operations
- mvapich2: use mvapich2 selector for the allreduce operations
- impi: use intel mpi selector for the allreduce operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- lr: logical ring reduce-scatter then logical ring allgather
- rab1: variations of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: reduce_scatter then allgather
- rab2: variations of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: alltoall then allgather
- - rab_rsag: variation of the <a href="https://fs.hlrs.de/projects/par/mpi//myreduce.html">Rabenseifner</a> algorithm: recursive doubling
- reduce_scatter then recursive doubling allgather
+ - rab_rsag: variation of the `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ algorithm: recursive doubling
+ reduce_scatter then recursive doubling allgather
- rdb: recursive doubling
- - smp_binomial: binomial tree with smp: binomial intra
+ - smp_binomial: binomial tree with smp: binomial intra
SMP reduce, inter reduce, inter broadcast then intra broadcast
- smp_binomial_pipeline: same with segment size = 4096 bytes
- - smp_rdb: intra: binomial allreduce, inter: Recursive
+ - smp_rdb: intra: binomial allreduce, inter: Recursive
doubling allreduce, intra: binomial broadcast
- - smp_rsag: intra: binomial allreduce, inter: reduce-scatter,
+ - smp_rsag: intra: binomial allreduce, inter: reduce-scatter,
inter:allgather, intra: binomial broadcast
- - smp_rsag_lr: intra: binomial allreduce, inter: logical ring
+ - smp_rsag_lr: intra: binomial allreduce, inter: logical ring
reduce-scatter, logical ring inter:allgather, intra: binomial broadcast
- smp_rsag_rab: intra: binomial allreduce, inter: rab
reduce-scatter, rab inter:allgather, intra: binomial broadcast
- mpich: use mpich selector for the reduce_scatter operations
- mvapich2: use mvapich2 selector for the reduce_scatter operations
- impi: use intel mpi selector for the reduce_scatter operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- ompi_basic_recursivehalving: recursive halving version from OpenMPI
- ompi_ring: ring version from OpenMPI
- mpich_pair: pairwise exchange version from MPICH
- mpich: use mpich selector for the allgather operations
- mvapich2: use mvapich2 selector for the allgather operations
- impi: use intel mpi selector for the allgather operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- 2dmesh: see alltoall
- 3dmesh: see alltoall
- bruck: Described by Bruck et al. in `Efficient algorithms for all-to-all communications in multiport message-passing systems <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=642949>`_
- GB: Gather - Broadcast (uses tuned version if specified)
- - loosely_lr: Logical Ring with grouping by core (hardcoded, default
+ - loosely_lr: Logical Ring with grouping by core (hardcoded, default
processes/node: 4)
- NTSLR: Non Topology Specific Logical Ring
- NTSLR_NB: Non Topology Specific Logical Ring, Non Blocking operations
- rdb: see alltoall
- rhv: only power of 2 number of processes
- ring: see alltoall
- - SMP_NTS: gather to root of each SMP, then every root of each SMP node
- post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
+ - SMP_NTS: gather to root of each SMP, then every root of each SMP node
+ post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
using logical ring algorithm (hardcoded, default processes/SMP: 8)
- - smp_simple: gather to root of each SMP, then every root of each SMP node
- post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
+ - smp_simple: gather to root of each SMP, then every root of each SMP node
+ post INTER-SMP Sendrecv, then do INTRA-SMP Bcast for each receiving message,
using simple algorithm (hardcoded, default processes/SMP: 8)
- spreading_simple: from node i, order of communications is i -> i + 1, i ->
i + 2, ..., i -> (i + p -1) % P
- - ompi_neighborexchange: Neighbor Exchange algorithm for allgather.
+ - ompi_neighborexchange: Neighbor Exchange algorithm for allgather.
Described by Chen et al. in `Performance Evaluation of Allgather
Algorithms on Terascale Linux Cluster with Fast Ethernet <http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1592302>`_
- mvapich2_smp: SMP aware algorithm, performing intra-node gather, inter-node allgather with one process/node, and bcast intra-node
- mpich: use mpich selector for the allgatherv operations
- mvapich2: use mvapich2 selector for the allgatherv operations
- impi: use intel mpi selector for the allgatherv operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- GB: Gatherv - Broadcast (uses tuned version if specified, but only for Bcast, gatherv is not tuned)
- pair: see alltoall
- ring: see alltoall
- mpich: use mpich selector for the bcast operations
- mvapich2: use mvapich2 selector for the bcast operations
- impi: use intel mpi selector for the bcast operations
- - automatic (experimental): use an automatic self-benchmarking algorithm
+ - automatic (experimental): use an automatic self-benchmarking algorithm
- arrival_pattern_aware: root exchanges with the first process to arrive
- arrival_pattern_aware_wait: same with slight variation
- binomial_tree: binomial tree exchange
- SMP_linear: linear algorithm with 8 cores/SMP
- ompi_split_bintree: binary tree algorithm from OpenMPI, with message split in 8192 bytes pieces
- ompi_pipeline: pipeline algorithm from OpenMPI, with message split in 128KB pieces
- - mvapich2_inter_node: Inter node default mvapich worker
+ - mvapich2_inter_node: Inter node default mvapich worker
- mvapich2_intra_node: Intra node default mvapich worker
- mvapich2_knomial_intra_node: k-nomial intra node default mvapich worker. default factor is 4.
.. warning:: This is still very experimental.
-An automatic version is available for each collective (or even as a selector). This specific
-version will loop over all other implemented algorithm for this particular collective, and apply
-them while benchmarking the time taken for each process. It will then output the quickest for
-each process, and the global quickest. This is still unstable, and a few algorithms which need
+An automatic version is available for each collective (or even as a selector). This specific
+version will loop over all other implemented algorithms for this particular collective, and apply
+them while benchmarking the time taken for each process. It will then output the quickest for
+each process, and the global quickest. This is still unstable, and a few algorithms which need a
specific number of nodes may crash.
Adding an algorithm
and compare collective algorithms, you should set the
``tracing/smpi/internals`` configuration item to 1 instead of 0.
-Here are examples of two alltoall collective algorithms runs on 16 nodes,
+Here are examples of two alltoall collective algorithm runs on 16 nodes,
the first one with a ring algorithm, the second with a pairwise one.
.. image:: /img/smpi_simgrid_alltoall_ring_16.png
:align: center
-
+
Alltoall on 16 Nodes with the Ring Algorithm.
.. image:: /img/smpi_simgrid_alltoall_pair_16.png
:align: center
-
+
Alltoall on 16 Nodes with the Pairwise Algorithm.
-------------------------
....................
Our coverage of the interface is very decent, but still incomplete.
-Given the size of the MPI standard, we may well never manage to
+Given the size of the MPI standard, we may well never manage to
implement absolutely all existing primitives. Currently, we have
almost no support for I/O primitives, but we still pass a very large
portion of the MPICH coverage tests.
our implementation was not robust enough to be used in production, so
it was removed at some point. Currently, SMPI comes with two
privatization mechanisms that you can :ref:`select at runtime
-<options_smpi_privatization>`_. The dlopen approach is used by
+<cfg=smpi/privatization>`. The dlopen approach is used by
default as it is much faster and still very robust. The mmap approach
is older and proves to be slower.
then this macro can dramatically shrink your memory consumption. For example,
that will be very beneficial to a matrix multiplication code, as all blocks will
be stored on the same area. Of course, the resulting computations will be useless,
-but you can still study the application behavior this way.
+but you can still study the application behavior this way.
Naturally, this won't work if your code is data-dependent. For example, a Jacobi
iterative computation depends on the result computed by the code to detect
SMPI_SAMPLE_GLOBAL. Of course, none of this will work if the execution
time of your loop iterations is not stable.
-This feature is demoed by the example file
+This feature is demoed by the example file
`examples/smpi/NAS/ep.c <https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/NAS/ep.c>`_
.............................
precious for that). Then, try to modify your model (of the platform,
of the collective operations) to reduce the most prominent differences.
-If the discrepancies come from the computing time, try adapting the
+If the discrepancies come from the computing time, try adapting the
``smpi/host-speed``: reduce it if your simulation runs faster than in
reality. If the errors come from the communication, then you need to
fiddle with your platform file.
explicitly told what compiler to use, as follows:
.. code-block:: shell
-
+
SMPI_PRETEND_CC=1 ./configure CC=smpicc # here come the other configure parameters
make
Although SMPI is often used for :ref:`online simulation
<SMPI_online>`, where the application is executed for real, you can
-also go for offline simulation through trace replay.
+also go for offline simulation through trace replay.
SimGrid uses time-independent traces, in which each actor is given a
script of the actions to do sequentially. These trace files can
.. code-block:: shell
- $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
+ $ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. The file names don't match the MPI ranks, but
""""""""""""""""""""
- Some features are missing in the Maestro future implementation
- (`simgrid::kernel::Future`, `simgrid::kernel::Promise`)
- could be extended to support additional features:
- `when_any`, `shared_future`, etc.
+ (`simgrid::kernel::Future`, `simgrid::kernel::Promise`)
+ could be extended to support additional features:
+ `when_any`, `shared_future`, etc.
- The corresponding feature might then be implemented in the user process
futures (`simgrid::simix::Future`).
<br/>
.. _howto:
-
+
Modeling Hints
##############
.. _howto_churn:
Modeling Churn (e.g., in P2P)
-****************************
+*****************************
One of the biggest challenges in P2P settings is to cope with the
churn, meaning that resources keep appearing and disappearing. In
<br/>
.. _platform_reference:
-
+
DTD Reference
*************
-Your platform description should follow the specification presented in the
-`simgrid.dtd <https://simgrid.org/simgrid.dtd>`_ DTD file. The same DTD is used for both platform and deployment files.
+Your platform description should follow the specification presented in the
+`simgrid.dtd <https://simgrid.org/simgrid.dtd>`_ DTD file. The same DTD is used for both platform and deployment files.
+
+-------------------------------------------------------------------------------
.. _pf_tag_config:
-------------------------------------------------------------------
<config>
-------------------------------------------------------------------
+--------
Adding configuration flags directly into the platform file becomes particularly useful when the realism of the described
platform depends on some specific flags. For example, this could help you to finely tune SMPI. Almost all
<!-- The rest of your platform -->
</platform>
-|hr|
-
+-------------------------------------------------------------------------------
+
.. _pf_tag_host:
-------------------------------------------------------------------
<host>
-------------------------------------------------------------------
+------
A host is the computing resource on which an actor can run. See :cpp:class:`simgrid::s4u::Host`.
5 1
LOOPAFTER 5
- - At time t = 1, half of the host computational power (0.5 means 50%) is used to process some background load, hence
- only 50% of this initial power remains available to your own simulation.
+ - At time t = 1, half of the host computational power (0.5 means 50%) is used to process some background load, hence
+ only 50% of this initial power remains available to your own simulation.
- At time t = 2, the available power drops to 20% of the initial value.
- At time t = 5, the host can compute at full speed again.
- At time t = 10, the profile is reset (as we are 5 seconds after the last event). Then the available speed will drop
:``pstate``: Initial pstate (default: 0, the first one).
See :ref:`howto_dvfs`.
-|hr|
-
+-------------------------------------------------------------------------------
+
.. _pf_tag_link:
-------------------------------------------------------------------
<link>
-------------------------------------------------------------------
+------
SimGrid links usually represent one-hop network connections (see :cpp:class:`simgrid::s4u::Link`), i.e., a single wire.
They can also be used to abstract a larger network interconnect, e.g., the entire transcontinental network, into a
names are suffixed with "_UP" and "_DOWN"). Then you must specify
which direction gets actually used when referring to that link in a
:ref:`pf_tag_link_ctn`.
-
+
:``bandwidth_file``: File containing the bandwidth profile.
Almost every line of such files describes timed events as ``date
bandwidth`` (in bytes per second).
Almost every line of such files describes timed events as ``date
latency`` (in seconds).
Example:
-
+
.. code-block:: python
-
+
1.0 0.001
3.0 0.1
LOOPAFTER 5.0
:``state_file``: File containing the state profile. See :ref:`pf_tag_host`.
-|hr|
-
+-------------------------------------------------------------------------------
+
.. _pf_tag_link_ctn:
-------------------------------------------------------------------
<link_ctn>
-------------------------------------------------------------------
+----------
An element in a route, representing a previously defined link.
-**Parent tags:** :ref:`pf_tag_route` |br|
+**Parent tags:** :ref:`pf_tag_route` |br|
**Children tags:** none |br|
**Attributes:**
(with ``DOWN``) of the link. This is only valid if the
link has ``sharing=SPLITDUPLEX``.
-|hr|
+-------------------------------------------------------------------------------
.. _pf_tag_peer:
-------------------------------------------------------------------
<peer>
-------------------------------------------------------------------
+------
This tag represents a peer, as in Peer-to-Peer (P2P) networks. It is
handy to model situations where hosts have an asymmetric
:``coordinates``: Coordinates of the gateway for this peer.
The communication latency between a host A = (xA,yA,zA) and a host B = (xB,yB,zB) is computed as follows:
-
+
latency = sqrt( (xA-xB)² + (yA-yB)² ) + zA + zB
See the documentation of
:``state_file``: File containing the state profile.
See the full description in :ref:`pf_tag_host`
-|hr|
+-------------------------------------------------------------------------------
.. _pf_tag_platform:
-------------------------------------------------------------------
<platform>
-------------------------------------------------------------------
+----------
**Parent tags:** none (this is the root tag of every file) |br|
**Children tags:** :ref:`pf_tag_config` (must come first),
:ref:`pf_tag_cluster`, :ref:`pf_tag_cabinet`, :ref:`pf_tag_peer`,
:ref:`pf_tag_zone`, :ref:`pf_tag_trace`, :ref:`pf_tag_trace_connect` |br|
-**Attributes:**
+**Attributes:**
:``version``: Version of the DTD, describing the whole XML format.
This versioning allows future evolutions, even if we
upgrade most of the past platform files to the most recent
formalism.
-|hr|
-
+-------------------------------------------------------------------------------
+
.. _pf_tag_prop:
-------------------------------------------------------------------
<prop>
-------------------------------------------------------------------
+------
This tag can be used to attach user-defined properties to some
platform elements. Both the name and the value can be any string of
:``id``: Name of the defined property.
:``value``: Value of the defined property.
-|hr|
-
+-------------------------------------------------------------------------------
+
.. _pf_tag_route:
-------------------------------------------------------------------
<route>
-------------------------------------------------------------------
+-------
-A path between two network locations, composed of several :ref:`pf_tag_link`s.
+A path between two network locations, composed of several :ref:`pf_tag_link`s.
-**Parent tags:** :ref:`pf_tag_zone` |br|
+**Parent tags:** :ref:`pf_tag_zone` |br|
**Children tags:** :ref:`pf_tag_link_ctn` |br|
**Attributes:**
are defining the route ``dst -> src`` at the same
time. Valid values: ``yes``, ``no``, ``YES``, ``NO``.
-|hr|
+-------------------------------------------------------------------------------
.. _pf_tag_router:
-------------------------------------------------------------------
<router>
------------------------------------------------------------------
+--------
:``id``: Router name.
No other host or router may have the same name over the whole platform.
-:``coordinates``: Vivaldi coordinates. See :ref:`pf_tag_peer`.
+:``coordinates``: Vivaldi coordinates. See :ref:`pf_tag_peer`.
-|hr|
+-------------------------------------------------------------------------------
.. _pf_tag_zone:
-------------------------------------------------------------------
<zone>
-------------------------------------------------------------------
+------
A networking zone is an area in which elements are located. See :cpp:class:`simgrid::s4u::Zone`.
**Parent tags:** :ref:`pf_tag_platform`, :ref:`pf_tag_zone` (only internal nodes, i.e., zones
containing only inner zones or clusters but no basic
-elements such as host or peer) |br|
+elements such as host or peer) |br|
**Children tags (if internal zone):** :ref:`pf_tag_cluster`, :ref:`pf_tag_link`, :ref:`pf_tag_zone` |br|
**Children tags (if leaf zone):** :ref:`pf_tag_host`, :ref:`pf_tag_link`, :ref:`pf_tag_peer` |br|
**Attributes:**
:``id``: Zone name.
No other zone may have the same name over the whole platform.
-:``routing``: Routing algorithm to use.
+:``routing``: Routing algorithm to use.
.. |br| raw:: html
<br />
-
-.. |hr| raw:: html
-
- <hr />
p Testing a simple master/worker example application
-$ $SG_TEST_EXENV ./app-masterworker/app-masterworker ${platfdir}/multicore_machine.xml ${srcdir}/app-masterworker-multicore_d.xml --cfg=cpu/model:Cas01 --cfg=cpu/optim:Full "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ./app-masterworker/app-masterworker ${platfdir}/multicore_machine.xml ${srcdir}/app-masterworker-multicore_d.xml --cfg=cpu/model:Cas01 --cfg=cpu/optim:Full "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/optim' to 'Full'
> [ 0.000000] (1:master@Tremblay) Got 6 workers and 20 tasks to process
p Testing a simple master/worker example application
-$ $SG_TEST_EXENV ${bindir}/app-masterworker ${platfdir}/vivaldi.xml ${srcdir}/app-masterworker-vivaldi_d.xml --cfg=network/latency-factor:1.0 --cfg=network/bandwidth-factor:1.0 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir}/app-masterworker ${platfdir}/vivaldi.xml ${srcdir}/app-masterworker-vivaldi_d.xml --cfg=network/latency-factor:1.0 --cfg=network/bandwidth-factor:1.0 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/latency-factor' to '1.0'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/bandwidth-factor' to '1.0'
> [ 0.000000] (1:master@100030591) Got 15 workers and 10 tasks to process
p Testing a simple master/worker example application (mailbox version)
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/app-masterworker$EXEEXT ${platfdir}/small_platform_with_routers.xml ${srcdir}/app-masterworker_d.xml --cfg=network/crosstraffic:0 --trace "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-masterworker ${platfdir}/small_platform_with_routers.xml ${srcdir}/app-masterworker_d.xml --cfg=network/crosstraffic:0 --trace "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/crosstraffic' to '0'
> [ 0.000000] (1:master@Tremblay) Got 5 workers and 20 tasks to process
> [ 0.000000] (1:master@Tremblay) Sending "Task_0" (of 20) to mailbox "worker-0"
> [ 5.094868] (0:maestro@) Simulation time 5.09487
> [ 5.094868] (6:worker@Bourassa) I'm done. See you!
-$ $SG_TEST_EXENV ${bindir:=.}/app-masterworker$EXEEXT ${platfdir}/small_platform_with_routers.xml ${srcdir}/app-masterworker_d.xml --trace "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-masterworker ${platfdir}/small_platform_with_routers.xml ${srcdir}/app-masterworker_d.xml --trace "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master@Tremblay) Got 5 workers and 20 tasks to process
> [ 0.000000] (1:master@Tremblay) Sending "Task_0" (of 20) to mailbox "worker-0"
> [ 0.002265] (1:master@Tremblay) Sending "Task_1" (of 20) to mailbox "worker-1"
p Testing the Cloud API with a simple master/workers
-$ $SG_TEST_EXENV ${bindir:=.}/cloud-masterworker$EXEEXT --log=no_loc ${platfdir}/cluster_backbone.xml
+$ ${bindir:=.}/cloud-masterworker --log=no_loc ${platfdir}/cluster_backbone.xml
> [node-0.simgrid.org:master:(1) 0.000000] [msg_test/INFO] # Launch 2 VMs
> [node-0.simgrid.org:master:(1) 0.000000] [msg_test/INFO] create VM00 on PM(node-1.simgrid.org)
> [node-0.simgrid.org:master:(1) 0.000000] [msg_test/INFO] put a process (WRK00) on VM00
p Testing the Kademlia implementation with MSG
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/dht-kademlia ${platfdir}/cluster_backbone.xml ${srcdir}/dht-kademlia_d.xml "--log=root.fmt:[%10.6r]%e(%02i:%P@%h)%e%m%n"
+$ ${bindir:=.}/dht-kademlia ${platfdir}/cluster_backbone.xml ${srcdir}/dht-kademlia_d.xml "--log=root.fmt:[%10.6r]%e(%02i:%P@%h)%e%m%n"
> [ 0.000000] ( 1:node@node-0.simgrid.org) Hi, I'm going to create the network with id 0
> [ 0.000000] ( 2:node@node-1.simgrid.org) Hi, I'm going to join the network with id 1
> [ 0.000000] ( 3:node@node-2.simgrid.org) Hi, I'm going to join the network with id 3
p Testing the Pastry implementation with MSG
-$ $SG_TEST_EXENV ${bindir:=.}/dht-pastry$EXEEXT -nb_bits=6 ${platfdir}/cluster_backbone.xml ${srcdir}/dht-pastry_d.xml --log=msg_pastry.thres:verbose "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/dht-pastry -nb_bits=6 ${platfdir}/cluster_backbone.xml ${srcdir}/dht-pastry_d.xml --log=msg_pastry.thres:verbose "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 25.007806] (1:node@node-0.simgrid.org) Task update from 366680 !!!
> [ 25.007806] (1:node@node-0.simgrid.org) Node:
> [ 25.007806] (1:node@node-0.simgrid.org) Id: 42 '0000002a'
p Testing the mechanism for computing host energy consumption in case of VMs
-$ ${bindir:=.}/energy-vm$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/energy-vm ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs@MyHost1) Creating and starting two VMs
> [ 0.000000] (1:dvfs@MyHost1) Create two tasks on Host1: both inside a VM
> [ 0.000000] (1:dvfs@MyHost1) Create two tasks on Host2: one inside a VM, the other directly on the host
> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) Counter-example execution trace:
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(2)HostB (client)] iSend(src=(2)HostB (client), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(2)HostB (client)-> (1)HostA (server)])
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(2)HostB (client)] Wait(comm=(verbose only) [(2)HostB (client)-> (1)HostA (server)])
-> [ 0.000000] (0:maestro@) [(4)HostD (client)] iSend(src=(4)HostD (client), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(4)HostD (client)-> (1)HostA (server)])
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(3)HostC (client)] iSend(src=(3)HostC (client), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(2)HostB (client)] iSend(src=(2)HostB (client), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(2)HostB (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(2)HostB (client)] Wait(comm=(verbose only) [(2)HostB (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(4)HostD (client)] iSend(src=(4)HostD (client), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(4)HostD (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(3)HostC (client)] iSend(src=(3)HostC (client), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) Path = 1;2;1;1;2;4;1;1;3;1
> [ 0.000000] (0:maestro@) Expanded states = 22
> [ 0.000000] (0:maestro@) Visited states = 56
> [ 0.000000] (0:maestro@) Executed transitions = 52
> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) Counter-example execution trace:
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(3)HostC (client)] iSend(src=(3)HostC (client), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
-> [ 0.000000] (0:maestro@) [(3)HostC (client)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(3)HostC (client)] iSend(src=(3)HostC (client), buff=(verbose only), size=(verbose only))
-> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(3)HostC (client)] iSend(src=(3)HostC (client), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(3)HostC (client)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] iRecv(dst=(1)HostA (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(3)HostC (client)] iSend(src=(3)HostC (client), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)HostA (server)] Wait(comm=(verbose only) [(3)HostC (client)-> (1)HostA (server)])
+> [ 0.000000] (0:maestro@) Path = 1;3;1;3;1;3;1
> [ 0.000000] (0:maestro@) Expanded states = 1006
> [ 0.000000] (0:maestro@) Visited states = 5319
> [ 0.000000] (0:maestro@) Executed transitions = 4969
\ No newline at end of file
#define FINALIZE ((void*)221297) /* a magic number to tell people to stop working */
+static void task_cleanup_handler(void* task)
+{
+ if (task)
+ MSG_task_destroy(task);
+}
+
static int master(int argc, char *argv[])
{
long number_of_tasks = xbt_str_parse_int(argv[1], "Invalid amount of tasks: %s");
break;
}
XBT_INFO("Start execution...");
+ MSG_process_set_data(MSG_process_self(), task);
retcode = MSG_task_execute(task);
+ MSG_process_set_data(MSG_process_self(), NULL);
if (retcode == MSG_OK) {
XBT_INFO("Execution complete.");
MSG_task_destroy(task);
MSG_function_register("master", master);
MSG_function_register("worker", worker);
+ MSG_process_set_data_cleanup(task_cleanup_handler);
MSG_launch_application(argv[2]);
msg_error_t res = MSG_main();
p Testing a simple master/worker example application handling failures TCP crosstraffic DISABLED
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/platform-failures$EXEEXT --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml --cfg=path:${srcdir} --cfg=network/crosstraffic:0 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
+$ ${bindir:=.}/platform-failures --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml --cfg=path:${srcdir} --cfg=network/crosstraffic:0 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
> [ 0.000000] (0:maestro@) Cannot launch actor 'worker' on failed host 'Fafard'
> [ 0.000000] (0:maestro@) Deployment includes some initially turned off Hosts ... nevermind.
> [ 0.000000] (1:master@Tremblay) Got 5 workers and 20 tasks to process
p Testing a simple master/worker example application handling failures. TCP crosstraffic ENABLED
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/platform-failures$EXEEXT --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
+$ ${bindir:=.}/platform-failures --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
> [ 0.000000] (0:maestro@) Cannot launch actor 'worker' on failed host 'Fafard'
> [ 0.000000] (0:maestro@) Deployment includes some initially turned off Hosts ... nevermind.
> [ 0.000000] (1:master@Tremblay) Got 5 workers and 20 tasks to process
p unit tests for the surf solver, and such issues will be addressable again.
p For the time being, I just give up, sorry.
-p $ $SG_TEST_EXENV ${bindir:=.}/platform-failures$EXEEXT --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml --cfg=path:${srcdir} --cfg=cpu/optim:TI "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
+p $ ${bindir:=.}/platform-failures --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml --cfg=path:${srcdir} --cfg=cpu/optim:TI "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
#!/usr/bin/env tesh
p Start remote processes
-$ $SG_TEST_EXENV ${bindir:=.}/process-create$EXEEXT ${platfdir}/small_platform.xml
+$ ${bindir:=.}/process-create ${platfdir}/small_platform.xml
p Testing synchronization with semaphores
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/synchro-semaphore ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/synchro-semaphore ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:Alice@Fafard) Trying to acquire 1
> [ 0.000000] (1:Alice@Fafard) Acquired 1
> [ 0.900000] (2:Bob@Fafard) Trying to acquire 1
p Tracing multiple categories master/worker application
-$ $SG_TEST_EXENV ${bindir:=.}/trace-categories$EXEEXT --cfg=tracing:yes --cfg=tracing/filename:categories.trace --cfg=tracing/categorized:yes --cfg=tracing/uncategorized:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-categories --cfg=tracing:yes --cfg=tracing/filename:categories.trace --cfg=tracing/categorized:yes --cfg=tracing/uncategorized:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/filename' to 'categories.trace'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/categorized' to 'yes'
#!/usr/bin/env tesh
p Tracing user variables for hosts
-$ $SG_TEST_EXENV ${bindir:=.}/trace-host-user-variables$EXEEXT --cfg=tracing:yes --cfg=tracing/platform:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-host-user-variables --cfg=tracing:yes --cfg=tracing/platform:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/platform' to 'yes'
> [0.004078] [msg_test/INFO] Declared host variables:
$ rm -f simgrid.trace
p Not tracing user variables
-$ $SG_TEST_EXENV ${bindir:=.}/trace-host-user-variables$EXEEXT ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-host-user-variables ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
#!/usr/bin/env tesh
p Trace user variables associated to links of the platform file
-$ $SG_TEST_EXENV ${bindir:=.}/trace-link-user-variables$EXEEXT --cfg=tracing:yes --cfg=tracing/platform:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-link-user-variables --cfg=tracing:yes --cfg=tracing/platform:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/platform' to 'yes'
#!/usr/bin/env tesh
p Tracing master/worker application
-$ $SG_TEST_EXENV ${bindir:=.}/trace-masterworker$EXEEXT --cfg=tracing:yes --cfg=tracing/filename:trace-masterworker.trace --cfg=tracing/categorized:yes --cfg=tracing/uncategorized:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-masterworker --cfg=tracing:yes --cfg=tracing/filename:trace-masterworker.trace --cfg=tracing/categorized:yes --cfg=tracing/uncategorized:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/filename' to 'trace-masterworker.trace'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/categorized' to 'yes'
> [4.214821] [msg_trace_masterworker/INFO] msmark
p Tracing master/worker application with xml config
-$ $SG_TEST_EXENV ${bindir:=.}/trace-masterworker$EXEEXT ${platfdir}/config_tracing.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-masterworker ${platfdir}/config_tracing.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/categorized' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/filename' to 'trace-masterworker.trace'
> [4.214821] [msg_trace_masterworker/INFO] msmark
p Not tracing master/worker application
-$ $SG_TEST_EXENV ${bindir:=.}/trace-masterworker$EXEEXT ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-masterworker ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
p Testing tracing by process
-$ $SG_TEST_EXENV ${bindir:=.}/trace-masterworker$EXEEXT --cfg=tracing:yes --cfg=tracing/msg/process:yes --cfg=tracing/filename:trace-masterworker.trace --cfg=tracing/categorized:yes --cfg=tracing/uncategorized:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-masterworker --cfg=tracing:yes --cfg=tracing/msg/process:yes --cfg=tracing/filename:trace-masterworker.trace --cfg=tracing/categorized:yes --cfg=tracing/uncategorized:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/msg/process' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/filename' to 'trace-masterworker.trace'
#!/usr/bin/env tesh
p Tracing processes
-$ $SG_TEST_EXENV ${bindir:=.}/trace-process-migration$EXEEXT --cfg=tracing:yes --cfg=tracing/filename:procmig.trace --cfg=tracing/msg/process:yes ${platfdir}/small_platform.xml
+$ ${bindir:=.}/trace-process-migration --cfg=tracing:yes --cfg=tracing/filename:procmig.trace --cfg=tracing/msg/process:yes ${platfdir}/small_platform.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/filename' to 'procmig.trace'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/msg/process' to 'yes'
#!/usr/bin/env tesh
p Trace user variables associated to links of the platform file
-$ $SG_TEST_EXENV ${bindir:=.}/trace-route-user-variables$EXEEXT --cfg=tracing:yes --cfg=tracing/platform:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
+$ ${bindir:=.}/trace-route-user-variables --cfg=tracing:yes --cfg=tracing/platform:yes ${platfdir}/small_platform.xml ${srcdir}/../app-masterworker/app-masterworker_d.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/platform' to 'yes'
# The order differ when executed with gcc's thread sanitizer
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/dag-dotload/sd_dag-dotload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/dag-dotload/dag.dot
+$ ${bindir:=.}/dag-dotload/sd_dag-dotload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/dag-dotload/dag.dot
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [test/INFO] ------------------- Display all tasks of the loaded DAG ---------------------------
> [0.000000] [sd_task/INFO] Displaying task root
$ rm -f ${srcdir:=.}/dag-dotload/dag.trace ${srcdir:=.}/dot.dot
! expect return 2
-$ $SG_TEST_EXENV ${bindir:=.}/dag-dotload/sd_dag-dotload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/dag-dotload/dag_with_cycle.dot
+$ ${bindir:=.}/dag-dotload/sd_dag-dotload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/dag-dotload/dag_with_cycle.dot
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [sd_daxparse/WARNING] the task root is not marked
> [0.000000] [sd_daxparse/WARNING] the task 1 is in a cycle
p Test the DAX loader on a small DAX instance
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/daxload/sd_daxload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/daxload/smalldax.xml
+$ ${bindir:=.}/daxload/sd_daxload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/daxload/smalldax.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [sd_daxparse/WARNING] Ignore file o1 size redefinition from 1000000 to 304
> [0.000000] [sd_daxparse/WARNING] Ignore file o2 size redefinition from 1000000 to 304
p Test the DAX loader with a DAX comprising a cycle.
! expect return 255
-$ $SG_TEST_EXENV ${bindir:=.}/daxload/sd_daxload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/daxload/simple_dax_with_cycle.xml
+$ ${bindir:=.}/daxload/sd_daxload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/daxload/simple_dax_with_cycle.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [sd_daxparse/WARNING] the task root is not marked
> [0.000000] [sd_daxparse/WARNING] the task 1@task1 is in a cycle
p Test of the management of failed tasks simdag
-$ $SG_TEST_EXENV ${bindir:=.}/fail/sd_fail ${srcdir:=.}/../../platforms/faulty_host.xml
+$ ${bindir:=.}/fail/sd_fail ${srcdir:=.}/../../platforms/faulty_host.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [sd_fail/INFO] First test: COMP_SEQ task
> [0.000000] [sd_fail/INFO] Schedule task 'Poor task' on 'Faulty Host'
p Simple test of simdag with properties
-$ $SG_TEST_EXENV properties/sd_properties ${srcdir:=.}/../../platforms/prop.xml
+$ properties/sd_properties ${srcdir:=.}/../../platforms/prop.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [test/INFO] Property list for host host1
> [0.000000] [test/INFO] Property: mem has value: 4
# The order differs when executed with gcc's thread sanitizer
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/ptg-dotload/sd_ptg-dotload ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/ptg-dotload/ptg.dot
+$ ${bindir:=.}/ptg-dotload/sd_ptg-dotload ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/ptg-dotload/ptg.dot
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [test/INFO] ------------------- Display all tasks of the loaded DAG ---------------------------
> [0.000000] [sd_task/INFO] Displaying task root
p Test the loader of DAGs written in the DOT format
! expect return 2
-$ $SG_TEST_EXENV ${bindir:=.}/schedule-dotload/sd_schedule-dotload --log=no_loc "--log=sd_dotparse.thres:verbose" ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/schedule-dotload/dag_with_bad_schedule.dot
+$ ${bindir:=.}/schedule-dotload/sd_schedule-dotload --log=no_loc "--log=sd_dotparse.thres:verbose" ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/schedule-dotload/dag_with_bad_schedule.dot
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, task 'end' can not be scheduled on -1 hosts
> [0.000000] [sd_dotparse/VERBOSE] The schedule is ignored, task '1' can not be scheduled on 0 hosts
# The order differs when executed with gcc's thread sanitizer
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/schedule-dotload/sd_schedule-dotload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/schedule-dotload/dag_with_good_schedule.dot
+$ ${bindir:=.}/schedule-dotload/sd_schedule-dotload --log=no_loc ${srcdir:=.}/../../platforms/cluster_backbone.xml ${srcdir:=.}/schedule-dotload/dag_with_good_schedule.dot
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [test/INFO] ------------------- Display all tasks of the loaded DAG ---------------------------
> [0.000000] [sd_task/INFO] Displaying task root
p Simple test of simdag
-$ $SG_TEST_EXENV ${bindir:=.}/scheduling/sd_scheduling --log=sd_daxparse.thresh:critical ${srcdir:=.}/../../platforms/simulacrum_7_hosts.xml ${srcdir:=.}/scheduling/Montage_25.xml
+$ ${bindir:=.}/scheduling/sd_scheduling --log=sd_daxparse.thresh:critical ${srcdir:=.}/../../platforms/simulacrum_7_hosts.xml ${srcdir:=.}/scheduling/Montage_25.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [test/INFO] Schedule ID00002@mProjectPP on Host 27
> [0.000105] [test/INFO] Schedule ID00000@mProjectPP on Host 26
p Simple test of simdag
! output sort
-$ $SG_TEST_EXENV ./test/sd_test ${srcdir:=.}/../../platforms/small_platform.xml
+$ ./test/sd_test ${srcdir:=.}/../../platforms/small_platform.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [0.000000] [sd_test/INFO] Computation time for 2000000.000000 flops on Jacquelin: 0.014563
> [0.000000] [sd_test/INFO] Computation time for 1000000.000000 flops on Fafard: 0.013107
# We need to sort this out because the order changes with the sanitizers (at least)
! output sort
-$ $SG_TEST_EXENV ./throttling/sd_throttling ${srcdir:=.}/../../platforms/cluster_backbone.xml
+$ ./throttling/sd_throttling ${srcdir:=.}/../../platforms/cluster_backbone.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [5.000000] [sd_comm_throttling/INFO] Simulation stopped after 5.0000 seconds
> [5.000000] [sd_comm_throttling/INFO] Task 'Task A' start time: 0.000000, finish time: 5.000000
p Usage test of simdag's typed tasks
! output sort
-$ $SG_TEST_EXENV ./typed_tasks/sd_typed_tasks ${srcdir:=.}/../../platforms/cluster_backbone.xml
+$ ./typed_tasks/sd_typed_tasks ${srcdir:=.}/../../platforms/cluster_backbone.xml
> [0.000000] [xbt_cfg/INFO] Switching to the L07 model to handle parallel tasks.
> [2.080600] [sd_typed_tasks_test/INFO] Task 'Par. Comp. 3' start time: 0.000000, finish time: 0.400000
> [2.080600] [sd_typed_tasks_test/INFO] Task 'Par. Comp. 1' start time: 0.000000, finish time: 0.400000
${CMAKE_HOME_DIRECTORY}/examples/s4u/${example}/s4u-${example}.tesh)
endforeach()
+
+# Model-checking examples: with only one source and tested with all factories but thread
+######################################################################
+
+foreach (example mc-failing-assert)
+ if(SIMGRID_HAVE_MC)
+ add_executable (s4u-${example} EXCLUDE_FROM_ALL ${example}/s4u-${example}.cpp)
+ add_dependencies (tests s4u-${example})
+ target_link_libraries(s4u-${example} simgrid)
+ set_target_properties(s4u-${example} PROPERTIES RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/${example})
+
+ ADD_TESH_FACTORIES(s4u-${example} "ucontext;raw;boost"
+ --setenv bindir=${CMAKE_CURRENT_BINARY_DIR}/${example}
+ --setenv platfdir=${CMAKE_HOME_DIRECTORY}/examples/platforms
+ --cd ${CMAKE_CURRENT_SOURCE_DIR}/${example}
+ ${CMAKE_HOME_DIRECTORY}/examples/s4u/${example}/s4u-${example}.tesh)
+ endif()
+
+ set(tesh_files ${tesh_files} ${CMAKE_CURRENT_SOURCE_DIR}/${example}/s4u-${example}.tesh)
+ set(examples_src ${examples_src} ${CMAKE_CURRENT_SOURCE_DIR}/${example}/s4u-${example}.cpp)
+endforeach()
+
+
# Multi-files examples
######################
.. TODO:: document here the examples about plugins
+=======================
+Model-Checking Examples
+=======================
+
+The model-checker can be used to exhaustively search for issues in the
+tested application. It must be activated at compile time, but this
+mode is rather experimental in SimGrid (as of v3.22). You should not
+enable it unless you really want to formally verify your applications:
+SimGrid is slower and may be less robust when MC is enabled.
+
+ - **Failing assert**
+ In this example, two actors send some data to a central server,
+ which asserts that the messages are always received in the same order.
+ This is obviously wrong, and the model-checker correctly finds a
+ counter-example to that assertion.
+ |br| `examples/s4u/mc-failing-assert/s4u-mc-failing-assert.cpp <https://framagit.org/simgrid/simgrid/tree/master/examples/s4u/mc-failing-assert/s4u-mc-failing-assert.cpp>`_
+
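+As a minimal sketch of the usual workflow (assuming a SimGrid build
+tree; the ``enable_model-checking`` cmake flag and the ``simgrid-mc``
+launcher are the customary names, but check them against your SimGrid
+version), such an example is run under the model-checker as follows:
+
+.. code-block:: shell
+
+   # Rebuild SimGrid with the model-checker compiled in
+   cmake -Denable_model-checking=ON . && make
+
+   # Launch the example through the model-checker instead of running it directly
+   simgrid-mc ./s4u-mc-failing-assert ../platforms/small_platform.xml
+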
.. |br| raw:: html
<br />
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-create
+$ ${bindir:=.}/s4u-actor-create
> [Tremblay:sender1:(2) 0.000000] [s4u_actor_create/INFO] Hello s4u, I have something to send
> [Jupiter:sender2:(3) 0.000000] [s4u_actor_create/INFO] Hello s4u, I have something to send
> [Fafard:sender:(4) 0.000000] [s4u_actor_create/INFO] Hello s4u, I have something to send
p Testing the process daemonization feature
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-daemon ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-daemon ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (worker@Boivin) Let's do some work (for 10 sec on Boivin).
> [ 0.000000] (daemon@Tremblay) Hello from the infinite loop
> [ 3.000000] (daemon@Tremblay) Hello from the infinite loop
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-exiting ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-exiting ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 10.194200] (A@Tremblay) I stop now
> [ 10.194200] (maestro@) Actor A stops now
> [ 26.213694] (maestro@) Actor B stops now
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-join$EXEEXT ${platfdir}/small_platform.xml
+$ ${bindir:=.}/s4u-actor-join ${platfdir}/small_platform.xml
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Start sleeper
> [Tremblay:sleeper from master:(2) 0.000000] [s4u_test/INFO] Sleeper started
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Join the sleeper (timeout 2)
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-kill ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-kill ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (killer@Tremblay) Hello!
> [ 0.000000] (victim A@Fafard) Hello!
> [ 0.000000] (victim A@Fafard) Suspending myself
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-lifetime ${platfdir}/cluster_backbone.xml s4u-actor-lifetime_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-lifetime ${platfdir}/cluster_backbone.xml s4u-actor-lifetime_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sleeper@node-0.simgrid.org) Hello! I go to sleep.
> [ 0.000000] (2:sleeper@node-1.simgrid.org) Hello! I go to sleep.
> [ 2.000000] (3:sleeper@node-0.simgrid.org) Hello! I go to sleep.
p Testing the actor migration feature
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-migrate ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-migrate ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (worker@Fafard) Let's move to Boivin to execute 1177.14 Mflops (5sec on Boivin and 5sec on Jacquelin)
> [ 5.000000] (monitor@Boivin) After 5 seconds, move the process to Jacquelin
> [ 10.000000] (worker@Jacquelin) I wake up on Jacquelin. Let's suspend a bit
p Testing the suspend/resume feature of S4U
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-suspend ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-suspend ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (dream_master@Boivin) Let's create a lazy guy.
> [ 0.000000] (Lazy@Boivin) Nobody's watching me ? Let's go to sleep.
> [ 0.000000] (dream_master@Boivin) Let's wait a little bit...
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-actor-yield ${platfdir}/small_platform_fatpipe.xml s4u-actor-yield_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-actor-yield ${platfdir}/small_platform_fatpipe.xml s4u-actor-yield_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:yielder@Tremblay) I yielded 10 times. Goodbye now!
> [ 0.000000] (2:yielder@Ruby) I yielded 15 times. Goodbye now!
! timeout 10
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-bittorrent ${platfdir}/cluster_backbone.xml s4u-app-bittorrent_d.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-bittorrent ${platfdir}/cluster_backbone.xml s4u-app-bittorrent_d.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:tracker@node-0.simgrid.org) Tracker launched.
> [ 0.000000] (2:peer@node-1.simgrid.org) Hi, I'm joining the network with id 2
> [ 0.000000] (3:peer@node-2.simgrid.org) Hi, I'm joining the network with id 3
! timeout 60
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-chainsend ${platfdir}/cluster_backbone.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-chainsend ${platfdir}/cluster_backbone.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
> [ 2.214423] (2:peer@node-1.simgrid.org) ### 2.214423 16777216 bytes (Avg 7.225360 MB/s); copy finished (simulated).
> [ 2.222796] (3:peer@node-2.simgrid.org) ### 2.222796 16777216 bytes (Avg 7.198141 MB/s); copy finished (simulated).
> [ 2.231170] (4:peer@node-3.simgrid.org) ### 2.231170 16777216 bytes (Avg 7.171127 MB/s); copy finished (simulated).
p Testing a simple master/workers example application
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-masterworkers-class$EXEEXT ${platfdir}/small_platform.xml s4u-app-masterworkers_d.xml --trace "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-masterworkers-class ${platfdir}/small_platform.xml s4u-app-masterworkers_d.xml --trace "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (master@Tremblay) Got 5 workers and 20 tasks to process
> [ 0.000000] (master@Tremblay) Sending task 0 of 20 to mailbox 'Tremblay'
> [ 0.002265] (master@Tremblay) Sending task 1 of 20 to mailbox 'Jupiter'
> [ 5.133855] (worker@Bourassa) Exiting now.
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-masterworkers-fun$EXEEXT ${platfdir}/small_platform.xml s4u-app-masterworkers_d.xml --trace "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-masterworkers-fun ${platfdir}/small_platform.xml s4u-app-masterworkers_d.xml --trace "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (master@Tremblay) Got 5 workers and 20 tasks to process
> [ 0.000000] (master@Tremblay) Sending task 0 of 20 to mailbox 'Tremblay'
> [ 0.002265] (master@Tremblay) Sending task 1 of 20 to mailbox 'Jupiter'
p Testing with default compound
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:pinger@Tremblay) Ping from mailbox Mailbox 1 to mailbox Mailbox 2
> [ 0.000000] (2:ponger@Jupiter) Pong from mailbox Mailbox 2 to mailbox Mailbox 1
> [ 0.019014] (2:ponger@Jupiter) Task received : small communication (latency bound)
p Testing with default compound Full network optimization
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml "--cfg=network/optim:Full" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml "--cfg=network/optim:Full" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/optim' to 'Full'
> [ 0.000000] (1:pinger@Tremblay) Ping from mailbox Mailbox 1 to mailbox Mailbox 2
> [ 0.000000] (2:ponger@Jupiter) Pong from mailbox Mailbox 2 to mailbox Mailbox 1
p Testing the deprecated CM02 network model
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml --cfg=cpu/model:Cas01 --cfg=network/model:CM02 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml --cfg=cpu/model:Cas01 --cfg=network/model:CM02 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'CM02'
> [ 0.000000] (1:pinger@Tremblay) Ping from mailbox Mailbox 1 to mailbox Mailbox 2
p Testing the surf network Reno fairness model using lagrangian approach
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Reno'
p Testing the surf network Reno2 fairness model using lagrangian approach
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno2" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno2" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Reno2'
p Testing the surf network Vegas fairness model using lagrangian approach
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Vegas" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Vegas" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Vegas'
p Testing the surf network constant model
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform_constant.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Constant" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform_constant.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Constant" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Constant'
p Testing option --cfg=simix/breakpoint
! expect signal SIGTRAP
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-pingpong$EXEEXT ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=simix/breakpoint:3.1416
+$ ${bindir:=.}/s4u-app-pingpong ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=simix/breakpoint:3.1416
> [ 0.000000] (0:maestro@) Configuration change: Set 'simix/breakpoint' to '3.1416'
> [ 0.000000] (1:pinger@Tremblay) Ping from mailbox Mailbox 1 to mailbox Mailbox 2
> [ 0.000000] (2:ponger@Jupiter) Pong from mailbox Mailbox 2 to mailbox Mailbox 1
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-token-ring ${platfdir}/routing_cluster.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-token-ring ${platfdir}/routing_cluster.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (maestro@) Number of hosts '6'
> [ 0.000000] (0@host1) Host "0" send 'Token' to Host "1"
> [ 0.017354] (1@host2) Host "1" received "Token"
> [ 0.131796] (0@host1) Host "0" received "Token"
> [ 0.131796] (maestro@) Simulation time 0.131796
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-token-ring ${platfdir}/two_peers.xml "--log=root.fmt:[%12.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-token-ring ${platfdir}/two_peers.xml "--log=root.fmt:[%12.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (maestro@) Number of hosts '2'
> [ 0.000000] (0@100030591) Host "0" send 'Token' to Host "1"
> [ 0.624423] (1@100036570) Host "1" received "Token"
> [ 1.248846] (0@100030591) Host "0" received "Token"
> [ 1.248846] (maestro@) Simulation time 1.24885
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-app-token-ring ${platfdir}/meta_cluster.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-app-token-ring ${platfdir}/meta_cluster.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (maestro@) Number of hosts '60'
> [ 0.000000] (0@host-1.cluster1) Host "0" send 'Token' to Host "1"
> [ 0.030364] (1@host-1.cluster2) Host "1" received "Token"
p Test1 Peer sending and receiving
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-async-ready ${platfdir}/small_platform_fatpipe.xml s4u-async-ready_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-async-ready ${platfdir}/small_platform_fatpipe.xml s4u-async-ready_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:peer@Tremblay) Send 'Message 0 from peer 0' to 'peer-1'
> [ 0.000000] (2:peer@Ruby) Send 'Message 0 from peer 1' to 'peer-0'
> [ 0.000000] (1:peer@Tremblay) Send 'Message 0 from peer 0' to 'peer-2'
p Test1 Sleep_sender > Sleep_receiver
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-async-wait ${platfdir}/small_platform_fatpipe.xml s4u-async-wait_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-async-wait ${platfdir}/small_platform_fatpipe.xml s4u-async-wait_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) Send 'Message 0' to 'receiver-0'
> [ 0.000000] (2:receiver@Ruby) Wait for my first message
> [ 0.000000] (1:sender@Tremblay) Send 'Message 1' to 'receiver-0'
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-async-waitall ${platfdir}/small_platform_fatpipe.xml s4u-async-waitall_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-async-waitall ${platfdir}/small_platform_fatpipe.xml s4u-async-waitall_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) Send 'Message 0' to 'receiver-0'
> [ 0.000000] (2:receiver@Ruby) Wait for my first message
> [ 0.000000] (3:receiver@Perl) Wait for my first message
p Testing this_actor->wait_any()
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-async-waitany ${platfdir}/small_platform.xml s4u-async-waitany_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-async-waitany ${platfdir}/small_platform.xml s4u-async-waitany_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) Send 'Message 0' to 'receiver-0'
> [ 0.000000] (2:receiver@Fafard) Wait for my first message
> [ 0.000000] (3:receiver@Jupiter) Wait for my first message
p Test1 Sleep_sender > Sleep_receiver
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-async-waituntil ${platfdir}/small_platform_fatpipe.xml s4u-async-waituntil_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-async-waituntil ${platfdir}/small_platform_fatpipe.xml s4u-async-waituntil_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) Send 'Message 0' to 'receiver-0'
> [ 0.000000] (2:receiver@Ruby) Wait for my first message
> [ 0.000000] (1:sender@Tremblay) Send 'Message 1' to 'receiver-0'
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-cloud-capping ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-cloud-capping ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master_@Fafard) # 1. Put a single task on a PM.
> [ 0.000000] (1:master_@Fafard) ### Test: with/without task set_bound
> [ 0.000000] (1:master_@Fafard) ### Test: no bound for Task1@Fafard
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-cloud-migration ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-cloud-migration ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master_@Fafard) Test: Migrate a VM with 1000 Mbytes RAM
> [132.765801] (1:master_@Fafard) VM0 migrated: Fafard->Tremblay in 132.766 s
> [132.765801] (1:master_@Fafard) Test: Migrate a VM with 100 Mbytes RAM
p Testing a vm with two successive tasks
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-cloud-simple$EXEEXT --log=no_loc ${platfdir:=.}/small_platform.xml
+$ ${bindir:=.}/s4u-cloud-simple --log=no_loc ${platfdir:=.}/small_platform.xml
> [Fafard:master_:(1) 0.000000] [s4u_test/INFO] ## Test 1 (started): check computation on normal PMs
> [Fafard:master_:(1) 0.000000] [s4u_test/INFO] ### Put a task on a PM
> [Fafard:compute:(2) 0.013107] [s4u_test/INFO] Fafard:compute task executed 0.0131068
p Testing the Chord implementation with S4U
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-dht-chord$EXEEXT -nb_bits=3 ${platfdir}/cluster_backbone.xml s4u-dht-chord_d.xml --log=s4u_chord.thres:verbose "--log=root.fmt:[%10.5r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-dht-chord -nb_bits=3 ${platfdir}/cluster_backbone.xml s4u-dht-chord_d.xml --log=s4u_chord.thres:verbose "--log=root.fmt:[%10.5r]%e(%P@%h)%e%m%n"
> [ 0.00000] (node@node-0.simgrid.org) My finger table:
> [ 0.00000] (node@node-0.simgrid.org) Start | Succ
> [ 0.00000] (node@node-0.simgrid.org) 3 | 42
p Testing the Kademlia implementation with S4U
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-dht-kademlia ${platfdir}/cluster_backbone.xml ${srcdir:=.}/s4u-dht-kademlia_d.xml "--log=root.fmt:[%10.6r]%e(%02i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-dht-kademlia ${platfdir}/cluster_backbone.xml ${srcdir:=.}/s4u-dht-kademlia_d.xml "--log=root.fmt:[%10.6r]%e(%02i:%P@%h)%e%m%n"
> [ 0.000000] ( 1:node@node-0.simgrid.org) Hi, I'm going to create the network with id 0
> [ 0.000000] ( 2:node@node-1.simgrid.org) Hi, I'm going to join the network with id 1
> [ 0.000000] ( 3:node@node-2.simgrid.org) Hi, I'm going to join the network with id 3
p Modeling the host energy consumption during boot and shutdown
-$ ${bindir:=.}/s4u-energy-boot$EXEEXT ${srcdir:=.}/platform_boot.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-energy-boot ${srcdir:=.}/platform_boot.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:Boot Monitor@MyHost2) Initial pstate: 0; Energy dissipated so far:0E+00 J
> [ 0.000000] (1:Boot Monitor@MyHost2) Sleep for 10 seconds
> [ 10.000000] (1:Boot Monitor@MyHost2) Done sleeping. Current pstate: 0; Energy dissipated so far: 950.00 J
> [177.000000] (0:maestro@) Energy consumption of host MyHost1: 19820.000000 Joules
> [177.000000] (0:maestro@) Energy consumption of host MyHost2: 17700.000000 Joules
-$ ${bindir:=.}/s4u-energy-boot$EXEEXT ${srcdir:=.}/platform_boot.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
+$ ${bindir:=.}/s4u-energy-boot ${srcdir:=.}/platform_boot.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'ptask_L07'
> [ 0.000000] (0:maestro@) Switching to the L07 model to handle parallel tasks.
> [ 0.000000] (1:Boot Monitor@MyHost2) Initial pstate: 0; Energy dissipated so far:0E+00 J
p Testing the mechanism for computing host energy consumption
-$ ${bindir:=.}/s4u-energy-exec$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-energy-exec ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs_test@MyHost1) Energetic profile: 100.0:120.0:200.0, 93.0:110.0:170.0, 90.0:105.0:150.0
> [ 0.000000] (1:dvfs_test@MyHost1) Initial peak speed=1E+08 flop/s; Energy dissipated =0E+00 J
> [ 0.000000] (1:dvfs_test@MyHost1) Sleep for 10 seconds
> [ 30.000000] (0:maestro@) Energy consumption of host MyHost2: 2100.000000 Joules
> [ 30.000000] (0:maestro@) Energy consumption of host MyHost3: 3000.000000 Joules
-$ ${bindir:=.}/s4u-energy-exec$EXEEXT ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
+$ ${bindir:=.}/s4u-energy-exec ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'ptask_L07'
> [ 0.000000] (0:maestro@) Switching to the L07 model to handle parallel tasks.
> [ 0.000000] (1:dvfs_test@MyHost1) Energetic profile: 100.0:120.0:200.0, 93.0:110.0:170.0, 90.0:105.0:150.0
p Testing the mechanism for computing link energy consumption (using CM02 as a network model)
-$ ${bindir:=.}/s4u-energy-link$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=network/model:CM02 --cfg=network/crosstraffic:no
+$ ${bindir:=.}/s4u-energy-link ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=network/model:CM02 --cfg=network/crosstraffic:no
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'CM02'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/crosstraffic' to 'no'
> [ 0.000000] (0:maestro@) Activating the SimGrid link energy plugin
p And now test with 500000 bytes
-$ ${bindir:=.}/s4u-energy-link$EXEEXT ${platfdir}/energy_platform.xml 1 50000000 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=network/model:CM02 --cfg=network/crosstraffic:no
+$ ${bindir:=.}/s4u-energy-link ${platfdir}/energy_platform.xml 1 50000000 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=network/model:CM02 --cfg=network/crosstraffic:no
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'CM02'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/crosstraffic' to 'no'
> [ 0.000000] (0:maestro@) Activating the SimGrid link energy plugin
p Testing the mechanism for computing host energy consumption in case of VMs
-$ ${bindir:=.}/s4u-energy-vm$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-energy-vm ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs@MyHost1) Creating and starting two VMs
> [ 0.000000] (1:dvfs@MyHost1) Create two tasks on Host1: both inside a VM
> [ 0.000000] (1:dvfs@MyHost1) Create two tasks on Host2: one inside a VM, the other directly on the host
#!/usr/bin/env tesh
p Let's filter some hosts...
-$ ${bindir:=.}/s4u-engine-filtering$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=network/model:CM02 --cfg=network/crosstraffic:no
+$ ${bindir:=.}/s4u-engine-filtering ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=network/model:CM02 --cfg=network/crosstraffic:no
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'CM02'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/crosstraffic' to 'no'
> [ 0.000000] (0:maestro@) Hosts currently registered with this engine: 3
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-exec-async$EXEEXT ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-exec-async ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:wait@Fafard) Execute 7.6296e+07 flops, should take 1 second.
> [ 0.000000] (2:monitor@Ginette) Execute 4.8492e+07 flops, should take 1 second.
> [ 0.000000] (3:cancel@Boivin) Execute 9.8095e+07 flops, should take 1 second.
#!/usr/bin/env tesh
p Start remote processes
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-exec-basic$EXEEXT ${platfdir}/small_platform.xml
+$ ${bindir:=.}/s4u-exec-basic ${platfdir}/small_platform.xml
> [Tremblay:privileged:(2) 0.001500] [s4u_test/INFO] Done.
> [Tremblay:executor:(1) 0.002000] [s4u_test/INFO] Done.
p Testing the DVFS-related functions
-$ ${bindir:=.}/s4u-exec-dvfs$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-exec-dvfs ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs_test@MyHost1) Count of Processor states=3
> [ 0.000000] (1:dvfs_test@MyHost1) Current power peak=100000000.000000
> [ 0.000000] (2:dvfs_test@MyHost2) Count of Processor states=3
> [ 6.000000] (2:dvfs_test@MyHost2) Current power peak=20000000.000000
> [ 6.000000] (0:maestro@) Total simulation time: 6.000000e+00
-$ ${bindir:=.}/s4u-exec-dvfs$EXEEXT ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-exec-dvfs ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs_test@MyHost1) Count of Processor states=3
> [ 0.000000] (1:dvfs_test@MyHost1) Current power peak=100000000.000000
> [ 0.000000] (2:dvfs_test@MyHost2) Count of Processor states=3
#!/usr/bin/env tesh
-$ ${bindir:=.}/s4u-exec-ptask$EXEEXT ${platfdir}/energy_platform.xml --cfg=host/model:ptask_L07 --cfg=tracing:yes --cfg=tracing/uncategorized:yes --log=instr_resource.t:debug --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-exec-ptask ${platfdir}/energy_platform.xml --cfg=host/model:ptask_L07 --cfg=tracing:yes --cfg=tracing/uncategorized:yes --log=instr_resource.t:debug --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'ptask_L07'
> [ 0.000000] (0:maestro@) Configuration change: Set 'tracing' to 'yes'
> [ 0.000000] (0:maestro@) Configuration change: Set 'tracing/uncategorized' to 'yes'
#!/usr/bin/env tesh
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-exec-remote$EXEEXT ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-exec-remote ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:test@Fafard) I'm a wizard! I can run a task on the Ginette host from the Fafard one! Look!
> [ 0.000000] (1:test@Fafard) It started. Running 48.492Mf takes exactly one second on Ginette (but not on Fafard).
> [ 0.100000] (1:test@Fafard) Loads in flops/s: Boivin=0; Fafard=0; Ginette=48492000
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-io-async$EXEEXT ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-io-async ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:test@bob) Hello! read 20000000 bytes from Storage Disk1
> [ 0.000000] (2:test_cancel@alice) Hello! write 50000000 bytes from Storage Disk2
> [ 0.200000] (1:test@bob) Goodbye now!
#!/usr/bin/env tesh
-$ ${bindir:=.}/s4u-io-file-remote$EXEEXT ${platfdir}/storage/remote_io.xml s4u-io-file-remote_d.xml "--log=root.fmt:[%10.6r]%e(%i@%5h)%e%m%n"
+$ ${bindir:=.}/s4u-io-file-remote ${platfdir}/storage/remote_io.xml s4u-io-file-remote_d.xml "--log=root.fmt:[%10.6r]%e(%i@%5h)%e%m%n"
> [ 0.000000] (0@ ) Init: 12/476824 MiB used/free on 'Disk1'
> [ 0.000000] (0@ ) Init: 2280/474556 MiB used/free on 'Disk2'
> [ 0.000000] (1@alice) Opened file 'c:\Windows\setupact.log'
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-io-file-system ${platfdir}/storage/storage.xml
+$ ${bindir:=.}/s4u-io-file-system ${platfdir}/storage/storage.xml
> [denise:host:(1) 0.000000] [s4u_test/INFO] Storage info on denise:
> [denise:host:(1) 0.000000] [s4u_test/INFO] Disk2 (c:) Used: 2391537133; Free: 534479374867; Total: 536870912000.
> [denise:host:(1) 0.000000] [s4u_test/INFO] Disk4 (/home) Used: 13221994; Free: 536857690006; Total: 536870912000.
#!/usr/bin/env tesh
-$ ${bindir}/s4u-io-storage-raw$EXEEXT ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir}/s4u-io-storage-raw ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:@denise) *** Storage info on denise ***
> [ 0.000000] (1:@denise) Storage name: Disk2, mount name: c:
> [ 0.000000] (1:@denise) Storage name: Disk4, mount name: /home
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-maestro-set$EXEEXT ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-maestro-set ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) I am not the main thread, as expected
> [ 0.000000] (2:receiver@Jupiter) I am not the main thread, as expected
> [ 0.000000] (1:sender@Tremblay) I am the main thread, as expected
--- /dev/null
+/* Copyright (c) 2010-2019. The SimGrid Team. All rights reserved. */
+
+/* This program is free software; you can redistribute it and/or modify it
+ * under the terms of the license (GNU LGPL) which comes with this package. */
+
+/******************** Non-deterministic message ordering *********************/
+/* Server assumes a fixed order in the reception of messages from its clients */
+/* which is incorrect because the message ordering is non-deterministic */
+/******************************************************************************/
+
+#include <simgrid/modelchecker.h>
+#include <simgrid/s4u.hpp>
+
+XBT_LOG_NEW_DEFAULT_CATEGORY(mc_assert_example, "Logging channel used in this example");
+
+static int server(int worker_amount)
+{
+ int value_got = -1;
+ simgrid::s4u::Mailbox* mb = simgrid::s4u::Mailbox::by_name("server");
+ for (int count = 0; count < worker_amount; count++) {
+ int* msg = static_cast<int*>(mb->get());
+ value_got = *msg;
+ delete msg;
+ }
+  /*
+   * We assert here that the last message we got (which overwrites any previously received message) is the one from
+   * the last worker. This will obviously fail when the messages are received out of order.
+   */
+ MC_assert(value_got == 2);
+
+ XBT_INFO("OK");
+ return 0;
+}
+
+static int client(int rank)
+{
+  /* I just send my rank onto the mailbox. It must be passed as a stable memory block (thus the new) so that the
+   * memory survives even after the end of the client */
+
+ simgrid::s4u::Mailbox* mailbox = simgrid::s4u::Mailbox::by_name("server");
+ mailbox->put(new int(rank), 1 /* communication cost is not really relevant in MC mode */);
+
+ XBT_INFO("Sent!");
+ return 0;
+}
+
+int main(int argc, char* argv[])
+{
+ simgrid::s4u::Engine e(&argc, argv);
+ xbt_assert(argc > 1, "Usage: %s platform_file\n", argv[0]);
+
+ e.load_platform(argv[1]);
+ auto hosts = e.get_all_hosts();
+ xbt_assert(hosts.size() >= 3, "This example requires at least 3 hosts");
+
+ simgrid::s4u::Actor::create("server", hosts[0], &server, 2);
+ simgrid::s4u::Actor::create("client1", hosts[1], &client, 1);
+ simgrid::s4u::Actor::create("client2", hosts[2], &client, 2);
+
+ e.run();
+ return 0;
+}
--- /dev/null
+#!/usr/bin/env tesh
+
+! expect return 1
+! timeout 20
+$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/s4u-mc-failing-assert ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=xbt_cfg.thresh:warning
+> [ 0.000000] (0:maestro@) Check a safety property. Reduction is: dpor.
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (1:server@Boivin) OK
+> [ 0.000000] (3:client2@Fafard) Sent!
+> [ 0.000000] (2:client1@Bourassa) Sent!
+> [ 0.000000] (0:maestro@) **************************
+> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
+> [ 0.000000] (0:maestro@) **************************
+> [ 0.000000] (0:maestro@) Counter-example execution trace:
+> [ 0.000000] (0:maestro@) [(1)Boivin (server)] iRecv(dst=(1)Boivin (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(3)Fafard (client2)] iSend(src=(3)Fafard (client2), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)Boivin (server)] Wait(comm=(verbose only) [(3)Fafard (client2)-> (1)Boivin (server)])
+> [ 0.000000] (0:maestro@) [(1)Boivin (server)] iRecv(dst=(1)Boivin (server), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(2)Bourassa (client1)] iSend(src=(2)Bourassa (client1), buff=(verbose only), size=(verbose only))
+> [ 0.000000] (0:maestro@) [(1)Boivin (server)] Wait(comm=(verbose only) [(2)Bourassa (client1)-> (1)Boivin (server)])
+> [ 0.000000] (0:maestro@) Path = 1;3;1;1;2;1
+> [ 0.000000] (0:maestro@) Expanded states = 18
+> [ 0.000000] (0:maestro@) Visited states = 36
+> [ 0.000000] (0:maestro@) Executed transitions = 32
p Testing a simple master/worker example application handling failures TCP crosstraffic DISABLED
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-platform-failures$EXEEXT --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir:=.}/s4u-platform-failures_d.xml --cfg=path:${srcdir} --cfg=network/crosstraffic:0 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
+$ ${bindir:=.}/s4u-platform-failures --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir:=.}/s4u-platform-failures_d.xml --cfg=path:${srcdir} --cfg=network/crosstraffic:0 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
> [ 0.000000] (0:maestro@) Cannot launch actor 'worker' on failed host 'Fafard'
> [ 0.000000] (0:maestro@) Deployment includes some initially turned off Hosts ... nevermind.
> [ 0.000000] (1:master@Tremblay) Got 5 workers and 20 tasks to process
p Testing a simple master/worker example application handling failures. TCP crosstraffic ENABLED
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-platform-failures$EXEEXT --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir:=.}/s4u-platform-failures_d.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
+$ ${bindir:=.}/s4u-platform-failures --log=xbt_cfg.thres:critical --log=no_loc ${platfdir}/small_platform_failures.xml ${srcdir:=.}/s4u-platform-failures_d.xml --cfg=path:${srcdir} "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=surf_cpu.t:verbose
> [ 0.000000] (0:maestro@) Cannot launch actor 'worker' on failed host 'Fafard'
> [ 0.000000] (0:maestro@) Deployment includes some initially turned off Hosts ... nevermind.
> [ 0.000000] (1:master@Tremblay) Got 5 workers and 20 tasks to process
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-platform-profile ${platfdir}/small_platform_profile.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-platform-profile ${platfdir}/small_platform_profile.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:watcher@Tremblay) Fafard: 25Gflops, Jupiter: 12Gflops, Link1: (1000.00MB/s 10ms), Link2: (1000.00MB/s 10ms)
> [ 1.000000] (1:watcher@Tremblay) Fafard: 25Gflops, Jupiter: 12Gflops, Link1: (1000.00MB/s 3ms), Link2: (1000.00MB/s 10ms)
> [ 2.000000] (1:watcher@Tremblay) Fafard: 25Gflops, Jupiter: 25Gflops, Link1: (2000.00MB/s 3ms), Link2: (1000.00MB/s 10ms)
p Testing a S4U application with properties in the XML for Hosts, Links and Actors
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-platform-properties$EXEEXT ${platfdir}/prop.xml s4u-platform-properties_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-platform-properties ${platfdir}/prop.xml s4u-platform-properties_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) There are 7 hosts in the environment
> [ 0.000000] (0:maestro@) Host 'host1' runs at 1000000000 flops/s
> [ 0.000000] (0:maestro@) Host 'host2' runs at 1000000000 flops/s
p This tests the HostLoad plugin (this allows the user to get the current load of a host and the computed flops)
-$ ${bindir:=.}/s4u-plugin-hostload$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-plugin-hostload ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:load_test@MyHost1) Initial peak speed: 1E+08 flop/s; number of flops computed so far: 0E+00 (should be 0) and current average load: 0.00000 (should be 0)
> [ 0.000000] (1:load_test@MyHost1) Sleep for 10 seconds
> [ 10.000000] (1:load_test@MyHost1) Done sleeping 10.00s; peak speed: 1E+08 flop/s; number of flops computed so far: 0E+00 (nothing should have changed)
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-routing-get-clusters$EXEEXT ${platfdir}/cluster_multi.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-routing-get-clusters ${platfdir}/cluster_multi.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (maestro@) simple
> [ 0.000000] (maestro@) node-0.1core.org
> [ 0.000000] (maestro@) node-1.1core.org
> [ 0.000000] (maestro@) node-6.4cores.org
> [ 0.000000] (maestro@) node-7.4cores.org
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-routing-get-clusters$EXEEXT ${platfdir}/cluster_dragonfly.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ${bindir:=.}/s4u-routing-get-clusters ${platfdir}/cluster_dragonfly.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (maestro@) bob_cluster
> [ 0.000000] (maestro@) node-0.simgrid.org
> [ 0.000000] (maestro@) node-1.simgrid.org
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-synchro-barrier 1
+$ ${bindir:=.}/s4u-synchro-barrier 1
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Spawning 0 workers
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Waiting on the barrier
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Bye
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-synchro-barrier 2
+$ ${bindir:=.}/s4u-synchro-barrier 2
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Spawning 1 workers
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] Waiting on the barrier
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Waiting on the barrier
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Bye
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] Bye
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-synchro-barrier 3
+$ ${bindir:=.}/s4u-synchro-barrier 3
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Spawning 2 workers
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] Waiting on the barrier
> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Waiting on the barrier
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] Bye
> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Bye
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-synchro-barrier 10
+$ ${bindir:=.}/s4u-synchro-barrier 10
> [Tremblay:master:(1) 0.000000] [s4u_test/INFO] Spawning 9 workers
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] Waiting on the barrier
> [Jupiter:worker:(3) 0.000000] [s4u_test/INFO] Waiting on the barrier
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-synchro-mutex
+$ ${bindir:=.}/s4u-synchro-mutex
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a lock_guard
> [Jupiter:worker:(2) 0.000000] [s4u_test/INFO] I'm done, good bye
> [Tremblay:worker:(3) 0.000000] [s4u_test/INFO] Hello s4u, I'm ready to compute after a regular lock
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-synchro-semaphore
+$ ${bindir:=.}/s4u-synchro-semaphore
> [Tremblay:producer:(1) 0.000000] [s4u_test/INFO] Pushing 'one'
> [Jupiter:consumer:(2) 0.000000] [s4u_test/INFO] Receiving 'one'
> [Tremblay:producer:(1) 0.000000] [s4u_test/INFO] Pushing 'two'
#!/usr/bin/env tesh
p Tracing platform only
-$ $SG_TEST_EXENV ${bindir:=.}/s4u-trace-platform$EXEEXT --cfg=tracing:yes --cfg=tracing/filename:trace_platform.trace --cfg=tracing/categorized:yes ${platfdir}/small_platform.xml
+$ ${bindir:=.}/s4u-trace-platform --cfg=tracing:yes --cfg=tracing/filename:trace_platform.trace --cfg=tracing/categorized:yes ${platfdir}/small_platform.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing' to 'yes'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/filename' to 'trace_platform.trace'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'tracing/categorized' to 'yes'
p already been compiled with the -trace-call-location switch.
$ ${bindir:=.}/../../../smpi_script/bin/smpirun -trace -trace-file ${bindir:=.}/smpi_trace.trace -hostfile ${srcdir:=.}/hostfile -platform ${platfdir}/small_platform.xml --cfg=smpi/trace-call-location:1 -np 3 ${bindir:=.}/smpi_trace_call_location --cfg=smpi/host-speed:1 --log=smpi_kernel.thres:warning --log=xbt_cfg.thres:warning
-$ grep --quiet "12 0.* 2 1 7 .*trace_call_location\.c\" 14$" ${bindir:=.}/smpi_trace.trace
+$ grep -q "12 0.* 2 1 7 .*trace_call_location\.c\" 14$" ${bindir:=.}/smpi_trace.trace
$ rm -f ${bindir:=.}/smpi_trace.trace
Comm() : Activity() {}
public:
+#ifndef DOXYGEN
friend XBT_PUBLIC void intrusive_ptr_release(Comm* c);
friend XBT_PUBLIC void intrusive_ptr_add_ref(Comm* c);
friend Mailbox; // Factory of comms
+#endif
virtual ~Comm();
#ifndef DOXYGEN
Exec(Exec const&) = delete;
Exec& operator=(Exec const&) = delete;
-#endif
friend ExecSeq;
friend ExecPar;
friend XBT_PUBLIC void intrusive_ptr_release(Exec* e);
friend XBT_PUBLIC void intrusive_ptr_add_ref(Exec* e);
+#endif
static xbt::signal<void(Actor const&)> on_start;
static xbt::signal<void(Actor const&)> on_completion;
explicit Io(sg_storage_t storage, sg_size_t size, OpType type);
public:
+#ifndef DOXYGEN
friend XBT_PUBLIC void intrusive_ptr_release(simgrid::s4u::Io* i);
friend XBT_PUBLIC void intrusive_ptr_add_ref(simgrid::s4u::Io* i);
friend Storage; // Factory of IOs
+#endif
~Io() = default;
# Disable some rules on some files
-sonar.issue.ignore.multicriteria=j1,j2,j3,jni1,jni2,c1,c2a,c2b,c3,c4a,c4b,c5a,c5b,c6a,c6b,c7,c8,c9,c10,f1
+sonar.issue.ignore.multicriteria=j1,j2,j3,jni1,jni2,c1,c2a,c2b,c3,c4a,c4b,c5a,c5b,c6a,c6b,c7,c8,c9,c10,f1,p1
# The Object.finalize() method should not be overridden
# But we need to clean the native memory with JNI
sonar.issue.ignore.multicriteria.f1.ruleKey=cpp:S3630
sonar.issue.ignore.multicriteria.f1.resourceKey=src/smpi/bindings/smpi_f77*.cpp
+# In Python, Using command line arguments is security-sensitive
+# But we are cautious with it
+sonar.issue.ignore.multicriteria.p1.ruleKey=python:S4823
+sonar.issue.ignore.multicriteria.p1.resourceKey=**/*.py
+
# Exclude some files from the analysis:
# - our unit tests
# - the tests that we borrowed elsewhere (MPICH and ISP)
private int coreAmount = 1;
/**
- * Create a `basic' VM : 1 core and 1GB of RAM.
+ * Create a `basic` VM : 1 core and 1GB of RAM.
* @param host Host node
* @param name name of the machine
*/
else
XBT_INFO("No core dump was generated by the system.");
XBT_INFO("Counter-example execution trace:");
- simgrid::mc::dumpRecordPath();
for (auto const& s : mc_model_checker->getChecker()->getTextualTrace())
- XBT_INFO("%s", s.c_str());
+ XBT_INFO(" %s", s.c_str());
+ simgrid::mc::dumpRecordPath();
simgrid::mc::session->logState();
XBT_INFO("Stack trace:");
mc_model_checker->process().dumpStack();
XBT_INFO("*** PROPERTY NOT VALID ***");
XBT_INFO("**************************");
XBT_INFO("Counter-example execution trace:");
- simgrid::mc::dumpRecordPath();
for (auto const& s : mc_model_checker->getChecker()->getTextualTrace())
- XBT_INFO("%s", s.c_str());
+ XBT_INFO(" %s", s.c_str());
+ simgrid::mc::dumpRecordPath();
simgrid::mc::session->logState();
}
{
simgrid::mc::Snapshot* s1 = state1->system_state.get();
simgrid::mc::Snapshot* s2 = state2->system_state.get();
- int num1 = state1->num;
- int num2 = state2->num;
- return snapshot_compare(num1, s1, num2, s2);
+ return snapshot_compare(s1, s2);
}
/** @brief Save the current state */
{
simgrid::mc::Snapshot* s1 = state1->graph_state->system_state.get();
simgrid::mc::Snapshot* s2 = state2->graph_state->system_state.get();
- int num1 = state1->num;
- int num2 = state2->num;
- return simgrid::mc::snapshot_compare(num1, s1, num2, s2);
+ return simgrid::mc::snapshot_compare(s1, s2);
}
std::shared_ptr<VisitedPair> LivenessChecker::insertAcceptancePair(simgrid::mc::Pair* pair)
XBT_INFO("*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*");
XBT_INFO("| ACCEPTANCE CYCLE |");
XBT_INFO("*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*");
- XBT_INFO("Counter-example that violates formula :");
- simgrid::mc::dumpRecordPath();
+ XBT_INFO("Counter-example that violates formula:");
for (auto const& s : this->getTextualTrace())
- XBT_INFO("%s", s.c_str());
+ XBT_INFO(" %s", s.c_str());
+ simgrid::mc::dumpRecordPath();
simgrid::mc::session->logState();
- XBT_INFO("Counter-example depth : %zu", depth);
+ XBT_INFO("Counter-example depth: %zu", depth);
}
std::vector<std::string> LivenessChecker::getTextualTrace() // override
{
simgrid::mc::Snapshot* s1 = state1->system_state.get();
simgrid::mc::Snapshot* s2 = state2->system_state.get();
- int num1 = state1->num;
- int num2 = state2->num;
- return snapshot_compare(num1, s1, num2, s2);
+ return snapshot_compare(s1, s2);
}
void SafetyChecker::checkNonTermination(simgrid::mc::State* current_state)
XBT_INFO("******************************************");
XBT_INFO("Counter-example execution trace:");
for (auto const& s : mc_model_checker->getChecker()->getTextualTrace())
- XBT_INFO("%s", s.c_str());
+ XBT_INFO(" %s", s.c_str());
+ simgrid::mc::dumpRecordPath();
simgrid::mc::session->logState();
throw simgrid::mc::TerminationError();
}
// TODO, have a robust way to find it in O(1)
-static inline
-mc_mem_region_t MC_get_heap_region(simgrid::mc::Snapshot* snapshot)
+static inline RegionSnapshot* MC_get_heap_region(Snapshot* snapshot)
{
for (auto const& region : snapshot->snapshot_regions)
if (region->region_type() == simgrid::mc::RegionType::Heap)
simgrid::mc::RemoteClient* process = &mc_model_checker->process();
/* Start comparison */
- size_t i1;
- size_t i2;
- size_t j1;
- size_t j2;
- size_t k;
- void* addr_block1;
- void* addr_block2;
- void* addr_frag1;
- void* addr_frag2;
int nb_diff1 = 0;
int nb_diff2 = 0;
- int equal;
+ bool equal;
/* Check busy blocks */
- i1 = 1;
+ size_t i1 = 1;
malloc_info heapinfo_temp1;
malloc_info heapinfo_temp2;
malloc_info heapinfo_temp2b;
- mc_mem_region_t heap_region1 = MC_get_heap_region(snapshot1);
- mc_mem_region_t heap_region2 = MC_get_heap_region(snapshot2);
+ simgrid::mc::RegionSnapshot* heap_region1 = MC_get_heap_region(snapshot1);
+ simgrid::mc::RegionSnapshot* heap_region2 = MC_get_heap_region(snapshot2);
// This is the address of std_heap->heapinfo in the application process:
void* heapinfo_address = &((xbt_mheap_t) process->heap_address)->heapinfo;
abort();
}
- addr_block1 = ((void*)(((ADDR2UINT(i1)) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase));
+ void* addr_block1 = ((void*)(((ADDR2UINT(i1)) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase));
if (heapinfo1->type == MMALLOC_TYPE_UNFRAGMENTED) { /* Large block */
if (is_stack(addr_block1)) {
- for (k = 0; k < heapinfo1->busy_block.size; k++)
+ for (size_t k = 0; k < heapinfo1->busy_block.size; k++)
state.equals_to1_(i1 + k, 0) = HeapArea(i1, -1);
- for (k = 0; k < heapinfo2->busy_block.size; k++)
+ for (size_t k = 0; k < heapinfo2->busy_block.size; k++)
state.equals_to2_(i1 + k, 0) = HeapArea(i1, -1);
i1 += heapinfo1->busy_block.size;
continue;
continue;
}
- i2 = 1;
- equal = 0;
+ size_t i2 = 1;
+ equal = false;
/* Try first to associate to same block in the other heap */
if (heapinfo2->type == heapinfo1->type && state.equals_to2_(i1, 0).valid_ == 0) {
- addr_block2 = (ADDR2UINT(i1) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
+ void* addr_block2 = (ADDR2UINT(i1) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
int res_compare = compare_heap_area(state, simgrid::mc::ProcessIndexMissing, addr_block1, addr_block2,
snapshot1, snapshot2, nullptr, nullptr, 0);
if (res_compare != 1) {
- for (k = 1; k < heapinfo2->busy_block.size; k++)
+ for (size_t k = 1; k < heapinfo2->busy_block.size; k++)
state.equals_to2_(i1 + k, 0) = HeapArea(i1, -1);
- for (k = 1; k < heapinfo1->busy_block.size; k++)
+ for (size_t k = 1; k < heapinfo1->busy_block.size; k++)
state.equals_to1_(i1 + k, 0) = HeapArea(i1, -1);
- equal = 1;
+ equal = true;
i1 += heapinfo1->busy_block.size;
}
}
while (i2 < state.heaplimit && not equal) {
- addr_block2 = (ADDR2UINT(i2) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
+ void* addr_block2 = (ADDR2UINT(i2) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
if (i2 == i1) {
i2++;
snapshot1, snapshot2, nullptr, nullptr, 0);
if (res_compare != 1) {
- for (k = 1; k < heapinfo2b->busy_block.size; k++)
+ for (size_t k = 1; k < heapinfo2b->busy_block.size; k++)
state.equals_to2_(i2 + k, 0) = HeapArea(i1, -1);
- for (k = 1; k < heapinfo1->busy_block.size; k++)
+ for (size_t k = 1; k < heapinfo1->busy_block.size; k++)
state.equals_to1_(i1 + k, 0) = HeapArea(i2, -1);
- equal = 1;
+ equal = true;
i1 += heapinfo1->busy_block.size;
}
} else { /* Fragmented block */
- for (j1 = 0; j1 < (size_t) (BLOCKSIZE >> heapinfo1->type); j1++) {
+ for (size_t j1 = 0; j1 < (size_t)(BLOCKSIZE >> heapinfo1->type); j1++) {
if (heapinfo1->busy_frag.frag_size[j1] == -1) /* Free fragment_ */
continue;
if (state.equals_to1_(i1, j1).valid_)
continue;
- addr_frag1 = (void*)((char*)addr_block1 + (j1 << heapinfo1->type));
+ void* addr_frag1 = (void*)((char*)addr_block1 + (j1 << heapinfo1->type));
- i2 = 1;
- equal = 0;
+ size_t i2 = 1;
+ equal = false;
/* Try first to associate to same fragment_ in the other heap */
if (heapinfo2->type == heapinfo1->type && not state.equals_to2_(i1, j1).valid_) {
- addr_block2 = (ADDR2UINT(i1) - 1) * BLOCKSIZE +
- (char *) state.std_heap_copy.heapbase;
- addr_frag2 =
- (void *) ((char *) addr_block2 +
- (j1 << heapinfo2->type));
+ void* addr_block2 = (ADDR2UINT(i1) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
+ void* addr_frag2 = (void*)((char*)addr_block2 + (j1 << heapinfo2->type));
int res_compare = compare_heap_area(state, simgrid::mc::ProcessIndexMissing, addr_frag1, addr_frag2,
snapshot1, snapshot2, nullptr, nullptr, 0);
if (res_compare != 1)
- equal = 1;
+ equal = true;
}
while (i2 < state.heaplimit && not equal) {
abort();
}
- for (j2 = 0; j2 < (size_t) (BLOCKSIZE >> heapinfo2b->type);
- j2++) {
+ for (size_t j2 = 0; j2 < (size_t)(BLOCKSIZE >> heapinfo2b->type); j2++) {
if (i2 == i1 && j2 == j1)
continue;
if (state.equals_to2_(i2, j2).valid_)
continue;
- addr_block2 = (ADDR2UINT(i2) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
- addr_frag2 = (void*)((char*)addr_block2 + (j2 << heapinfo2b->type));
+ void* addr_block2 = (ADDR2UINT(i2) - 1) * BLOCKSIZE + (char*)state.std_heap_copy.heapbase;
+ void* addr_frag2 = (void*)((char*)addr_block2 + (j2 << heapinfo2b->type));
int res_compare = compare_heap_area(state, simgrid::mc::ProcessIndexMissing, addr_frag1, addr_frag2,
snapshot2, snapshot2, nullptr, nullptr, 0);
if (res_compare != 1) {
- equal = 1;
+ equal = true;
break;
}
}
}
/* All blocks/fragments are equal to another block/fragment_ ? */
- size_t i = 1;
- size_t j = 0;
-
- for(i = 1; i < state.heaplimit; i++) {
+ for (size_t i = 1; i < state.heaplimit; i++) {
const malloc_info* heapinfo1 = (const malloc_info*) MC_region_read(
heap_region1, &heapinfo_temp1, &heapinfos1[i], sizeof(malloc_info));
if (heapinfo1->type <= 0)
continue;
- for (j = 0; j < (size_t) (BLOCKSIZE >> heapinfo1->type); j++)
+ for (size_t j = 0; j < (size_t)(BLOCKSIZE >> heapinfo1->type); j++)
if (i1 == state.heaplimit && heapinfo1->busy_frag.frag_size[j] > 0 && not state.equals_to1_(i, j).valid_) {
XBT_DEBUG("Block %zu, Fragment %zu not found (size used = %zd)", i, j, heapinfo1->busy_frag.frag_size[j]);
nb_diff1++;
if (i1 == state.heaplimit)
XBT_DEBUG("Number of blocks/fragments not found in heap1: %d", nb_diff1);
- for (i=1; i < state.heaplimit; i++) {
+ for (size_t i = 1; i < state.heaplimit; i++) {
const malloc_info* heapinfo2 = (const malloc_info*) MC_region_read(
heap_region2, &heapinfo_temp2, &heapinfos2[i], sizeof(malloc_info));
if (heapinfo2->type == MMALLOC_TYPE_UNFRAGMENTED && i1 == state.heaplimit && heapinfo2->busy_block.busy_size > 0 &&
if (heapinfo2->type <= 0)
continue;
- for (j = 0; j < (size_t) (BLOCKSIZE >> heapinfo2->type); j++)
+ for (size_t j = 0; j < (size_t)(BLOCKSIZE >> heapinfo2->type); j++)
if (i1 == state.heaplimit && heapinfo2->busy_frag.frag_size[j] > 0 && not state.equals_to2_(i, j).valid_) {
XBT_DEBUG("Block %zu, Fragment %zu not found (size used = %zd)",
i, j, heapinfo2->busy_frag.frag_size[j]);
int check_ignore)
{
simgrid::mc::RemoteClient* process = &mc_model_checker->process();
- mc_mem_region_t heap_region1 = MC_get_heap_region(snapshot1);
- mc_mem_region_t heap_region2 = MC_get_heap_region(snapshot2);
+ simgrid::mc::RegionSnapshot* heap_region1 = MC_get_heap_region(snapshot1);
+ simgrid::mc::RegionSnapshot* heap_region2 = MC_get_heap_region(snapshot2);
for (int i = 0; i < size; ) {
const void* addr_pointed1;
const void* addr_pointed2;
- mc_mem_region_t heap_region1 = MC_get_heap_region(snapshot1);
- mc_mem_region_t heap_region2 = MC_get_heap_region(snapshot2);
+ simgrid::mc::RegionSnapshot* heap_region1 = MC_get_heap_region(snapshot1);
+ simgrid::mc::RegionSnapshot* heap_region2 = MC_get_heap_region(snapshot2);
switch (type->type) {
case DW_TAG_unspecified_type:
}
- mc_mem_region_t heap_region1 = MC_get_heap_region(snapshot1);
- mc_mem_region_t heap_region2 = MC_get_heap_region(snapshot2);
+ simgrid::mc::RegionSnapshot* heap_region1 = MC_get_heap_region(snapshot1);
+ simgrid::mc::RegionSnapshot* heap_region2 = MC_get_heap_region(snapshot2);
const malloc_info* heapinfo1 = (const malloc_info*) MC_region_read(
heap_region1, &heapinfo_temp1, &heapinfos1[block1], sizeof(malloc_info));
/************************** Snapshot comparison *******************************/
/******************************************************************************/
-static int compare_areas_with_type(simgrid::mc::StateComparator& state,
- int process_index,
- void* real_area1, simgrid::mc::Snapshot* snapshot1, mc_mem_region_t region1,
- void* real_area2, simgrid::mc::Snapshot* snapshot2, mc_mem_region_t region2,
- simgrid::mc::Type* type, int pointer_level)
+static int compare_areas_with_type(simgrid::mc::StateComparator& state, int process_index, void* real_area1,
+ simgrid::mc::Snapshot* snapshot1, simgrid::mc::RegionSnapshot* region1,
+ void* real_area2, simgrid::mc::Snapshot* snapshot2,
+ simgrid::mc::RegionSnapshot* region2, simgrid::mc::Type* type, int pointer_level)
{
simgrid::mc::RemoteClient* process = &mc_model_checker->process();
for (simgrid::mc::Member& member : type->members) {
void* member1 = simgrid::dwarf::resolve_member(real_area1, type, &member, snapshot1, process_index);
void* member2 = simgrid::dwarf::resolve_member(real_area2, type, &member, snapshot2, process_index);
- mc_mem_region_t subregion1 = mc_get_region_hinted(member1, snapshot1, process_index, region1);
- mc_mem_region_t subregion2 = mc_get_region_hinted(member2, snapshot2, process_index, region2);
+ simgrid::mc::RegionSnapshot* subregion1 = mc_get_region_hinted(member1, snapshot1, process_index, region1);
+ simgrid::mc::RegionSnapshot* subregion2 = mc_get_region_hinted(member2, snapshot2, process_index, region2);
res = compare_areas_with_type(state, process_index, member1, snapshot1, subregion1, member2, snapshot2,
subregion2, member.type, pointer_level);
if (res == 1)
} while (true);
}
-static int compare_global_variables(
- simgrid::mc::StateComparator& state,
- simgrid::mc::ObjectInformation* object_info,
- int process_index,
- mc_mem_region_t r1, mc_mem_region_t r2,
- simgrid::mc::Snapshot* snapshot1, simgrid::mc::Snapshot* snapshot2)
+static int compare_global_variables(simgrid::mc::StateComparator& state, simgrid::mc::ObjectInformation* object_info,
+ int process_index, simgrid::mc::RegionSnapshot* r1, simgrid::mc::RegionSnapshot* r2,
+ simgrid::mc::Snapshot* snapshot1, simgrid::mc::Snapshot* snapshot2)
{
xbt_assert(r1 && r2, "Missing region.");
static std::unique_ptr<simgrid::mc::StateComparator> state_comparator;
-int snapshot_compare(int num1, simgrid::mc::Snapshot* s1, int num2, simgrid::mc::Snapshot* s2)
+int snapshot_compare(Snapshot* s1, Snapshot* s2)
{
// TODO, make this a field of ModelChecker or something similar
-
if (state_comparator == nullptr)
state_comparator.reset(new StateComparator());
else
state_comparator->clear();
- simgrid::mc::RemoteClient* process = &mc_model_checker->process();
+ RemoteClient* process = &mc_model_checker->process();
int errors = 0;
if (_sg_mc_hash) {
hash_result = (s1->hash != s2->hash);
if (hash_result) {
- XBT_VERB("(%d - %d) Different hash: 0x%" PRIx64 "--0x%" PRIx64, num1, num2, s1->hash, s2->hash);
+ XBT_VERB("(%d - %d) Different hash: 0x%" PRIx64 "--0x%" PRIx64, s1->num_state, s2->num_state, s1->hash, s2->hash);
#ifndef MC_DEBUG
return 1;
#endif
} else
- XBT_VERB("(%d - %d) Same hash: 0x%" PRIx64, num1, num2, s1->hash);
+ XBT_VERB("(%d - %d) Same hash: 0x%" PRIx64, s1->num_state, s2->num_state, s1->hash);
}
/* Compare enabled processes */
if (s1->enabled_processes != s2->enabled_processes) {
- XBT_VERB("(%d - %d) Different amount of enabled processes", num1, num2);
+ XBT_VERB("(%d - %d) Different amount of enabled processes", s1->num_state, s2->num_state);
return 1;
}
size_t size_used2 = s2->stack_sizes[i];
if (size_used1 != size_used2) {
#ifdef MC_DEBUG
- XBT_DEBUG("(%d - %d) Different size used in stacks: %zu - %zu", num1, num2, size_used1, size_used2);
+ XBT_DEBUG("(%d - %d) Different size used in stacks: %zu - %zu", s1->num_state, s2->num_state, size_used1,
+ size_used2);
errors++;
is_diff = 1;
#else
#ifdef MC_VERBOSE
- XBT_VERB("(%d - %d) Different size used in stacks: %zu - %zu", num1, num2, size_used1, size_used2);
+ XBT_VERB("(%d - %d) Different size used in stacks: %zu - %zu", s1->num_state, s2->num_state, size_used1,
+ size_used2);
#endif
return 1;
#endif
if (res_init == -1) {
#ifdef MC_DEBUG
- XBT_DEBUG("(%d - %d) Different heap information", num1, num2);
+ XBT_DEBUG("(%d - %d) Different heap information", s1->num_state, s2->num_state);
errors++;
#else
#ifdef MC_VERBOSE
- XBT_VERB("(%d - %d) Different heap information", num1, num2);
+ XBT_VERB("(%d - %d) Different heap information", s1->num_state, s2->num_state);
#endif
return 1;
if (stack1->process_index != stack2->process_index) {
diff_local = 1;
- XBT_DEBUG("(%d - %d) Stacks with different process index (%i vs %i)", num1, num2,
- stack1->process_index, stack2->process_index);
+ XBT_DEBUG("(%d - %d) Stacks with different process index (%i vs %i)", s1->num_state, s2->num_state,
+ stack1->process_index, stack2->process_index);
}
else diff_local = compare_local_variables(*state_comparator,
stack1->process_index, s1, s2, stack1, stack2);
#else
#ifdef MC_VERBOSE
- XBT_VERB("(%d - %d) Different local variables between stacks %u", num1, num2, cursor + 1);
+ XBT_VERB("(%d - %d) Different local variables between stacks %u", s1->num_state, s2->num_state, cursor + 1);
#endif
return 1;
xbt_assert(regions_count == s2->snapshot_regions.size());
for (size_t k = 0; k != regions_count; ++k) {
- mc_mem_region_t region1 = s1->snapshot_regions[k].get();
- mc_mem_region_t region2 = s2->snapshot_regions[k].get();
+ RegionSnapshot* region1 = s1->snapshot_regions[k].get();
+ RegionSnapshot* region2 = s2->snapshot_regions[k].get();
// Preconditions:
- if (region1->region_type() != simgrid::mc::RegionType::Data)
+ if (region1->region_type() != RegionType::Data)
continue;
xbt_assert(region1->region_type() == region2->region_type());
region2, s1, s2)) {
#ifdef MC_DEBUG
- XBT_DEBUG("(%d - %d) Different global variables in %s",
- num1, num2, name.c_str());
+ XBT_DEBUG("(%d - %d) Different global variables in %s", s1->num_state, s2->num_state, name.c_str());
errors++;
#else
#ifdef MC_VERBOSE
- XBT_VERB("(%d - %d) Different global variables in %s",
- num1, num2, name.c_str());
+ XBT_VERB("(%d - %d) Different global variables in %s", s1->num_state, s2->num_state, name.c_str());
#endif
return 1;
}
/* Compare heap */
- if (simgrid::mc::mmalloc_compare_heap(*state_comparator, s1, s2) > 0) {
+ if (mmalloc_compare_heap(*state_comparator, s1, s2) > 0) {
#ifdef MC_DEBUG
- XBT_DEBUG("(%d - %d) Different heap (mmalloc_compare)", num1, num2);
+ XBT_DEBUG("(%d - %d) Different heap (mmalloc_compare)", s1->num_state, s2->num_state);
errors++;
#else
#ifdef MC_VERBOSE
- XBT_VERB("(%d - %d) Different heap (mmalloc_compare)", num1, num2);
+ XBT_VERB("(%d - %d) Different heap (mmalloc_compare)", s1->num_state, s2->num_state);
#endif
return 1;
#endif
#ifdef MC_VERBOSE
if (errors || hash_result)
- XBT_VERB("(%d - %d) Difference found", num1, num2);
+ XBT_VERB("(%d - %d) Difference found", s1->num_state, s2->num_state);
else
- XBT_VERB("(%d - %d) No difference found", num1, num2);
+ XBT_VERB("(%d - %d) No difference found", s1->num_state, s2->num_state);
#endif
#if defined(MC_DEBUG) && defined(MC_VERBOSE)
// * false positive SHOULD be avoided.
// * There MUST not be any false negative.
- XBT_VERB("(%d - %d) State equality hash test is %s %s", num1, num2,
+ XBT_VERB("(%d - %d) State equality hash test is %s %s", s1->num_state, s2->num_state,
(hash_result != 0) == (errors != 0) ? "true" : "false", not hash_result ? "positive" : "negative");
}
#endif
#include "src/mc/mc_ignore.hpp"
#include "src/mc/mc_private.hpp"
#include "src/mc/mc_record.hpp"
+#include "src/mc/mc_replay.hpp"
#include "src/mc/remote/Client.hpp"
#include "src/mc/remote/mc_protocol.h"
void MC_assert(int prop)
{
xbt_assert(mc_model_checker == nullptr);
- if (MC_is_active() && not prop)
- simgrid::mc::Client::get()->reportAssertionFailure();
+ if (not prop) {
+ if (MC_is_active())
+ simgrid::mc::Client::get()->reportAssertionFailure();
+ if (MC_record_replay_is_active())
+ xbt_die("MC assertion failed");
+ }
}
void MC_cut()
int _sg_do_model_check = 0;
int _sg_mc_max_visited_states = 0;
-simgrid::config::Flag<bool> _sg_do_model_check_record{"model-check/record", "Record the model-checking paths", false};
-
simgrid::config::Flag<int> _sg_mc_checkpoint{
"model-check/checkpoint", "Specify the amount of steps between checkpoints during stateful model-checking "
"(default: 0 => stateless verification). If value=1, one checkpoint is saved for each "
simgrid::config::Flag<bool> _sg_mc_sparse_checkpoint{"model-check/sparse-checkpoint", "Use sparse per-page snapshots.",
false, [](bool) { _mc_cfg_cb_check("checkpointing value"); }};
-simgrid::config::Flag<bool> _sg_mc_ksm{"model-check/ksm", "Kernel same-page merging", false,
- [](bool) { _mc_cfg_cb_check("KSM value"); }};
-
simgrid::config::Flag<std::string> _sg_mc_property_file{
"model-check/property", "Name of the file containing the property, as formatted by the ltl2ba program.", "",
[](const std::string&) { _mc_cfg_cb_check("property"); }};
/********************************** Configuration of MC **************************************/
extern "C" XBT_PUBLIC int _sg_do_model_check;
extern XBT_PUBLIC simgrid::config::Flag<std::string> _sg_mc_record_path;
-extern XBT_PRIVATE simgrid::config::Flag<bool> _sg_do_model_check_record;
extern XBT_PRIVATE simgrid::config::Flag<int> _sg_mc_checkpoint;
extern XBT_PUBLIC simgrid::config::Flag<bool> _sg_mc_sparse_checkpoint;
-extern XBT_PUBLIC simgrid::config::Flag<bool> _sg_mc_ksm;
extern XBT_PUBLIC simgrid::config::Flag<std::string> _sg_mc_property_file;
extern XBT_PUBLIC simgrid::config::Flag<bool> _sg_mc_comms_determinism;
extern XBT_PUBLIC simgrid::config::Flag<bool> _sg_mc_send_determinism;
XBT_INFO("**************************");
XBT_INFO("Counter-example execution trace:");
for (auto const& s : mc_model_checker->getChecker()->getTextualTrace())
- XBT_INFO("%s", s.c_str());
+ XBT_INFO(" %s", s.c_str());
+ simgrid::mc::dumpRecordPath();
simgrid::mc::session->logState();
}
#define XBT_ALWAYS_INLINE inline __attribute__((always_inline))
#endif
-/** Cache the size of a memory page for the current system. */
+/** Size of a memory page for the current system. */
extern "C" int xbt_pagesize;
-/** Cache the number of bits of addresses inside a given page, log2(xbt_pagesize). */
+/** Number of bits of addresses inside a given page, log2(xbt_pagesize). */
extern "C" int xbt_pagebits;
namespace simgrid {
// TODO, do not depend on xbt_pagesize/xbt_pagebits but our own chunk size
namespace mmu {
-static int chunkSize()
+static int chunk_size()
{
return xbt_pagesize;
}
* @param size Byte size
* @return Number of memory pages
*/
-static XBT_ALWAYS_INLINE std::size_t chunkCount(std::size_t size)
+static XBT_ALWAYS_INLINE std::size_t chunk_count(std::size_t size)
{
size_t page_count = size >> xbt_pagebits;
if (size & (xbt_pagesize - 1))
return {offset >> xbt_pagebits, offset & (xbt_pagesize - 1)};
}
-/** Merge chunk number and remaining offset info a global offset */
+/** Merge chunk number and remaining offset into a global offset */
static XBT_ALWAYS_INLINE std::uintptr_t join(std::size_t page, std::uintptr_t offset)
{
return ((std::uintptr_t)page << xbt_pagebits) + offset;
return join(value.first, value.second);
}
-static XBT_ALWAYS_INLINE bool sameChunk(std::uintptr_t a, std::uintptr_t b)
+static XBT_ALWAYS_INLINE bool same_chunk(std::uintptr_t a, std::uintptr_t b)
{
return (a >> xbt_pagebits) == (b >> xbt_pagebits);
}
simgrid::mc::ObjectInformation* result);
XBT_PRIVATE
-int snapshot_compare(int num1, simgrid::mc::Snapshot* s1, int num2, simgrid::mc::Snapshot* s2);
+int snapshot_compare(Snapshot* s1, Snapshot* s2);
// Move is somewhere else (in the LivenessChecker class, in the Session class?):
extern XBT_PRIVATE xbt_automaton_t property_automaton;
#include "src/mc/mc_state.hpp"
#endif
-XBT_LOG_NEW_DEFAULT_SUBCATEGORY(mc_record, mc,
- " Logging specific to MC record/replay facility");
+XBT_LOG_NEW_DEFAULT_SUBCATEGORY(mc_record, mc, "Logging specific to MC record/replay facility");
namespace simgrid {
namespace mc {
void dumpRecordPath()
{
- if (MC_record_is_active()) {
- RecordTrace trace = mc_model_checker->getChecker()->getRecordTrace();
- XBT_INFO("Path = %s", traceToString(trace).c_str());
- }
+ RecordTrace trace = mc_model_checker->getChecker()->getRecordTrace();
+ XBT_INFO("Path = %s", traceToString(trace).c_str());
}
#endif
/** \file mc_record.hpp
*
* This file contains the MC replay/record functionnality.
- * A MC path may be recorded by using ``-cfg=model-check/record:1`'`.
- * The path is written in the log output and an be replayed with MC disabled
- * (even with an non-MC build) with `--cfg=model-check/replay:$replayPath`.
+ * The recorded path is written in the log output and can be replayed with MC disabled
+ * (even with a non-MC build) using `--cfg=model-check/replay:$replayPath`.
*
* The same version of Simgrid should be used and the same arguments should be
* passed to the application (without the MC specific arguments).
}
}
-/** Whether the MC record mode is enabled
- *
- * The behaviour is not changed. The only real difference is that
- * the path is writtent in the log when an interesting path is found.
- */
-#define MC_record_is_active() _sg_do_model_check_record
-
// **** Data conversion
#endif
if (mc_model_checker == nullptr)
return smpi_process_count();
int res;
- mc_model_checker->process().read_variable("process_count",
- &res, sizeof(res));
+ mc_model_checker->process().read_variable("process_count", &res, sizeof(res));
return res;
}
#endif
namespace simgrid {
namespace mc {
-static inline const char* to_cstr(RegionType region)
-{
- switch (region) {
- case RegionType::Unknown:
- return "unknown";
- case RegionType::Heap:
- return "Heap";
- case RegionType::Data:
- return "Data";
- default:
- return "?";
- }
-}
-
-Buffer::Buffer(std::size_t size, Type type) : size_(size), type_(type)
-{
- switch (type_) {
- case Type::Malloc:
- data_ = ::operator new(size_);
- break;
- case Type::Mmap:
- data_ = ::mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
- if (data_ == MAP_FAILED) {
- data_ = nullptr;
- size_ = 0;
- type_ = Type::Malloc;
- throw std::bad_alloc();
- }
- break;
- default:
- abort();
- }
-}
-
-void Buffer::clear() noexcept
-{
- switch (type_) {
- case Type::Malloc:
- ::operator delete(data_);
- break;
- case Type::Mmap:
- if (munmap(data_, size_) != 0)
- abort();
- break;
- default:
- abort();
- }
- data_ = nullptr;
- size_ = 0;
- type_ = Type::Malloc;
-}
-
RegionSnapshot dense_region(RegionType region_type, void* start_addr, void* permanent_addr, size_t size)
{
- // When KSM support is enables, we allocate memory using mmap:
- // * we don't want to advise bits of the heap as mergable
- // * mmap gives data aligned on page boundaries which is merge friendly
- simgrid::mc::Buffer data;
- if (_sg_mc_ksm)
- data = Buffer::mmap(size);
- else
- data = Buffer::malloc(size);
+ simgrid::mc::Buffer data = Buffer::malloc(size);
mc_model_checker->process().read_bytes(data.get(), size, remote(permanent_addr), simgrid::mc::ProcessIndexDisabled);
-#ifdef __linux__
- if (_sg_mc_ksm)
- // Mark the region as mergeable *after* we have written into it.
- // Trying to merge them before is useless/counterproductive.
- madvise(data.get(), size, MADV_MERGEABLE);
-#endif
-
simgrid::mc::RegionSnapshot region(region_type, start_addr, permanent_addr, size);
region.flat_data(std::move(data));
- XBT_DEBUG("New region : type : %s, data : %p (real addr %p), size : %zu", to_cstr(region_type),
+ XBT_DEBUG("New region : type : %s, data : %p (real addr %p), size : %zu",
+ (region_type == RegionType::Heap ? "Heap" : (region_type == RegionType::Data ? "Data" : "?")),
region.flat_data().get(), permanent_addr, size);
return region;
}
simgrid::mc::RemoteClient* process = &mc_model_checker->process();
assert(process != nullptr);
- xbt_assert((((uintptr_t)start_addr) & (xbt_pagesize - 1)) == 0, "Not at the beginning of a page");
- xbt_assert((((uintptr_t)permanent_addr) & (xbt_pagesize - 1)) == 0, "Not at the beginning of a page");
- size_t page_count = simgrid::mc::mmu::chunkCount(size);
+ xbt_assert((((uintptr_t)start_addr) & (xbt_pagesize - 1)) == 0, "Start address not at the beginning of a page");
+ xbt_assert((((uintptr_t)permanent_addr) & (xbt_pagesize - 1)) == 0,
+ "Permanent address not at the beginning of a page");
+ size_t page_count = simgrid::mc::mmu::chunk_count(size);
simgrid::mc::ChunkedData page_data(mc_model_checker->page_store(), *process, RemotePtr<void>(permanent_addr),
page_count);
class Buffer {
private:
- enum class Type { Malloc, Mmap };
void* data_ = nullptr;
std::size_t size_;
- Type type_ = Type::Malloc;
- Buffer(std::size_t size, Type type = Type::Malloc);
- Buffer(void* data, std::size_t size, Type type = Type::Malloc) : data_(data), size_(size), type_(type) {}
+ Buffer(std::size_t size) : size_(size) { data_ = ::operator new(size_); }
+
+ Buffer(void* data, std::size_t size) : data_(data), size_(size) {}
public:
Buffer() = default;
- void clear() noexcept;
+ void clear() noexcept
+ {
+ ::operator delete(data_);
+ data_ = nullptr;
+ size_ = 0;
+ }
+
~Buffer() noexcept { clear(); }
- static Buffer malloc(std::size_t size) { return Buffer(size, Type::Malloc); }
- static Buffer mmap(std::size_t size) { return Buffer(size, Type::Mmap); }
+ static Buffer malloc(std::size_t size) { return Buffer(size); }
// No copy
Buffer(Buffer const& buffer) = delete;
Buffer& operator=(Buffer const& buffer) = delete;
// Move
- Buffer(Buffer&& that) noexcept : data_(that.data_), size_(that.size_), type_(that.type_)
+ Buffer(Buffer&& that) noexcept : data_(that.data_), size_(that.size_)
{
that.data_ = nullptr;
that.size_ = 0;
- that.type_ = Type::Malloc;
}
Buffer& operator=(Buffer&& that) noexcept
{
clear();
data_ = that.data_;
size_ = that.size_;
- type_ = that.type_;
that.data_ = nullptr;
that.size_ = 0;
- that.type_ = Type::Malloc;
return *this;
}
} // namespace mc
} // namespace simgrid
-typedef simgrid::mc::RegionSnapshot s_mc_mem_region_t;
-typedef s_mc_mem_region_t* mc_mem_region_t;
#endif
*
* @param region Target region
*/
-static void restore(mc_mem_region_t region)
+static void restore(RegionSnapshot* region)
{
switch (region->storage_type()) {
case simgrid::mc::StorageType::Flat:
break;
case simgrid::mc::StorageType::Chunked:
- mc_region_restore_sparse(&mc_model_checker->process(), region);
+ xbt_assert(((region->permanent_address().address()) & (xbt_pagesize - 1)) == 0, "Not at the beginning of a page");
+ xbt_assert(simgrid::mc::mmu::chunk_count(region->size()) == region->page_data().page_count());
+
+ for (size_t i = 0; i != region->page_data().page_count(); ++i) {
+ void* target_page =
+ (void*)simgrid::mc::mmu::join(i, (std::uintptr_t)(void*)region->permanent_address().address());
+ const void* source_page = region->page_data().page(i);
+ mc_model_checker->process().write_bytes(source_page, xbt_pagesize, remote(target_page));
+ }
+
break;
case simgrid::mc::StorageType::Privatized:
mc_model_checker->process().read_bytes(&privatization_regions, sizeof(privatization_regions),
remote(remote_smpi_privatization_regions));
- std::vector<simgrid::mc::RegionSnapshot> data;
+ std::vector<RegionSnapshot> data;
data.reserve(process_count);
for (size_t i = 0; i < process_count; i++)
- data.push_back(simgrid::mc::region(region_type, start_addr, privatization_regions[i].address, size));
+ data.push_back(region(region_type, start_addr, privatization_regions[i].address, size));
- simgrid::mc::RegionSnapshot region = simgrid::mc::RegionSnapshot(region_type, start_addr, permanent_addr, size);
+ RegionSnapshot region = RegionSnapshot(region_type, start_addr, permanent_addr, size);
region.privatized_data(std::move(data));
return region;
}
#endif
-static void add_region(int index, simgrid::mc::Snapshot* snapshot, simgrid::mc::RegionType type,
+static void add_region(simgrid::mc::Snapshot* snapshot, simgrid::mc::RegionType type,
simgrid::mc::ObjectInformation* object_info, void* start_addr, void* permanent_addr,
std::size_t size)
{
region = simgrid::mc::region(type, start_addr, permanent_addr, size);
region.object_info(object_info);
- snapshot->snapshot_regions[index] =
- std::unique_ptr<simgrid::mc::RegionSnapshot>(new simgrid::mc::RegionSnapshot(std::move(region)));
+ snapshot->snapshot_regions.push_back(
+ std::unique_ptr<simgrid::mc::RegionSnapshot>(new simgrid::mc::RegionSnapshot(std::move(region))));
}
static void get_memory_regions(simgrid::mc::RemoteClient* process, simgrid::mc::Snapshot* snapshot)
{
- const size_t n = process->object_infos.size();
- snapshot->snapshot_regions.resize(n + 1);
- int i = 0;
- for (auto const& object_info : process->object_infos) {
- add_region(i, snapshot, simgrid::mc::RegionType::Data, object_info.get(), object_info->start_rw,
- object_info->start_rw, object_info->end_rw - object_info->start_rw);
- ++i;
- }
+ snapshot->snapshot_regions.clear();
+
+ for (auto const& object_info : process->object_infos)
+ add_region(snapshot, simgrid::mc::RegionType::Data, object_info.get(), object_info->start_rw, object_info->start_rw,
+ object_info->end_rw - object_info->start_rw);
xbt_mheap_t heap = process->get_heap();
void* start_heap = heap->base;
void* end_heap = heap->breakval;
- add_region(n, snapshot, simgrid::mc::RegionType::Heap, nullptr, start_heap, start_heap,
+ add_region(snapshot, simgrid::mc::RegionType::Heap, nullptr, start_heap, start_heap,
(char*)end_heap - (char*)start_heap);
snapshot->heap_bytes_used = mmalloc_get_bytes_used_remote(heap->heaplimit, process->get_malloc_info());
snapshot->stacks = take_snapshot_stacks(snapshot.get());
if (_sg_mc_hash)
snapshot->hash = simgrid::mc::hash(*snapshot);
- else
- snapshot->hash = 0;
- } else
- snapshot->hash = 0;
+ }
snapshot_ignore_restore(snapshot.get());
return snapshot;
static inline void restore_snapshot_regions(simgrid::mc::Snapshot* snapshot)
{
- for (std::unique_ptr<s_mc_mem_region_t> const& region : snapshot->snapshot_regions) {
+ for (std::unique_ptr<simgrid::mc::RegionSnapshot> const& region : snapshot->snapshot_regions) {
// For privatized, variables we decided it was not necessary to take the snapshot:
if (region)
restore(region.get());
+++ /dev/null
-/* Copyright (c) 2014-2019. The SimGrid Team. All rights reserved. */
-
-/* This program is free software; you can redistribute it and/or modify it
- * under the terms of the license (GNU LGPL) which comes with this package. */
-
-/* MC interface: definitions that non-MC modules must see, but not the user */
-
-#include <unistd.h> // pread, pwrite
-
-#include "src/mc/mc_mmu.hpp"
-#include "src/mc/mc_private.hpp"
-#include "src/mc/sosp/PageStore.hpp"
-#include "src/mc/sosp/mc_snapshot.hpp"
-
-#include "src/mc/sosp/ChunkedData.hpp"
-#include <xbt/mmalloc.h>
-
-using simgrid::mc::remote;
-
-/** @brief Restore a snapshot of a region
- *
- * If possible, the restoration will be incremental
- * (the modified pages will not be touched).
- *
- * @param start_addr
- * @param page_count Number of pages of the region
- * @param pagenos
- */
-void mc_restore_page_snapshot_region(simgrid::mc::RemoteClient* process, void* start_addr,
- simgrid::mc::ChunkedData const& pages_copy)
-{
- for (size_t i = 0; i != pages_copy.page_count(); ++i) {
- // Otherwise, copy the page:
- void* target_page = (void*)simgrid::mc::mmu::join(i, (std::uintptr_t)start_addr);
- const void* source_page = pages_copy.page(i);
- process->write_bytes(source_page, xbt_pagesize, remote(target_page));
- }
-}
-
-// ***** High level API
-
-void mc_region_restore_sparse(simgrid::mc::RemoteClient* process, mc_mem_region_t reg)
-{
- xbt_assert(((reg->permanent_address().address()) & (xbt_pagesize - 1)) == 0, "Not at the beginning of a page");
- xbt_assert(simgrid::mc::mmu::chunkCount(reg->size()) == reg->page_data().page_count());
- mc_restore_page_snapshot_region(process, (void*)reg->permanent_address().address(), reg->page_data());
-}
* @param snapshot Snapshot
* @param process_index rank requesting the region
* */
-mc_mem_region_t mc_get_snapshot_region(const void* addr, const simgrid::mc::Snapshot* snapshot, int process_index)
+simgrid::mc::RegionSnapshot* mc_get_snapshot_region(const void* addr, const simgrid::mc::Snapshot* snapshot,
+ int process_index)
{
size_t n = snapshot->snapshot_regions.size();
for (size_t i = 0; i != n; ++i) {
- mc_mem_region_t region = snapshot->snapshot_regions[i].get();
+ simgrid::mc::RegionSnapshot* region = snapshot->snapshot_regions[i].get();
if (not(region && region->contain(simgrid::mc::remote(addr))))
continue;
* @param size Size of the data to read in bytes
* @return Pointer where the data is located (target buffer of original location)
*/
-const void* MC_region_read_fragmented(mc_mem_region_t region, void* target, const void* addr, size_t size)
+const void* MC_region_read_fragmented(simgrid::mc::RegionSnapshot* region, void* target, const void* addr, size_t size)
{
// Last byte of the memory area:
void* end = (char*)addr + size - 1;
* @param region2 Region of the address in the second snapshot
* @return same semantic as memcmp
*/
-int MC_snapshot_region_memcmp(const void* addr1, mc_mem_region_t region1, const void* addr2, mc_mem_region_t region2,
- size_t size)
+int MC_snapshot_region_memcmp(const void* addr1, simgrid::mc::RegionSnapshot* region1, const void* addr2,
+ simgrid::mc::RegionSnapshot* region2, size_t size)
{
// Using alloca() for large allocations may trigger stack overflow:
// use malloc if the buffer is too big.
const void* Snapshot::read_bytes(void* buffer, std::size_t size, RemotePtr<void> address, int process_index,
ReadOptions options) const
{
- mc_mem_region_t region = mc_get_snapshot_region((void*)address.address(), this, process_index);
+ RegionSnapshot* region = mc_get_snapshot_region((void*)address.address(), this, process_index);
if (region) {
const void* res = MC_region_read(region, buffer, (void*)address.address(), size);
if (buffer == res || options & ReadOptions::lazy())
// ***** Snapshot region
-XBT_PRIVATE void mc_region_restore_sparse(simgrid::mc::RemoteClient* process, mc_mem_region_t reg);
-
-static XBT_ALWAYS_INLINE void* mc_translate_address_region_chunked(uintptr_t addr, mc_mem_region_t region)
+static XBT_ALWAYS_INLINE void* mc_translate_address_region_chunked(uintptr_t addr, simgrid::mc::RegionSnapshot* region)
{
auto split = simgrid::mc::mmu::split(addr - region->start().address());
auto pageno = split.first;
return (char*)snapshot_page + offset;
}
-static XBT_ALWAYS_INLINE void* mc_translate_address_region(uintptr_t addr, mc_mem_region_t region, int process_index)
+static XBT_ALWAYS_INLINE void* mc_translate_address_region(uintptr_t addr, simgrid::mc::RegionSnapshot* region,
+ int process_index)
{
switch (region->storage_type()) {
case simgrid::mc::StorageType::Flat: {
}
}
-XBT_PRIVATE mc_mem_region_t mc_get_snapshot_region(const void* addr, const simgrid::mc::Snapshot* snapshot,
- int process_index);
+XBT_PRIVATE simgrid::mc::RegionSnapshot* mc_get_snapshot_region(const void* addr, const simgrid::mc::Snapshot* snapshot,
+ int process_index);
// ***** MC Snapshot
// To be private
int num_state;
std::size_t heap_bytes_used;
- std::vector<std::unique_ptr<s_mc_mem_region_t>> snapshot_regions;
+ std::vector<std::unique_ptr<RegionSnapshot>> snapshot_regions;
std::set<pid_t> enabled_processes;
int privatization_index;
std::vector<std::size_t> stack_sizes;
std::vector<s_mc_snapshot_stack_t> stacks;
std::vector<simgrid::mc::IgnoredHeapRegion> to_ignore;
- std::uint64_t hash;
+ std::uint64_t hash = 0;
std::vector<s_mc_snapshot_ignored_data_t> ignored_data;
};
} // namespace mc
} // namespace simgrid
-static XBT_ALWAYS_INLINE mc_mem_region_t mc_get_region_hinted(void* addr, simgrid::mc::Snapshot* snapshot,
- int process_index, mc_mem_region_t region)
+static XBT_ALWAYS_INLINE simgrid::mc::RegionSnapshot* mc_get_region_hinted(void* addr, simgrid::mc::Snapshot* snapshot,
+ int process_index,
+ simgrid::mc::RegionSnapshot* region)
{
if (region->contain(simgrid::mc::remote(addr)))
return region;
namespace simgrid {
namespace mc {
-XBT_PRIVATE std::shared_ptr<simgrid::mc::Snapshot> take_snapshot(int num_state);
-XBT_PRIVATE void restore_snapshot(std::shared_ptr<simgrid::mc::Snapshot> snapshot);
+XBT_PRIVATE std::shared_ptr<Snapshot> take_snapshot(int num_state);
+XBT_PRIVATE void restore_snapshot(std::shared_ptr<Snapshot> snapshot);
} // namespace mc
} // namespace simgrid
-XBT_PRIVATE void mc_restore_page_snapshot_region(simgrid::mc::RemoteClient* process, void* start_addr,
- simgrid::mc::ChunkedData const& pagenos);
-
-const void* MC_region_read_fragmented(mc_mem_region_t region, void* target, const void* addr, std::size_t size);
+const void* MC_region_read_fragmented(simgrid::mc::RegionSnapshot* region, void* target, const void* addr,
+ std::size_t size);
-int MC_snapshot_region_memcmp(const void* addr1, mc_mem_region_t region1, const void* addr2, mc_mem_region_t region2,
- std::size_t size);
+int MC_snapshot_region_memcmp(const void* addr1, simgrid::mc::RegionSnapshot* region1, const void* addr2,
+ simgrid::mc::RegionSnapshot* region2, std::size_t size);
static XBT_ALWAYS_INLINE const void* mc_snapshot_get_heap_end(simgrid::mc::Snapshot* snapshot)
{
* @param size Size of the data to read in bytes
* @return Pointer where the data is located (target buffer of original location)
*/
-static XBT_ALWAYS_INLINE const void* MC_region_read(mc_mem_region_t region, void* target, const void* addr,
+static XBT_ALWAYS_INLINE const void* MC_region_read(simgrid::mc::RegionSnapshot* region, void* target, const void* addr,
std::size_t size)
{
xbt_assert(region);
case simgrid::mc::StorageType::Chunked: {
// Last byte of the region:
void* end = (char*)addr + size - 1;
- if (simgrid::mc::mmu::sameChunk((std::uintptr_t)addr, (std::uintptr_t)end)) {
+ if (simgrid::mc::mmu::same_chunk((std::uintptr_t)addr, (std::uintptr_t)end)) {
// The memory is contained in a single page:
return mc_translate_address_region_chunked((uintptr_t)addr, region);
}
}
}
-static XBT_ALWAYS_INLINE void* MC_region_read_pointer(mc_mem_region_t region, const void* addr)
+static XBT_ALWAYS_INLINE void* MC_region_read_pointer(simgrid::mc::RegionSnapshot* region, const void* addr)
{
void* res;
return *(void**)MC_region_read(region, &res, addr, sizeof(void*));
return MPI_SUCCESS;
}
-int PMPI_Error_string(int errorcode, char* string, int* resultlen){
- if (errorcode<0 || errorcode>= MPI_MAX_ERROR_STRING || string ==nullptr){
+int PMPI_Error_string(int errorcode, char* string, int* resultlen)
+{
+ static const char* smpi_error_string[] = {FOREACH_ERROR(GENERATE_STRING)};
+ constexpr int nerrors = (sizeof smpi_error_string) / (sizeof smpi_error_string[0]);
+ if (errorcode < 0 || errorcode >= nerrors || string == nullptr)
return MPI_ERR_ARG;
- } else {
- static const char *smpi_error_string[] = {
- FOREACH_ERROR(GENERATE_STRING)
- };
- *resultlen = strlen(smpi_error_string[errorcode]);
- strncpy(string, smpi_error_string[errorcode], *resultlen);
- return MPI_SUCCESS;
- }
+
+ int len = snprintf(string, MPI_MAX_ERROR_STRING, "%s", smpi_error_string[errorcode]);
+ *resultlen = std::min(len, MPI_MAX_ERROR_STRING - 1);
+ return MPI_SUCCESS;
}
int PMPI_Keyval_create(MPI_Copy_function* copy_fn, MPI_Delete_function* delete_fn, int* keyval, void* extra_state) {
static int is_2dmesh(int num, int *i, int *j)
{
int x, max = num / 2;
- x = sqrt(num);
+ x = sqrt(double(num));
while (x <= max) {
if ((num % x) == 0) {
static int alltoall_check_is_2dmesh(int num, int *i, int *j)
{
int x, max = num / 2;
- x = sqrt(num);
+ x = sqrt(double(num));
while (x <= max) {
if ((num % x) == 0) {
MPI_Comm comm)
{
int tag = -COLL_TAG_BCAST;//in order to use ANY_TAG, make this one positive
- int header_tag = 10;
+ int header_tag = -10;
MPI_Status status;
int curr_remainder;
# define MAC_OS_X_VERSION_10_12 101200
# endif
constexpr bool HAVE_WORKING_MMAP = (MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_12);
-#elif defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
+#elif defined(__FreeBSD__) || defined(__FreeBSD_kernel__) || defined(__sun)
constexpr bool HAVE_WORKING_MMAP = false;
#else
constexpr bool HAVE_WORKING_MMAP = true;
{
if (this == MPI_COMM_UNINITIALIZED)
return smpi_process()->comm_world()->split(color, key);
- int system_tag = 123;
+ int system_tag = -123;
int* recvbuf;
MPI_Group group_root = nullptr;
xbt_assert(ref, "Cannot match recv against null reference");
xbt_assert(req, "Cannot match recv against null request");
- if((ref->src_ == MPI_ANY_SOURCE || req->src_ == ref->src_)
+ if(((ref->src_ == MPI_ANY_SOURCE && (ref->comm_->group()->rank(req->src_) != MPI_UNDEFINED)) || req->src_ == ref->src_)
&& ((ref->tag_ == MPI_ANY_TAG && req->tag_ >=0) || req->tag_ == ref->tag_)){
//we match, we can transfer some values
if(ref->src_ == MPI_ANY_SOURCE)
xbt_assert(ref, "Cannot match send against null reference");
xbt_assert(req, "Cannot match send against null request");
- if((req->src_ == MPI_ANY_SOURCE || req->src_ == ref->src_)
+ if(((req->src_ == MPI_ANY_SOURCE && (req->comm_->group()->rank(ref->src_) != MPI_UNDEFINED)) || req->src_ == ref->src_)
&& ((req->tag_ == MPI_ANY_TAG && ref->tag_ >=0)|| req->tag_ == ref->tag_)){
if(req->src_ == MPI_ANY_SOURCE)
req->real_src_ = ref->src_;
try{
i = simcall_comm_testany(comms.data(), comms.size()); // The i-th element in comms matches!
} catch (const xbt_ex&) {
+ XBT_DEBUG("Exception in testany");
return 0;
}
if (requests[*index] != MPI_REQUEST_NULL && (requests[*index]->flags_ & MPI_REQ_NON_PERSISTENT))
requests[*index] = MPI_REQUEST_NULL;
+ XBT_DEBUG("Testany - returning with index %d", *index);
*flag=1;
}
nsleeps = 1;
nsleeps++;
}
} else {
+ XBT_DEBUG("Testany on inactive handles, returning flag=1 but empty status");
//all requests are null or inactive, return true
*flag = 1;
+ *index = MPI_UNDEFINED;
Status::empty(status);
}
return MPI_SUCCESS;
}
/* Allocate the array of prime factors which cannot exceed log_2(num) entries */
- int sqrtnum = ceil(sqrt(num));
- int size = ceil(log(num) / log(2));
+ int sqrtnum = ceil(sqrt(double(num)));
+ int size = ceil(log(double(num)) / log(2.0));
*factors = new int[size];
int i = 0;
# Create a temporary file, with its name of the form $1_XXX$2, where XXX is replaced by an unique string.
# $1: prefix, $2: suffix
mymktemp () {
- tmp=$(mktemp --suffix="$2" "$1_XXXXXXXXXX" 2> /dev/null)
- if [ -z "$tmp" ]; then
+ local_tmp=$(mktemp --suffix="$2" "$1_XXXXXXXXXX" 2> /dev/null)
+ if [ -z "$local_tmp" ]; then
# mktemp failed (unsupported --suffix ?), try unsafe mode
- tmp=$(mktemp -u "$1_XXXXXXXXXX" 2> /dev/null)
- if [ -z "$tmp" ]; then
+ local_tmp=$(mktemp -u "$1_XXXXXXXXXX" 2> /dev/null)
+ if [ -z "$local_tmp" ]; then
# mktemp failed again (doesn't exist ?), try very unsafe mode
if [ -z "${mymktemp_seq}" ]; then
mymktemp_seq=$(date +%d%H%M%S)
fi
- tmp="$1_$$x${mymktemp_seq}"
+ local_tmp="$1_$$x${mymktemp_seq}"
mymktemp_seq=$((mymktemp_seq + 1))
fi
- tmp="${tmp}$2"
+ local_tmp="${local_tmp}$2"
# create temp file, and exit if it existed before
- sh -C -c "true > \"${tmp}\"" || exit 1
+ sh -C -c "true > \"${local_tmp}\"" || exit 1
fi
- echo "${tmp}"
+ echo "${local_tmp}"
}
# Add a word to the end of a list (words separated by LISTSEP)
# $1: list, $2...: words to add
list_add () {
- local list content newcontent
- list="$1"
+ local_list="$1"
shift
if [ $# -gt 0 ]; then
- eval content=\"\${$list}\"
+ eval local_content=\"\${$local_list}\"
IFS="$LISTSEP"
- newcontent="$*"
+ local_newcontent="$*"
IFS="$SAVEIFS"
- if [ -z "$content" ]; then
- content="$newcontent"
+ if [ -z "$local_content" ]; then
+ local_content="$local_newcontent"
else
- content="$content${LISTSEP}$newcontent"
+ local_content="$local_content${LISTSEP}$local_newcontent"
fi
- eval $list=\"\${content}\"
+ eval $local_list=\"\${local_content}\"
fi
}
{
std::vector<const char*> storages;
for (auto const& s : storage_)
- if (s.second->getHost() == piface_->get_cname())
+ if (s.second->get_host() == piface_->get_cname())
storages.push_back(s.second->piface_.get_cname());
return storages;
}
{
StorageImpl::turn_on();
XBT_DEBUG("Create resource with Bread '%f' Bwrite '%f' and Size '%llu'", bread, bwrite, size);
- constraintRead_ = maxminSystem->constraint_new(this, bread);
- constraintWrite_ = maxminSystem->constraint_new(this, bwrite);
+ constraint_read_ = maxminSystem->constraint_new(this, bread);
+ constraint_write_ = maxminSystem->constraint_new(this, bwrite);
}
StorageImpl::~StorageImpl()
{
- xbt_assert(currentlyDestroying_, "Don't delete Storages directly. Call destroy() instead.");
+ xbt_assert(currently_destroying_, "Don't delete Storages directly. Call destroy() instead.");
}
/** @brief Fire the required callbacks and destroy the object
*/
void StorageImpl::destroy()
{
- if (not currentlyDestroying_) {
- currentlyDestroying_ = true;
+ if (not currently_destroying_) {
+ currently_destroying_ = true;
s4u::Storage::on_destruction(this->piface_);
delete this;
}
void turn_off() override;
void destroy(); // Must be called instead of the destructor
- virtual Action* io_start(sg_size_t size, s4u::Io::OpType type) = 0;
+ virtual StorageAction* io_start(sg_size_t size, s4u::Io::OpType type) = 0;
/**
* @brief Read a file
*
* @return The StorageAction corresponding to the writing
*/
virtual StorageAction* write(sg_size_t size) = 0;
- virtual std::string getHost() { return attach_; }
+ const std::string& get_host() const { return attach_; }
- lmm::Constraint* constraintWrite_; /* Constraint for maximum write bandwidth*/
- lmm::Constraint* constraintRead_; /* Constraint for maximum write bandwidth*/
+ lmm::Constraint* constraint_write_; /* Constraint for maximum write bandwidth */
+ lmm::Constraint* constraint_read_; /* Constraint for maximum read bandwidth */
std::string typeId_;
std::string content_name_; // Only used at parsing time then goes to the FileSystemExtension
sg_size_t size_; // Only used at parsing time then goes to the FileSystemExtension
private:
- bool currentlyDestroying_ = false;
+ bool currently_destroying_ = false;
// Name of the host to which this storage is attached. Only used at platform parsing time, then the interface stores
// the Host directly.
std::string attach_;
* Resource *
************/
+class CpuAction;
+
/** @ingroup SURF_cpu_interface
* @brief SURF cpu resource interface class
* @details A Cpu represent a cpu associated to a host
* @param size The value of the processing amount (in flop) needed to process
* @return The CpuAction corresponding to the processing
*/
- virtual Action* execution_start(double size) = 0;
+ virtual CpuAction* execution_start(double size) = 0;
/**
* @brief Execute some quantity of computation on more than one core
* @param requested_cores The desired amount of cores. Must be >= 1
* @return The CpuAction corresponding to the processing
*/
- virtual Action* execution_start(double size, int requested_cores) = 0;
+ virtual CpuAction* execution_start(double size, int requested_cores) = 0;
/**
* @brief Make a process sleep for duration (in seconds)
* @param duration The number of seconds to sleep
* @return The CpuAction corresponding to the sleeping
*/
- virtual Action* sleep(double duration) = 0;
+ virtual CpuAction* sleep(double duration) = 0;
/** @brief Get the amount of cores */
virtual int get_core_count();
bool is_used() override;
CpuAction* execution_start(double size) override;
- Action* execution_start(double, int) override
+ CpuAction* execution_start(double, int) override
{
THROW_UNIMPLEMENTED;
return nullptr;
}
}
-kernel::resource::Action* HostL07Model::execute_parallel(const std::vector<s4u::Host*>& host_list,
- const double* flops_amount, const double* bytes_amount,
- double rate)
+kernel::resource::CpuAction* HostL07Model::execute_parallel(const std::vector<s4u::Host*>& host_list,
+ const double* flops_amount, const double* bytes_amount,
+ double rate)
{
return new L07Action(this, host_list, flops_amount, bytes_amount, rate);
}
s4u::Link::on_creation(this->piface_);
}
-kernel::resource::Action* CpuL07::execution_start(double size)
+kernel::resource::CpuAction* CpuL07::execution_start(double size)
{
std::vector<s4u::Host*> host_list = {get_host()};
double* flops_amount = new double[host_list.size()]();
flops_amount[0] = size;
- kernel::resource::Action* res =
+ kernel::resource::CpuAction* res =
static_cast<CpuL07Model*>(get_model())->hostModel_->execute_parallel(host_list, flops_amount, nullptr, -1);
static_cast<L07Action*>(res)->free_arrays_ = true;
return res;
}
-kernel::resource::Action* CpuL07::sleep(double duration)
+kernel::resource::CpuAction* CpuL07::sleep(double duration)
{
L07Action *action = static_cast<L07Action*>(execution_start(1.0));
action->set_max_duration(duration);
double next_occuring_event(double now) override;
void update_actions_state(double now, double delta) override;
- kernel::resource::Action* execute_parallel(const std::vector<s4u::Host*>& host_list, const double* flops_amount,
- const double* bytes_amount, double rate) override;
+ kernel::resource::CpuAction* execute_parallel(const std::vector<s4u::Host*>& host_list, const double* flops_amount,
+ const double* bytes_amount, double rate) override;
};
class CpuL07Model : public kernel::resource::CpuModel {
~CpuL07() override;
bool is_used() override;
void apply_event(kernel::profile::Event* event, double value) override;
- kernel::resource::Action* execution_start(double size) override;
- kernel::resource::Action* execution_start(double, int) override
+ kernel::resource::CpuAction* execution_start(double size) override;
+ kernel::resource::CpuAction* execution_start(double, int) override
{
THROW_UNIMPLEMENTED;
return nullptr;
}
- kernel::resource::Action* sleep(double duration) override;
+ kernel::resource::CpuAction* sleep(double duration) override;
protected:
void on_speed_change() override;
* Action *
**********/
class L07Action : public kernel::resource::CpuAction {
- friend Action *CpuL07::execution_start(double size);
- friend Action *CpuL07::sleep(double duration);
- friend Action* HostL07Model::execute_parallel(const std::vector<s4u::Host*>& host_list, const double* flops_amount,
- const double* bytes_amount, double rate);
+ friend CpuAction* CpuL07::execution_start(double size);
+ friend CpuAction* CpuL07::sleep(double duration);
+ friend CpuAction* HostL07Model::execute_parallel(const std::vector<s4u::Host*>& host_list, const double* flops_amount,
+ const double* bytes_amount, double rate);
friend Action* NetworkL07Model::communicate(s4u::Host* src, s4u::Host* dst, double size, double rate);
public:
{
for (auto const& s : simgrid::s4u::Engine::get_instance()->get_all_storages()) {
simgrid::kernel::routing::NetPoint* host_elm =
- simgrid::s4u::Engine::get_instance()->netpoint_by_name_or_null(s->get_impl()->getHost());
+ simgrid::s4u::Engine::get_instance()->netpoint_by_name_or_null(s->get_impl()->get_host());
if (not host_elm)
surf_parse_error(std::string("Unable to attach storage ") + s->get_cname() + ": host " +
- s->get_impl()->getHost() + " does not exist.");
+ s->get_impl()->get_host() + " does not exist.");
else
- s->set_host(simgrid::s4u::Host::by_name(s->get_impl()->getHost()));
+ s->set_host(simgrid::s4u::Host::by_name(s->get_impl()->get_host()));
}
}
model->get_maxmin_system()->expand(storage->get_constraint(), get_variable(), 1.0);
switch(type) {
case s4u::Io::OpType::READ:
- model->get_maxmin_system()->expand(storage->constraintRead_, get_variable(), 1.0);
+ model->get_maxmin_system()->expand(storage->constraint_read_, get_variable(), 1.0);
break;
case s4u::Io::OpType::WRITE:
- model->get_maxmin_system()->expand(storage->constraintWrite_, get_variable(), 1.0);
+ model->get_maxmin_system()->expand(storage->constraint_write_, get_variable(), 1.0);
break;
default:
THROW_UNIMPLEMENTED;
double bwrite, const std::string& type_id, const std::string& content_name, sg_size_t size,
const std::string& attach);
virtual ~StorageN11() = default;
- StorageAction* io_start(sg_size_t size, s4u::Io::OpType type);
- StorageAction* read(sg_size_t size);
- StorageAction* write(sg_size_t size);
+ StorageAction* io_start(sg_size_t size, s4u::Io::OpType type) override;
+ StorageAction* read(sg_size_t size) override;
+ StorageAction* write(sg_size_t size) override;
};
/**********
#endif
#include <cinttypes>
-#include <xbt/base.h>
-#include <xbt/log.h>
-#include <xbt/sysdep.h>
#include "memory_map.hpp"
-XBT_LOG_NEW_DEFAULT_SUBCATEGORY(xbt_memory_map, xbt, "Logging specific to algorithms for memory_map");
-
namespace simgrid {
namespace xbt {
/**
* \todo This function contains many cases that do not allow for a
- * recovery. Currently, xbt_abort() is called but we should
+ * recovery. Currently, abort() is called, but we should
* rather exit with the specific reason so that it's easier
* to find out what's going on.
*/
-XBT_PRIVATE std::vector<VmMap> get_memory_map(pid_t pid)
+std::vector<VmMap> get_memory_map(pid_t pid)
{
std::vector<VmMap> ret;
#if defined __APPLE__
/* Request authorization to read mappings */
if (task_for_pid(mach_task_self(), pid, &map) != KERN_SUCCESS) {
std::perror("task_for_pid failed");
- xbt_die("Cannot request authorization for kernel information access");
+ std::fprintf(stderr, "Cannot request authorization for kernel information access\n");
+ abort();
}
/*
}
else if (kr != KERN_SUCCESS) {
std::perror("mach_vm_region failed");
- xbt_die("Cannot request authorization for kernel information access");
+ std::fprintf(stderr, "Cannot request authorization for kernel information access\n");
+ abort();
}
VmMap memreg;
if (dladdr(reinterpret_cast<void*>(address), &dlinfo))
memreg.pathname = dlinfo.dli_fname;
- XBT_DEBUG("Region: %016" PRIx64 "-%016" PRIx64 " | %c%c%c | %s", memreg.start_addr, memreg.end_addr,
+#if 0 /* debug */
+ std::fprintf(stderr, "Region: %016" PRIx64 "-%016" PRIx64 " | %c%c%c | %s\n", memreg.start_addr, memreg.end_addr,
(memreg.prot & PROT_READ) ? 'r' : '-', (memreg.prot & PROT_WRITE) ? 'w' : '-',
(memreg.prot & PROT_EXEC) ? 'x' : '-', memreg.pathname.c_str());
+#endif
ret.push_back(std::move(memreg));
address += size;
fp.open(path);
if (not fp) {
std::perror("open failed");
- xbt_die("Cannot open %s to investigate the memory map of the process.", path.c_str());
+ std::fprintf(stderr, "Cannot open %s to investigate the memory map of the process.\n", path.c_str());
+ abort();
}
/* Read one line at the time, parse it and add it to the memory map to be returned */
}
/* Check to see if we got the expected amount of columns */
- if (i < 6)
- xbt_die("The memory map apparently only supplied less than 6 columns. Recovery impossible.");
+ if (i < 6) {
+ std::fprintf(stderr, "The memory map apparently supplied fewer than 6 columns. Recovery impossible.\n");
+ abort();
+ }
/* Ok we are good enough to try to get the info we need */
/* First get the start and the end address of the map */
char* tok = strtok_r(lfields[0], "-", &saveptr);
- if (tok == nullptr)
- xbt_die("Start and end address of the map are not concatenated by a hyphen (-). Recovery impossible.");
+ if (tok == nullptr) {
+ std::fprintf(stderr,
+ "Start and end addresses of the map are not separated by a hyphen (-). Recovery impossible.\n");
+ abort();
+ }
VmMap memreg;
char *endptr;
memreg.start_addr = std::strtoull(tok, &endptr, 16);
/* Make sure that the entire string was a hex number */
if (*endptr != '\0')
- xbt_abort();
+ abort();
tok = strtok_r(nullptr, "-", &saveptr);
if (tok == nullptr)
- xbt_abort();
+ abort();
memreg.end_addr = std::strtoull(tok, &endptr, 16);
/* Make sure that the entire string was a hex number */
if (*endptr != '\0')
- xbt_abort();
+ abort();
/* Get the permissions flags */
if (std::strlen(lfields[1]) < 4)
- xbt_abort();
+ abort();
memreg.prot = 0;
for (i = 0; i < 3; i++){
} else {
memreg.flags |= MAP_SHARED;
if (lfields[1][3] != 's')
- XBT_WARN("The protection is neither 'p' (private) nor 's' (shared) but '%s'. Let's assume shared, as on b0rken "
- "win-ubuntu systems.\nFull line: %s\n",
- lfields[1], line);
+ fprintf(stderr,
+ "The protection is neither 'p' (private) nor 's' (shared) but '%s'. Let's assume shared, as on b0rken "
+ "win-ubuntu systems.\nFull line: %s\n",
+ lfields[1], line);
}
/* Get the offset value */
memreg.offset = std::strtoull(lfields[2], &endptr, 16);
/* Make sure that the entire string was a hex number */
if (*endptr != '\0')
- xbt_abort();
+ abort();
/* Get the device major:minor bytes */
tok = strtok_r(lfields[3], ":", &saveptr);
if (tok == nullptr)
- xbt_abort();
+ abort();
memreg.dev_major = (char) strtoul(tok, &endptr, 16);
/* Make sure that the entire string was a hex number */
if (*endptr != '\0')
- xbt_abort();
+ abort();
tok = strtok_r(nullptr, ":", &saveptr);
if (tok == nullptr)
- xbt_abort();
+ abort();
memreg.dev_minor = (char) std::strtoul(tok, &endptr, 16);
/* Make sure that the entire string was a hex number */
if (*endptr != '\0')
- xbt_abort();
+ abort();
/* Get the inode number and make sure that the entire string was a long int */
memreg.inode = strtoul(lfields[4], &endptr, 10);
if (*endptr != '\0')
- xbt_abort();
+ abort();
/* And finally get the pathname */
if (lfields[5])
/* Create space for a new map region in the region's array and copy the */
/* parsed stuff from the temporal memreg variable */
- XBT_DEBUG("Found region for %s", not memreg.pathname.empty() ? memreg.pathname.c_str() : "(null)");
+ // std::fprintf(stderr, "Found region for %s\n", not memreg.pathname.empty() ? memreg.pathname.c_str() : "(null)");
ret.push_back(std::move(memreg));
}
if ((prstat = procstat_open_sysctl()) == NULL) {
std::perror("procstat_open_sysctl failed");
- xbt_die("Cannot access kernel state information");
+ std::fprintf(stderr, "Cannot access kernel state information\n");
+ abort();
}
if ((proc = procstat_getprocs(prstat, KERN_PROC_PID, pid, &cnt)) == NULL) {
std::perror("procstat_open_sysctl failed");
- xbt_die("Cannot access process information");
+ std::fprintf(stderr, "Cannot access process information\n");
+ abort();
}
if ((vmentries = procstat_getvmmap(prstat, proc, &cnt)) == NULL) {
std::perror("procstat_getvmmap failed");
- xbt_die("Cannot access process memory mappings");
+ std::fprintf(stderr, "Cannot access process memory mappings\n");
+ abort();
}
for (unsigned int i = 0; i < cnt; i++) {
VmMap memreg;
procstat_freeprocs(prstat, proc);
procstat_close(prstat);
#else
- xbt_die("Could not get memory map from process %lli", (long long int) pid);
+ std::fprintf(stderr, "Could not get memory map from process %lli\n", (long long int)pid);
+ abort();
#endif
return ret;
}
#include <string>
#include <vector>
-#include <xbt/base.h>
-#include <sys/types.h>
-
namespace simgrid {
namespace xbt {
std::string pathname; /* Path name of the mapped file */
};
-XBT_PRIVATE std::vector<VmMap> get_memory_map(pid_t pid);
-
+std::vector<VmMap> get_memory_map(pid_t pid);
}
}
> [Bourassa:worker:(6) 5.133855] [msg_app_masterworker/INFO] I'm done. See you!
> [5.133855] [msg_app_masterworker/INFO] Simulation time 5.13386
-$ $SG_TEST_EXENV ${bindir:=.}/teshsuite/msg/app-token-ring/app-token-ring ${srcdir:=.}/examples/platforms/routing_cluster.lua "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/teshsuite/msg/app-token-ring/app-token-ring ${srcdir:=.}/examples/platforms/routing_cluster.lua "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Number of hosts '6'
> [ 0.000000] (1:0@host1) Host "0" send 'Token' to Host "1"
> [ 0.017354] (2:1@host2) Host "1" received "Token"
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/dwarf-expression
+$ ${bindir:=.}/dwarf-expression
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/dwarf
+$ ${bindir:=.}/dwarf
#!/usr/bin/env tesh
! expect return 1
-$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/random-bug ${srcdir:=.}/examples/platforms/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=xbt_cfg.thresh:warning --cfg=model-check/record:1
+$ ${bindir:=.}/../../../bin/simgrid-mc ${bindir:=.}/random-bug ${srcdir:=.}/examples/platforms/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --log=xbt_cfg.thresh:warning
> [ 0.000000] (0:maestro@) Check a safety property. Reduction is: dpor.
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) *** PROPERTY NOT VALID ***
> [ 0.000000] (0:maestro@) **************************
> [ 0.000000] (0:maestro@) Counter-example execution trace:
+> [ 0.000000] (0:maestro@) [(1)Tremblay (app)] MC_RANDOM(3)
+> [ 0.000000] (0:maestro@) [(1)Tremblay (app)] MC_RANDOM(4)
> [ 0.000000] (0:maestro@) Path = 1/3;1/4
-> [ 0.000000] (0:maestro@) [(1)Tremblay (app)] MC_RANDOM(3)
-> [ 0.000000] (0:maestro@) [(1)Tremblay (app)] MC_RANDOM(4)
> [ 0.000000] (0:maestro@) Expanded states = 27
> [ 0.000000] (0:maestro@) Visited states = 68
> [ 0.000000] (0:maestro@) Executed transitions = 46
! timeout 60
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/chainsend ${platfdir}/cluster_backbone.xml app-chainsend_d.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/chainsend ${platfdir}/cluster_backbone.xml app-chainsend_d.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
> [ 2.214423] (2:peer@node-1.simgrid.org) ### 2.214423 16777216 bytes (Avg 7.225359 MB/s); copy finished (simulated).
> [ 2.222796] (3:peer@node-2.simgrid.org) ### 2.222796 16777216 bytes (Avg 7.198141 MB/s); copy finished (simulated).
> [ 2.231170] (4:peer@node-3.simgrid.org) ### 2.231170 16777216 bytes (Avg 7.171126 MB/s); copy finished (simulated).
p Testing with default compound
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform.xml app-pingpong_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform.xml app-pingpong_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:pinger@Tremblay) Ping -> Jupiter
> [ 0.000000] (2:ponger@Jupiter) Pong -> Tremblay
> [ 0.019014] (2:ponger@Jupiter) Task received : small communication (latency bound)
p Testing with default compound and Full network optimization
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=network/optim:Full" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=network/optim:Full" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/optim' to 'Full'
> [ 0.000000] (1:pinger@Tremblay) Ping -> Jupiter
> [ 0.000000] (2:ponger@Jupiter) Pong -> Tremblay
p Testing the deprecated CM02 network model
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform.xml app-pingpong_d.xml --cfg=cpu/model:Cas01 --cfg=network/model:CM02 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform.xml app-pingpong_d.xml --cfg=cpu/model:Cas01 --cfg=network/model:CM02 "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'CM02'
> [ 0.000000] (1:pinger@Tremblay) Ping -> Jupiter
p Testing the surf network Reno fairness model using lagrangian approach
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Reno'
p Testing the surf network Reno2 fairness model using lagrangian approach
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno2" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Reno2" --log=surf_lagrange.thres=critical "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Reno2'
p Testing the surf network Vegas fairness model using lagrangian approach
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Vegas" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Vegas" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Vegas'
p Testing the surf network constant model
-$ $SG_TEST_EXENV ${bindir:=.}/app-pingpong$EXEEXT ${platfdir}/small_platform_constant.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Constant" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-pingpong ${platfdir}/small_platform_constant.xml app-pingpong_d.xml "--cfg=host/model:compound cpu/model:Cas01 network/model:Constant" "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'compound'
> [ 0.000000] (0:maestro@) Configuration change: Set 'cpu/model' to 'Cas01'
> [ 0.000000] (0:maestro@) Configuration change: Set 'network/model' to 'Constant'
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/app-token-ring ${platfdir}/routing_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-token-ring ${platfdir}/routing_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Number of hosts '6'
> [ 0.000000] (1:0@host1) Host "0" send 'Token' to Host "1"
> [ 0.017354] (2:1@host2) Host "1" received "Token"
> [ 0.131796] (1:0@host1) Host "0" received "Token"
> [ 0.131796] (0:maestro@) Simulation time 0.131796
-$ $SG_TEST_EXENV ${bindir:=.}/app-token-ring ${platfdir}/two_peers.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-token-ring ${platfdir}/two_peers.xml "--log=root.fmt:[%12.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Number of hosts '2'
> [ 0.000000] (1:0@100030591) Host "0" send 'Token' to Host "1"
> [ 0.624423] (2:1@100036570) Host "1" received "Token"
> [ 1.248846] (1:0@100030591) Host "0" received "Token"
> [ 1.248846] (0:maestro@) Simulation time 1.24885
-$ $SG_TEST_EXENV ${bindir:=.}/app-token-ring ${platfdir}/meta_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/app-token-ring ${platfdir}/meta_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Number of hosts '60'
> [ 0.000000] (1:0@host-1.cluster1) Host "0" send 'Token' to Host "1"
> [ 0.030364] (2:1@host-1.cluster2) Host "1" received "Token"
p Test1 MSG_comm_test() with Sleep_sender > Sleep_receiver
-$ $SG_TEST_EXENV ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) sleep_start_time : 5.000000 , sleep_test_time : 0.100000
> [ 0.000000] (2:receiver@Ruby) sleep_start_time : 1.000000 , sleep_test_time : 0.100000
> [ 1.000000] (2:receiver@Ruby) Wait to receive a task
p Test2 MSG_comm_test() with Sleep_sender < Sleep_receiver
-$ $SG_TEST_EXENV ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait2_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait2_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) sleep_start_time : 1.000000 , sleep_test_time : 0.100000
> [ 0.000000] (2:receiver@Ruby) sleep_start_time : 5.000000 , sleep_test_time : 0.100000
> [ 1.000000] (1:sender@Tremblay) Send to receiver-0 Task_0
p Test1 MSG_comm_wait() with Sleep_sender > Sleep_receiver
-$ $SG_TEST_EXENV ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait3_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait3_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) sleep_start_time : 5.000000 , sleep_test_time : 0.000000
> [ 0.000000] (2:receiver@Ruby) sleep_start_time : 1.000000 , sleep_test_time : 0.000000
> [ 1.000000] (2:receiver@Ruby) Wait to receive a task
p Test2 MSG_comm_wait() with Sleep_sender < Sleep_receiver
-$ $SG_TEST_EXENV ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait4_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/async-wait ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-wait4_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) sleep_start_time : 1.000000 , sleep_test_time : 0.000000
> [ 0.000000] (2:receiver@Ruby) sleep_start_time : 5.000000 , sleep_test_time : 0.000000
> [ 1.000000] (1:sender@Tremblay) Send to receiver-0 Task_0
p Test1 MSG_comm_waitall() for sender
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/async-waitall ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-waitall_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/async-waitall ${platfdir:=.}/small_platform_fatpipe.xml ${srcdir:=.}/async-waitall_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) Send to receiver-0 Task_0
> [ 0.000000] (1:sender@Tremblay) Send to receiver-0 Task_1
> [ 0.000000] (1:sender@Tremblay) Send to receiver-0 Task_2
p Testing the MSG_comm_waitany function
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/async-waitany ${platfdir:=.}/small_platform.xml ${srcdir:=.}/async-waitany_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/async-waitany ${platfdir:=.}/small_platform.xml ${srcdir:=.}/async-waitany_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sender@Tremblay) Send to receiver-0 Task_0 comm_size 1000000.000000
> [ 0.000000] (1:sender@Tremblay) Send to receiver-1 Task_1 comm_size 1000000.000000
> [ 0.000000] (1:sender@Tremblay) Send to receiver-0 Task_2 comm_size 1000000.000000
-$ $SG_TEST_EXENV ${bindir:=.}/cloud-capping ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/cloud-capping ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master_@Fafard) # 1. Put a single task on a PM.
> [ 0.000000] (1:master_@Fafard) ### Test: with/without MSG_task_set_bound
> [ 0.000000] (1:master_@Fafard) ### Test: no bound for Task1@Fafard
-$ $SG_TEST_EXENV ${bindir:=.}/cloud-migration ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/cloud-migration ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master_@Fafard) Test: Migrate a VM with 1000 Mbytes RAM
> [132.765801] (1:master_@Fafard) VM0 migrated: Fafard->Tremblay in 132.766 s
> [132.765801] (1:master_@Fafard) Test: Migrate a VM with 100 Mbytes RAM
p Testing a vm with two successive tasks
-$ $SG_TEST_EXENV ${bindir:=.}/cloud-simple$EXEEXT --log=no_loc ${platfdir}/small_platform.xml
+$ ${bindir:=.}/cloud-simple --log=no_loc ${platfdir}/small_platform.xml
> [Fafard:master_:(1) 0.000000] [msg_test/INFO] ## Test 1 (started): check computation on normal PMs
> [Fafard:master_:(1) 0.000000] [msg_test/INFO] ### Put a task on a PM
> [Fafard:compute:(2) 0.013107] [msg_test/INFO] Fafard:compute task executed 0.0131068
p Testing a vm with two successive tasks
-$ $SG_TEST_EXENV ${bindir:=.}/cloud-two-tasks$EXEEXT ${platfdir}/small_platform.xml
+$ ${bindir:=.}/cloud-two-tasks ${platfdir}/small_platform.xml
> [VM0:compute:(2) 0.000000] [msg_test/INFO] VM0:compute task 1 created 0
> [Fafard:master_:(1) 0.000000] [msg_test/INFO] aTask remaining duration: 1e+09
> [Fafard:master_:(1) 1.000000] [msg_test/INFO] aTask remaining duration: 9.23704e+08
p Testing the mechanism for computing host energy consumption
-$ ${bindir:=.}/energy-consumption$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/energy-consumption ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs_test@MyHost1) Energetic profile: 100.0:120.0:200.0, 93.0:110.0:170.0, 90.0:105.0:150.0
> [ 0.000000] (1:dvfs_test@MyHost1) Initial peak speed=1E+08 flop/s; Energy dissipated =0E+00 J
> [ 0.000000] (1:dvfs_test@MyHost1) Sleep for 10 seconds
> [ 30.000000] (0:maestro@) Energy consumption of host MyHost2: 2100.000000 Joules
> [ 30.000000] (0:maestro@) Energy consumption of host MyHost3: 3000.000000 Joules
-$ ${bindir:=.}/energy-consumption$EXEEXT ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
+$ ${bindir:=.}/energy-consumption ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'ptask_L07'
> [ 0.000000] (0:maestro@) Switching to the L07 model to handle parallel tasks.
> [ 0.000000] (1:dvfs_test@MyHost1) Energetic profile: 100.0:120.0:200.0, 93.0:110.0:170.0, 90.0:105.0:150.0
p Testing the DVFS-related functions
-$ ${bindir:=.}/energy-pstate$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/energy-pstate ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dvfs_test@MyHost1) Count of Processor states=3
> [ 0.000000] (1:dvfs_test@MyHost1) Current power peak=100000000.000000
> [ 0.000000] (2:dvfs_test@MyHost2) Count of Processor states=3
> [ 6.000000] (2:dvfs_test@MyHost2) Current power peak=20000000.000000
> [ 6.000000] (0:maestro@) Total simulation time: 6.000000e+00
-$ ${bindir:=.}/energy-pstate$EXEEXT ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
+$ ${bindir:=.}/energy-pstate ${platfdir}/energy_cluster.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n" --cfg=host/model:ptask_L07
> [ 0.000000] (0:maestro@) Configuration change: Set 'host/model' to 'ptask_L07'
> [ 0.000000] (0:maestro@) Switching to the L07 model to handle parallel tasks.
> [ 0.000000] (1:dvfs_test@MyHost1) Count of Processor states=3
#!/usr/bin/env tesh
-$ ${bindir:=.}/energy-ptask$EXEEXT ${platfdir:=.}/energy_platform.xml --energy "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/energy-ptask ${platfdir:=.}/energy_platform.xml --energy "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) Switching to the L07 model to handle parallel tasks.
> [ 0.000000] (1:test@MyHost1) First, build a classical parallel task, with 1 Gflop to execute on each node, and 10MB to exchange between each pair
> [300.000000] (1:test@MyHost1) We can do the same with a timeout of one second enabled.
test = 6;
if (xbt_dynar_search_or_negative(tests, &test) != -1) {
XBT_INFO("Test 6: Turn on Jupiter, assign a VM on Jupiter, launch a process inside the VM, and turn off the node");
- MSG_process_set_data_cleanup(nullptr); // If set for test 6, cleanup handler gives double-free errors.
// Create VM0
msg_vm_t vm0 = MSG_vm_create_core(jupiter, "vm0");
> [13.000000] [msg_test/INFO] Simulation time 13
! expect signal SIGIOT
-$ ${bindir}/host_on_off_processes ${platfdir}/small_platform.xml 2 --log=no_loc
+$ $VALGRIND_NO_LEAK_CHECK ${bindir}/host_on_off_processes ${platfdir}/small_platform.xml 2 --log=no_loc
> [Tremblay:test_launcher:(1) 0.000000] [msg_test/INFO] Test 2:
> [Tremblay:test_launcher:(1) 0.000000] [msg_test/INFO] Turn off Jupiter
> [0.000000] [simix_process/WARNING] Cannot launch actor 'process_daemon' on failed host 'Jupiter'
#!/usr/bin/env tesh
-$ ${bindir:=.}/io-file-remote$EXEEXT ${platfdir:=.}/storage/remote_io.xml ${srcdir:=.}/io-file-remote_d.xml "--log=root.fmt:[%10.6r]%e(%i@%5h)%e%m%n"
+$ ${bindir:=.}/io-file-remote ${platfdir:=.}/storage/remote_io.xml ${srcdir:=.}/io-file-remote_d.xml "--log=root.fmt:[%10.6r]%e(%i@%5h)%e%m%n"
> [ 0.000000] (0@ ) Init: 12/476824 MiB used/free on 'Disk1'
> [ 0.000000] (0@ ) Init: 2280/474556 MiB used/free on 'Disk2'
> [ 0.000000] (1@alice) Opened file 'c:\Windows\setupact.log'
#!/usr/bin/env tesh
-$ ${bindir}/io-raw-storage$EXEEXT ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir}/io-raw-storage ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:@denise) *** Storage info on denise ***
> [ 0.000000] (1:@denise) Storage name: Disk2, mount name: c:
> [ 0.000000] (1:@denise) Storage name: Disk4, mount name: /home
p Testing a MSG application with properties in the XML for Hosts, Links and Processes
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/platform-properties$EXEEXT ${platfdir}/prop.xml ${srcdir:=.}/platform-properties_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/platform-properties ${platfdir}/prop.xml ${srcdir:=.}/platform-properties_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (0:maestro@) There are 7 hosts in the environment
> [ 0.000000] (0:maestro@) Host 'host1' runs at 1000000000 flops/s
> [ 0.000000] (0:maestro@) Host 'host2' runs at 1000000000 flops/s
p This tests the HostLoad plugin (this allows the user to get the current load of a host and the computed flops)
-$ ${bindir:=.}/plugin-hostload$EXEEXT ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/plugin-hostload ${platfdir}/energy_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:load_test@MyHost1) Initial peak speed: 1E+08 flop/s; number of flops computed so far: 0E+00 (should be 0)
> [ 0.000000] (1:load_test@MyHost1) Sleep for 10 seconds
> [ 10.000000] (1:load_test@MyHost1) Done sleeping 10.00s; peak speed: 1E+08 flop/s; number of flops computed so far: 0E+00 (nothing should have changed)
p Testing the process daemonization feature of MSG
-$ $SG_TEST_EXENV ${bindir:=.}/process-daemon ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-daemon ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:worker@Boivin) Let's do some work (for 10 sec on Boivin).
> [ 0.000000] (2:daemon@Tremblay) Hello from the infinite loop
> [ 3.000000] (2:daemon@Tremblay) Hello from the infinite loop
-$ $SG_TEST_EXENV ${bindir:=.}/process-join$EXEEXT ${platfdir}/small_platform.xml
+$ ${bindir:=.}/process-join ${platfdir}/small_platform.xml
> [Tremblay:master:(1) 0.000000] [msg_test/INFO] Start slave
> [Tremblay:slave from master:(2) 0.000000] [msg_test/INFO] Slave started
> [Tremblay:master:(1) 0.000000] [msg_test/INFO] Join the slave (timeout 2)
p Testing a MSG_process_kill function
-$ $SG_TEST_EXENV ${bindir:=.}/process-kill ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-kill ${platfdir}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:killer@Tremblay) Hello!
> [ 0.000000] (2:victim@Fafard) Hello!
> [ 0.000000] (2:victim@Fafard) Suspending myself
p Test0 Process without time
-$ $SG_TEST_EXENV ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/baseline_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/baseline_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sleeper@node-0.simgrid.org) Hello! I go to sleep.
> [ 10.000000] (1:sleeper@node-0.simgrid.org) Done sleeping.
> [ 10.000000] (1:sleeper@node-0.simgrid.org) Exiting now (done sleeping or got killed).
p Test1 Process with start time
-$ $SG_TEST_EXENV ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/start_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/start_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sleeper@node-0.simgrid.org) Hello! I go to sleep.
> [ 1.000000] (2:sleeper@node-1.simgrid.org) Hello! I go to sleep.
> [ 2.000000] (3:sleeper@node-2.simgrid.org) Hello! I go to sleep.
p Test1 Process with kill time
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/kill_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/kill_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sleeper@node-1.simgrid.org) Hello! I go to sleep.
> [ 0.000000] (2:sleeper@node-2.simgrid.org) Hello! I go to sleep.
> [ 0.000000] (3:sleeper@node-3.simgrid.org) Hello! I go to sleep.
p Test2 Process with start and kill times
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/start_kill_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-lifetime ${platfdir}/cluster_backbone.xml ${srcdir:=.}/start_kill_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:sleeper@node-0.simgrid.org) Hello! I go to sleep.
> [ 1.000000] (2:sleeper@node-1.simgrid.org) Hello! I go to sleep.
> [ 2.000000] (3:sleeper@node-2.simgrid.org) Hello! I go to sleep.
p Testing the migration feature of MSG
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/process-migration ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-migration ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:emigrant@Jacquelin) I'll look for a new job on another machine ('Boivin') where the grass is greener.
> [ 0.000000] (1:emigrant@Boivin) Yeah, found something to do
> [ 0.000000] (2:policeman@Boivin) Wait at the checkpoint.
p Testing the suspend/resume feature of MSG
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/process-suspend ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-suspend ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:dream_master@Boivin) Let's create a lazy guy.
> [ 0.000000] (2:Lazy@Boivin) Nobody's watching me ? Let's go to sleep.
> [ 0.000000] (1:dream_master@Boivin) Let's wait a little bit...
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/process-yield ${platfdir}/small_platform_fatpipe.xml ${srcdir:=.}/process-yield_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/process-yield ${platfdir}/small_platform_fatpipe.xml ${srcdir:=.}/process-yield_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:yielder@Tremblay) I yielded 10 times. Goodbye now!
> [ 0.000000] (2:yielder@Ruby) I yielded 15 times. Goodbye now!
#!/usr/bin/env tesh
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/task-priority$EXEEXT ${platfdir}/small_platform.xml ${srcdir:=.}/task-priority_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/task-priority ${platfdir}/small_platform.xml ${srcdir:=.}/task-priority_d.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:test@Fafard) Hello! Running a task of size 7.6296e+07 with priority 1
> [ 0.000000] (2:test@Fafard) Hello! Running a task of size 7.6296e+07 with priority 2
> [ 1.500000] (2:test@Fafard) Goodbye now!
p Testing the migration feature of S4U
! output sort 19
-$ $SG_TEST_EXENV ${bindir:=.}/actor-migration ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/actor-migration ${platfdir:=.}/small_platform.xml "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:emigrant@Jacquelin) I'll look for a new job on another machine ('Boivin') where the grass is greener.
> [ 0.000000] (1:emigrant@Boivin) Yeah, found something to do
> [ 0.000000] (2:policeman@Boivin) Wait at the checkpoint.
-$ $SG_TEST_EXENV ${bindir:=.}/cloud-interrupt-migration ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
+$ ${bindir:=.}/cloud-interrupt-migration ${platfdir}/small_platform.xml --log=no_loc "--log=root.fmt:[%10.6r]%e(%i:%P@%h)%e%m%n"
> [ 0.000000] (1:master_@Fafard) Start the migration of VM0 from Fafard to Tremblay
> [ 2.000000] (1:master_@Fafard) Wait! change my mind, shutdown VM0. This ends the migration
> [ 10.000000] (1:master_@Fafard) Start again the migration of VM0 from Fafard to Tremblay
-$ ./concurrent_rw$EXEEXT ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ./concurrent_rw ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (host@bob) process 1 is writing!
> [ 0.000000] (host@bob) process 2 is writing!
> [ 0.000000] (host@bob) process 3 is writing!
-$ ./storage_client_server$EXEEXT ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
+$ ./storage_client_server ${platfdir}/storage/storage.xml "--log=root.fmt:[%10.6r]%e(%P@%h)%e%m%n"
> [ 0.000000] (server@alice) *** Storage info on alice ***
> [ 0.000000] (server@alice) Storage name: Disk2, mount name: c:
> [ 0.000000] (server@alice) Free size: 534479374867 bytes
ADD_TEST(tesh-simdag-full-links02 ${CMAKE_BINARY_DIR}/teshsuite/simdag/basic-parsing-test/basic-parsing-test ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/platforms/two_clusters_one_name.xml FULL_LINK)
ADD_TEST(tesh-simdag-one-link-g5k ${CMAKE_BINARY_DIR}/teshsuite/simdag/basic-parsing-test/basic-parsing-test ${CMAKE_HOME_DIRECTORY}/examples/platforms/g5k.xml ONE_LINK)
-if(enable_debug AND NOT enable_memcheck)
- # these tests need assertions. Exclude them from memcheck, as they normally die, leaving lots of unfree'd objects
+if(enable_debug)
+ # these tests need assertions
ADD_TESH(tesh-parser-bogus-symmetric --setenv bindir=${CMAKE_BINARY_DIR}/teshsuite/simdag/flatifier --cd ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/flatifier ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/flatifier/bogus_two_hosts_asymetric.tesh)
ADD_TESH(tesh-parser-bogus-missing-gw --setenv bindir=${CMAKE_BINARY_DIR}/teshsuite/simdag/flatifier --cd ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/flatifier ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/flatifier/bogus_missing_gateway.tesh)
ADD_TESH(tesh-parser-bogus-disk-attachment --setenv bindir=${CMAKE_BINARY_DIR}/teshsuite/simdag/flatifier --cd ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/flatifier ${CMAKE_HOME_DIRECTORY}/teshsuite/simdag/flatifier/bogus_disk_attachment.tesh)
! expect signal SIGABRT
-$ ${bindir:=.}/flatifier ../platforms/bogus_disk_attachment.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/flatifier ../platforms/bogus_disk_attachment.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> [ 0.000000] [0:maestro@] Parse error at ../platforms/bogus_disk_attachment.xml:19: Unable to attach storage cdisk: host plouf does not exist.
> [ 0.000000] [0:maestro@] Exiting now
! expect signal SIGABRT
-$ ${bindir:=.}/flatifier ../platforms/bogus_missing_src_gateway.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/flatifier ../platforms/bogus_missing_src_gateway.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> [ 0.000000] [0:maestro@] Parse error at ../platforms/bogus_missing_src_gateway.xml:14: zoneRoute gw_src='nod-cluster_router.cluster.us' does name a node. Existing netpoints:
> 'node-1.cluster.us','node-2.cluster.us','node-3.cluster.us','node-4.cluster.us','node-cluster_router.cluster.us','noeud-1.grappe.fr','noeud-2.grappe.fr','noeud-3.grappe.fr','noeud-4.grappe.fr','noeud-grappe_router.grappe.fr'
> [ 0.000000] [0:maestro@] Exiting now
! expect signal SIGABRT
-$ ${bindir:=.}/flatifier ../platforms/bogus_missing_dst_gateway.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/flatifier ../platforms/bogus_missing_dst_gateway.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> [ 0.000000] [0:maestro@] Parse error at ../platforms/bogus_missing_dst_gateway.xml:14: zoneRoute gw_dst='neud-grappe_router.grappe.fr' does name a node. Existing netpoints:
> 'node-1.cluster.us','node-2.cluster.us','node-3.cluster.us','node-4.cluster.us','node-cluster_router.cluster.us','noeud-1.grappe.fr','noeud-2.grappe.fr','noeud-3.grappe.fr','noeud-4.grappe.fr','noeud-grappe_router.grappe.fr'
! expect signal SIGABRT
-$ ${bindir:=.}/flatifier ../platforms/bogus_two_hosts_asymetric.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=no_loc
+$ $VALGRIND_NO_LEAK_CHECK ${bindir:=.}/flatifier ../platforms/bogus_two_hosts_asymetric.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=no_loc
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> [ 0.000000] [0:maestro@] The route between alice and bob already exists (Rq: routes are symmetrical by default).
#!/usr/bin/env tesh
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/one_cluster.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/one_cluster.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/one_cluster_multicore.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/one_cluster_multicore.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/host_attributes.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/host_attributes.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/link_attributes.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/link_attributes.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/three_hosts_non_symmetric_route.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/three_hosts_non_symmetric_route.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/two_clusters.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/two_clusters.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/two_hosts_multi_hop.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/two_hosts_multi_hop.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ../platforms/two_hosts_one_link.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ../platforms/two_hosts_one_link.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ${srcdir:=.}/examples/platforms/bypassASroute.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ${srcdir:=.}/examples/platforms/bypassASroute.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
> </AS>
> </platform>
-$ ${bindir:=.}/flatifier$EXEEXT ${srcdir:=.}/examples/platforms/cluster_torus.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/flatifier ${srcdir:=.}/examples/platforms/cluster_torus.xml "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Switching to the L07 model to handle parallel tasks.
> <?xml version='1.0'?>
> <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
SET_TESH_PROPERTIES(stack-overflow "ucontext;raw;boost" WILL_FAIL true)
endif()
endif()
-if (NOT enable_memcheck)
- ADD_TESH_FACTORIES(generic-simcalls "thread;ucontext;raw;boost" --setenv bindir=${CMAKE_BINARY_DIR}/teshsuite/simix/generic-simcalls --setenv srcdir=${CMAKE_HOME_DIRECTORY} --cd ${CMAKE_HOME_DIRECTORY}/teshsuite/simix/generic-simcalls generic-simcalls.tesh)
-endif()
+ADD_TESH_FACTORIES(generic-simcalls "thread;ucontext;raw;boost" --setenv bindir=${CMAKE_BINARY_DIR}/teshsuite/simix/generic-simcalls --setenv srcdir=${CMAKE_HOME_DIRECTORY} --cd ${CMAKE_HOME_DIRECTORY}/teshsuite/simix/generic-simcalls generic-simcalls.tesh)
foreach (factory raw thread boost ucontext)
string (TOUPPER have_${factory}_contexts VARNAME)
static void test_opts(int* argc, char **argv[]){
int found = 0;
static struct option long_options[] = {
- {"long", no_argument, 0, 0 },
+ {(char*)"long", no_argument, 0, 0 },
{0, 0, 0, 0 }
};
while (1) {
int found = 0;
int option_index = 0;
static struct option long_options[] = {
- {"long", no_argument, 0, 0 },
+ {(char*)"long", no_argument, 0, 0 },
{0, 0, 0, 0 }
};
while (1) {
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/lmm_usage
+$ ${bindir:=.}/lmm_usage
> [0.000000] [surf_test/INFO] ***** Test 1 (Max-Min)
> [0.000000] [surf_test/INFO] ***** Test 1 (Lagrange - Vegas)
> [0.000000] [surf_test/INFO] ***** Test 1 (Lagrange - Reno)
! timeout 300
! expect return 0
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/maxmin_bench big 1
+$ ${bindir:=.}/maxmin_bench big 1
> Starting 0: (807)
> Starting to solve(812)
> 1x One shot execution time for a total of 2000 constraints, 2000 variables with 96 active constraint each, concurrency in [32,288] and max concurrency share 2
! timeout 50
! expect return 0
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/maxmin_bench medium 5 test
+$ ${bindir:=.}/maxmin_bench medium 5 test
> [0.000000]: [surf_maxmin/DEBUG] Setting selective_update_active flag to 0
> [0.000000]: [surf_maxmin/DEBUG] Active constraints : 100
> [0.000000]: [surf_maxmin/DEBUG] Constraint '98' usage: 13.060939 remaining: 3.166833 concurrency: 7<=8<=10
! timeout 10
! expect return 0
! output sort
-$ $SG_TEST_EXENV ${bindir:=.}/maxmin_bench small 10 test
+$ ${bindir:=.}/maxmin_bench small 10 test
> [0.000000]: [surf_maxmin/DEBUG] Setting selective_update_active flag to 0
> [0.000000]: [surf_maxmin/DEBUG] Active constraints : 10
> [0.000000]: [surf_maxmin/DEBUG] Constraint '9' usage: 4.703796 remaining: 7.082917 concurrency: 2<=2<=-1
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/surf_usage ${platfdir}/two_hosts_profiles.xml
+$ ${bindir:=.}/surf_usage ${platfdir}/two_hosts_profiles.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'cpu/model' to 'Cas01'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'CM02'
> [0.000000] [surf_test/INFO] actionA state: SURF_ACTION_RUNNING
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/surf_usage2 ${platfdir}/two_hosts_profiles.xml
+$ ${bindir:=.}/surf_usage2 ${platfdir}/two_hosts_profiles.xml
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'network/model' to 'CM02'
> [0.000000] [xbt_cfg/INFO] Configuration change: Set 'cpu/model' to 'Cas01'
> [0.200000] [surf_test/INFO] Next Event : 0.2
#!/usr/bin/env tesh
p Check different log thresholds
-$ $SG_TEST_EXENV ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Test with the settings ''
> [ 0.000000] [0:maestro@] val=2
> [ 0.000000] [0:maestro@] false alarm!
> [ 0.000000] [0:maestro@] false alarm!
p Check the "file" log appender
-$ $SG_TEST_EXENV ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=root.app:file:${bindir:=.}/log_usage.log
+$ ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=root.app:file:${bindir:=.}/log_usage.log
$ cat ${bindir:=.}/log_usage.log
> [ 0.000000] [0:maestro@] Test with the settings ''
> [ 0.000000] [0:maestro@] val=2
> [ 0.000000] [0:maestro@] false alarm!
p Check the "rollfile" log appender
-$ $SG_TEST_EXENV ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=root.app:rollfile:500:${bindir:=.}/log_usage.log
+$ ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=root.app:rollfile:500:${bindir:=.}/log_usage.log
$ cat ${bindir:=.}/log_usage.log
> [ 0.000000] [0:maestro@] val=2
> [ 0.000000] [0:maestro@] false alarm!
>
p Check the "splitfile" log appender
-$ $SG_TEST_EXENV ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=root.app:splitfile:500:${bindir:=.}/log_usage_%.log
+$ ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n" --log=root.app:splitfile:500:${bindir:=.}/log_usage_%.log
$ cat ${bindir:=.}/log_usage_0.log
> [ 0.000000] [0:maestro@] Test with the settings ''
> [ 0.000000] [0:maestro@] val=2
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
+$ ${bindir:=.}/log_usage "--log=root.fmt:[%10.6r]%e[%i:%P@%h]%e%m%n"
> [ 0.000000] [0:maestro@] Test with the settings ''
> [ 0.000000] [0:maestro@] val=2
> [ 0.000000] [0:maestro@] false alarm!
#!/usr/bin/env tesh
-$ $SG_TEST_EXENV ${bindir:=.}/parmap_bench 4 0.25 --log=parmap_bench.thres:warning
+$ ${bindir:=.}/parmap_bench 4 0.25 --log=parmap_bench.thres:warning
src/mc/sosp/mc_checkpoint.cpp
src/mc/sosp/mc_snapshot.hpp
src/mc/sosp/mc_snapshot.cpp
- src/mc/sosp/mc_page_snapshot.cpp
src/mc/AddressSpace.hpp
src/mc/Frame.hpp
# Compute the dependencies of SimGrid
#####################################
# search for dlopen
-if("${CMAKE_SYSTEM_NAME}" MATCHES "kFreeBSD|Linux")
+if("${CMAKE_SYSTEM_NAME}" MATCHES "kFreeBSD|Linux|SunOS")
find_library(DL_LIBRARY dl)
endif()
mark_as_advanced(DL_LIBRARY)
# Optional modules
###
-option(enable_documentation "Whether to produce documentation" on)
+option(enable_documentation "Whether to produce documentation" off)
option(enable_ns3 "Whether ns3 model is activated." off)
option(enable_java "Whether the Java bindings are activated." off)
if(enable_memcheck_xml)
SET(TESH_WRAPPER ${TESH_WRAPPER}\ --xml=yes\ --xml-file=memcheck_test_%p.memcheck\ --child-silent-after-fork=yes\ )
endif()
+ set(TESH_OPTION ${TESH_OPTION} --setenv VALGRIND_NO_LEAK_CHECK=--leak-check=no\ --show-leak-kinds=none)
# message(STATUS "tesh wrapper: ${TESH_WRAPPER}")
# map { print "$_ " } @argv;
# print "\n";
-system @argv;
+exec @argv;
$line =~ s/\$\{srcdir\:\=\.\}/./g;
$line =~ s/\(/\\(/g;
$line =~ s/\)/\\)/g;
- $line =~ s/\$SG_TEST_EXENV//g;
- $line =~ s/\$EXEEXT//g;
$line =~ s/^\$\ */.\//g;
$line =~ s/^.\/lua/lua/g;
$line =~ s/^.\/ruby/ruby/g;
--- /dev/null
+# Base image
+FROM debian:testing
+
+# - Install SimGrid's dependencies
+# - Compile and install SimGrid itself.
+# - Remove everything that was installed, and re-install what's needed by the SimGrid libraries before the Gran Final Cleanup
+# - Keep g++, gcc and gfortran, as any MC user will use (some of) them
+RUN apt update && apt -y upgrade && \
+ apt install -y g++ gcc git valgrind gfortran libboost-dev libboost-all-dev cmake dpkg-dev libunwind-dev libdw-dev libelf-dev libevent-dev && \
+ mkdir /source/ && cd /source && git clone --depth=1 https://framagit.org/simgrid/simgrid.git simgrid.git && \
+ cd simgrid.git && \
+ cmake -DCMAKE_INSTALL_PREFIX=/usr/ -Denable_model-checking=ON -Denable_documentation=OFF -Denable_java=OFF -Denable_smpi=ON -Denable_compile_optimizations=ON . && \
+ make -j4 install && \
+ mkdir debian/ && touch debian/control && dpkg-shlibdeps --ignore-missing-info lib/*.so -llib/ -O/tmp/deps && \
+ apt remove -y git valgrind libboost-dev libboost-all-dev cmake dpkg-dev libunwind-dev libdw-dev libelf-dev libevent-dev && \
+ apt install -y `sed -e 's/shlibs:Depends=//' -e 's/([^)]*)//g' -e 's/,//g' /tmp/deps` && rm /tmp/deps && \
+ apt autoremove -y && apt autoclean && apt clean
+
+# The build and dependencies are not cleaned in this image since it's still highly experimental so far
+# git reset --hard master && git clean -dfx && \
+
@echo " make unstable -> build the git version of SimGrid (with SMPI, w/o MC)"
@echo " make tuto-s4u -> build all what you need to take the S4U tutorial"
@echo " make tuto-smpi -> build all what you need to take the SMPI tutorial"
+ @echo " make tuto-mc -> build the git version of SimGrid (with SMPI and MC)"
@echo " make all -> build all but stable (ie, build-deps unstable tuto-s4u tuto-smpi)"
@echo " make push -> push all images to the cloud"
@echo "All our images are based on debian:testing"
stable:
export last_tag=`wget https://framagit.org/simgrid/simgrid/tags 2>/dev/null -O - | grep /simgrid/simgrid/tags/v | head -n1 | sed 's/[^>]*>//' | sed 's/<.*//'`; \
- export url=`wget https://framagit.org/simgrid/simgrid/tags/$${last_tag} 2>/dev/null -O - | grep SimGrid- | perl -pe 's/.*?<li><a href="//' | sed 's/tar.gz.*/tar.gz/'` ;\
+ export url=`wget https://framagit.org/simgrid/simgrid/tags/$${last_tag} 2>/dev/null -O - | grep SimGrid- | perl -pe 's/.*?<a href="//' | sed 's/tar.gz.*/tar.gz/'` ;\
echo URL:$${url} ; \
docker build -f Dockerfile.stable \
--build-arg DLURL=$${url} \
$(DOCKER_EXTRA) \
. | tee > build-deps.log
+tuto-mc:
+ docker build -f Dockerfile.tuto-mc \
+ -t simgrid/tuto-mc:latest \
+ -t simgrid/tuto-mc:$$(date --iso-8601) \
+ $(DOCKER_EXTRA) \
+ . | tee > tuto-mc.log
+
build-deps-stable:
docker build -f Dockerfile.build-deps-stable \
-t simgrid/build-deps-stable:latest \
docker push simgrid/unstable
docker push simgrid/tuto-s4u
docker push simgrid/tuto-smpi
+ docker push simgrid/tuto-mc
# usage: die status message...
die () {
- local status=${1:-1}
+ status=${1:-1}
shift
[ $# -gt 0 ] || set -- "Error - Halting"
echo "$@" >&2
echo "XX"
cmake -G"$GENERATOR" -Denable_documentation=OFF $WORKSPACE
-make dist -j$NUMBER_OF_PROCESSORS
+make dist -j $NUMBER_OF_PROCESSORS
SIMGRID_VERSION=$(cat VERSION)
echo "XX"
# -Denable_lua=$(onoff test "$build_mode" != "DynamicAnalysis") \
set +x
-make -j$NUMBER_OF_PROCESSORS VERBOSE=1 tests
+make -j $NUMBER_OF_PROCESSORS VERBOSE=1 tests
echo "XX"
echo "XX Run the tests"
echo "<br>Description of the nodes - Automatically updated by project_description.sh script - Don't edit here<br><br>
-<table id="configuration-matrix">
-<tr class="matrix-row"> <td class="matrix-header" style="min-width:75px">Name of the Builder</td><td class="matrix-header" style="min-width:75px">OS</td><td class="matrix-header" style="min-width:75px">Compiler</td><td class="matrix-header" style="min-width:75px">Boost</td><td class="matrix-header" style="min-width:75px">Java</td><td class="matrix-header" style="min-width:75px">Cmake</td><td class="matrix-header" style="min-width:50px">NS3</td><td class="matrix-header" style="min-width:50px">Python</td></tr>"
+<script>
+function compareVersion(v1, v2) {
+ if (typeof v1 !== 'string') return false;
+ if (typeof v2 !== 'string') return false;
+ v1 = v1.split('.');
+ v2 = v2.split('.');
+ const k = Math.min(v1.length, v2.length);
+ for (let i = 0; i < k; ++ i) {
+ v1[i] = parseInt(v1[i], 10);
+ v2[i] = parseInt(v2[i], 10);
+ if (v1[i] > v2[i]) return 1;
+ if (v1[i] < v2[i]) return -1;
+ }
+ return v1.length == v2.length ? 0: (v1.length < v2.length ? -1 : 1);
+}</script>
+<script>
+function sortTable(n, type) {
+ var table, rows, switching, i, x, y, shouldSwitch, dir, switchcount = 0;
+ table = document.getElementById('configuration-matrix');
+ switching = true;
+ //Set the sorting direction to ascending:
+ dir = 'asc';
+ /*Make a loop that will continue until
+ no switching has been done:*/
+ while (switching) {
+ //start by saying: no switching is done:
+ switching = false;
+ rows = table.rows;
+ /*Loop through all table rows (except the
+ first, which contains table headers):*/
+ for (i = 1; i < (rows.length - 1); i++) {
+ //start by saying there should be no switching:
+ shouldSwitch = false;
+ /*Get the two elements you want to compare,
+ one from current row and one from the next:*/
+ x = rows[i].getElementsByTagName('TD')[n];
+ y = rows[i + 1].getElementsByTagName('TD')[n];
+ /*check if the two rows should switch place,
+ based on the direction, asc or desc:*/
+ if (dir == 'asc') {
+ if(type == 'version'){
+ shouldSwitch = (compareVersion(x.innerHTML.toLowerCase(), y.innerHTML.toLowerCase()) > 0);
+ }else{
+ shouldSwitch = (x.innerHTML.toLowerCase() > y.innerHTML.toLowerCase());
+ }
+ } else if (dir == 'desc') {
+ if(type == 'version'){
+ shouldSwitch = (compareVersion(x.innerHTML.toLowerCase(), y.innerHTML.toLowerCase()) < 0);
+ }else{
+ shouldSwitch = (x.innerHTML.toLowerCase() < y.innerHTML.toLowerCase());
+ }
+ }
+ if (shouldSwitch)
+ break;
+ }
+ if (shouldSwitch) {
+ /*If a switch has been marked, make the switch
+ and mark that a switch has been done:*/
+ rows[i].parentNode.insertBefore(rows[i + 1], rows[i]);
+ switching = true;
+ //Each time a switch is done, increase this count by 1:
+ switchcount ++;
+ } else {
+ /*If no switching has been done AND the direction is 'asc',
+ set the direction to 'desc' and run the while loop again.*/
+ if (switchcount == 0 && dir == 'asc') {
+ dir = 'desc';
+ switching = true;
+ }
+ }
+ }
+}</script>
+<table id=configuration-matrix>
+<tr class=matrix-row> <td class=matrix-header style=min-width:75px onclick='sortTable(0);'>Name of the Builder</td><td class=matrix-header style=min-width:75px onclick='sortTable(1);'>OS</td><td class=matrix-header style=min-width:75px onclick='sortTable(2);'>Compiler</td><td class=matrix-header style=min-width:75px onclick=\"sortTable(3, 'version');\">Boost</td><td class=matrix-header style=min-width:75px onclick=\"sortTable(4,'version');\">Java</td><td class=matrix-header style=min-width:75px onclick=\"sortTable(5,'version');\">Cmake</td><td class=matrix-header style=min-width:50px onclick='sortTable(6);'>NS3</td><td class=matrix-header style=min-width:50px onclick='sortTable(7);'>Python</td></tr>"
for node in "${nodes[@]}"
do