and run the tests in parallel. Also, you want to save the build output
to disk, for further reference. This is exactly what the
BuildSimGrid.sh script does. It is upper-cased so that the shell
-completion works and allow to run it in 4 key press: `./B<tab>`
+completion works and allows one to run it in 4 key presses: `./B<tab>`
Note that if you build out of tree (as you should, see below), the
script builds the build/default directory. I usually copy the file in
@subsubsection log_use_conf_add Category additivity
-The <tt>add</tt> keyword allows to specify the additivity of a
+The <tt>add</tt> keyword allows one to specify the additivity of a
category (see @ref log_in_app). '0', '1', 'no', 'yes', 'on'
and 'off' are all valid values, with 'yes' as default.
The default appender function currently prints to stderr.
-*/
\ No newline at end of file
+*/
Here, a set of <b>host</b>s is defined. Each of them has a <b>link</b>
to a central backbone (backbone is a link itself, as a link can
be used to represent a switch, see the switch / link section
-below for more details about it). A <b>router</b> allows to connect a
+below for more details about it). A <b>router</b> allows one to connect a
<b>cluster</b> to the outside world. Internally,
SimGrid treats a cluster as a network zone containing all hosts: the router is the default
gateway for the cluster.
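For illustration, such a cluster could be declared as follows. All identifiers and performance values here are made up; the router is created automatically from the given prefix and suffix.

```xml
<!-- hypothetical 10-host cluster: node-0.example.org ... node-9.example.org,
     each connected through a private link to a backbone -->
<cluster id="my_cluster" prefix="node-" suffix=".example.org" radical="0-9"
         speed="1Gf" bw="125MBps" lat="50us"
         bb_bw="2.25GBps" bb_lat="500us"/>
```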
Attribute name | Mandatory | Values | Description
--------------- | --------- | ------ | -----------
id | yes | string | Identifier of this storage_type; used when referring to it
-model | no | string | In the future, this will allow to change the performance model to use
+model | no | string | In the future, this will allow one to change the performance model to use
size | yes | string | Specifies the amount of available storage space; you can specify storage like "500GiB" or "500GB" if you want. (TODO add a link to all the available abbreviations)
content | yes | string | Path to a @ref pf_storage_content_file "Storage Content File" on your system. This file must exist.
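As a sketch, a declaration using the attributes above could look as follows. The identifier, the content file path, and the property names are placeholders.

```xml
<storage_type id="single_HDD" size="500GiB" content="storage/content.txt">
  <!-- hypothetical performance properties -->
  <model_prop id="Bread" value="100MBps"/>
  <model_prop id="Bwrite" value="30MBps"/>
</storage_type>
```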
a model, it most likely calculates routes for you (the constant network model, for example, does not). But maybe you want to
define some of your routes, which will be specific. You may also want
to bypass some routes defined in a lower-level zone at an upper stage:
-<b>bypasszoneroute</b> is the tag you're looking for. It allows to
+<b>bypasszoneroute</b> is the tag you're looking for. It allows one to
bypass routes already defined between zones (if you want to bypass a
route for a specific host, you should just use bypassRoute).
The principle is the same as zoneroute: <b>bypasszoneroute</b> contains
a model, it most likely calculates routes for you (the constant network model, for example, does not). But maybe you want to
define some of your routes, which will be specific. You may also want
to bypass some routes defined in a lower-level zone at an upper stage:
-<b>bypassRoute</b> is the tag you're looking for. It allows to bypass
+<b>bypassRoute</b> is the tag you're looking for. It allows one to bypass
routes defined between <b>host/router</b>. The principle is the same
as route: <b>bypassRoute</b> contains the list of the links that are
in the path between src and dst.
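As a sketch (the host and link names are made up), forcing the traffic between two given hosts onto a specific link could look like this:

```xml
<bypassRoute src="alice" dst="bob">
  <link_ctn id="direct_link"/>
</bypassRoute>
```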
mandatory to the model checker. The simcalls, representing actors'
actions, are the transitions of the formal system. Verifying the
system requires manipulating these transitions explicitly. This also
-allows to run safely the actors in parallel, even if this is less
+allows one to run the actors safely in parallel, even if this is less
commonly used by our users.
So, the key ideas here are:
in the simulation which we would like to avoid.
`std::try_lock()` should be safe to use though.
-*/
\ No newline at end of file
+*/
you have your argument between ').
Another solution is to use the ``<config>`` tag in the platform file. The
-only restriction is that this tag must occure before the first
+only restriction is that this tag must occur before the first
platform element (be it ``<zone>``, ``<cluster>``, ``<peer>`` or whatever).
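For example, a minimal platform file setting a configuration item before any platform element could look as follows (the option and values shown are only illustrative):

.. code-block:: xml

   <?xml version='1.0'?>
   <!DOCTYPE platform SYSTEM "https://simgrid.org/simgrid.dtd">
   <platform version="4.1">
     <config>
       <prop id="maxmin/precision" value="0.00001"/>
     </config>
     <zone id="world" routing="Full">
       <host id="host1" speed="1Gf"/>
     </zone>
   </platform>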
The ``<config>`` tag takes an ``id`` attribute, but it is currently
ignored so you don't really need to pass it. The important part is that
network card. Three models exist but, actually, only two of them are
interesting. The "compound" one is simply due to the way our
internal code is organized, and can easily be ignored. So at the
- end, you have two host models: The default one allows to aggregate
+ end, you have two host models: The default one allows aggregation of
an existing CPU model with an existing network model, but does not
allow parallel tasks because these beasts need some collaboration
between the network and CPU model. That is why ptask_07 is used by
.. _cfg=smpi/async-small-thresh:
-Simulating Asyncronous Send
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Simulating Asynchronous Send
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-(this configuration item is experimental and may change or disapear)
+(this configuration item is experimental and may change or disappear)
It is possible to specify that messages below a certain size will be
sent as soon as the call to MPI_Send is issued, without waiting for
Activating Plugins
------------------
-SimGrid plugins allow to extend the framework without changing its
+SimGrid plugins allow one to extend the framework without changing its
source code directly. Read the source code of the existing plugins to
learn how to do so (in ``src/plugins``), and ask your questions to the
usual channels (Stack Overflow, Mailing list, IRC). The basic idea is
Size of Cycle Detection Set
...........................
-In order to detect cycles, the model-checker needs to check if a new
+In order to detect cycles, the model checker needs to check if a new
explored state is in fact the same state as a previous one. For
-that, the model-checker can take a snapshot of each visited state:
+that, the model checker can take a snapshot of each visited state:
this snapshot is then compared with subsequent states in the
exploration graph.
.......................
The ``model-checker/max-depth`` can set the maximum depth of the
-exploration graph of the model-checker. If this limit is reached, a
+exploration graph of the model checker. If this limit is reached, a
logging message is sent and the results might not be exact.
By default, there is no depth limit.
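For instance, the depth limit could be set on the command line as follows (the binary and file names, as well as the value, are placeholders):

.. code-block:: shell

   simgrid-mc ./my_app platform.xml deploy.xml --cfg=model-check/max-depth:1000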
Handling of Timeouts
....................
-By default, the model-checker does not handle timeout conditions: the `wait`
+By default, the model checker does not handle timeout conditions: the `wait`
operations never time out. With the ``model-check/timeout`` configuration item
-set to **yes**, the model-checker will explore timeouts of `wait` operations.
+set to **yes**, the model checker will explore timeouts of `wait` operations.
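As an illustration (the binary and file names are placeholders):

.. code-block:: shell

   simgrid-mc ./my_app platform.xml deploy.xml --cfg=model-check/timeout:yes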
.. _cfg=model-check/communications-determinism:
.. _cfg=model-check/send-determinism:
The ``model-check/communications-determinism`` and
``model-check/send-determinism`` items can be used to select the
-communication determinism mode of the model-checker which checks
+communication determinism mode of the model checker which checks
determinism properties of the communications of an application.
Verification Performance Considerations
.. _cfg=model-check/replay:
-Replaying buggy execution paths out of the model-checker
-........................................................
+Replaying buggy execution paths from the model checker
+......................................................
-Debugging the problems reported by the model-checker is challenging: First, the
+Debugging the problems reported by the model checker is challenging: First, the
application under verification cannot be debugged with gdb because the
-model-checker already traces it. Then, the model-checker may explore several
+model checker already traces it. Then, the model checker may explore several
execution paths before encountering the issue, making it very difficult to
understand the outputs. Fortunately, SimGrid provides the execution path leading
to any reported issue so that you can replay this path out of the model checker,
enabling the usage of classical debugging tools.
-When the model-checker finds an interesting path in the application
+When the model checker finds an interesting path in the application
execution graph (where a safety or liveness property is violated), it
generates an identifier for this path. Here is an example of output:
The interesting line is ``Path = 1/3;1/4``, which means that you should use
``--cfg=model-check/replay:1/3;1/4`` to replay your application on the buggy
-execution path. All options (but the model-checker related ones) must
+execution path. All options (but the model checker related ones) must
remain the same. In particular, if you ran your application with
``smpirun -wrapper simgrid-mc``, then do it again. Remove all
MC-related options, keep the other ones and add
The main reason to change this setting is when the debugging tools get
fooled by the optimized context factories. Threads are the most
-debugging-friendly contextes, as they allow to set breakpoints
+debugging-friendly contexts, as they allow one to set breakpoints
anywhere with gdb and visualize backtraces for all processes, in order
to debug concurrency issues. Valgrind is also more comfortable with
threads, but it should be usable with all factories (Exception: the
as 16 KiB, for example. This *setting is ignored* when using the
thread factory. Instead, you should compile SimGrid and your
application with ``-fsplit-stack``. Note that this compilation flag is
-not compatible with the model-checker right now.
+not compatible with the model checker right now.
The operating system should only allocate memory for the pages of the
stack which are actually used and you might not need to use this in
If you are using the **ucontext** or **raw** context factories, you can
request to execute the user code in parallel. Several threads are
-launched, each of them handling as much user contexts at each run. To
-actiave this, set the ``contexts/nthreads`` item to the amount of
-cores that you have in your computer (or lower than 1 to have
-the amount of cores auto-detected).
+launched, each of them handling the same number of user contexts at each
+run. To activate this, set the ``contexts/nthreads`` item to the number
+of cores in your computer (or to a value lower than 1 to have the number
+of cores auto-detected).
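For example (the application name is a placeholder):

.. code-block:: shell

   # 4 worker threads; a value below 1 auto-detects the core count
   ./my_app platform.xml --cfg=contexts/factory:raw --cfg=contexts/nthreads:4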
Even if you asked for several worker threads using the previous option,
you can request to start the parallel execution (and pay the
this code, and create an execution task within the simulator to take
this into account. For that, the actual duration is measured on the
host machine and then scaled to the power of the corresponding
-simulated machine. The variable ``smpi/host-speed`` allows to specify
+simulated machine. The variable ``smpi/host-speed`` allows one to specify
the computational speed of the host machine (in flop/s) to use when
scaling the execution times. It defaults to 20000, but you really want
to update it to get accurate simulation results.
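For instance, if your host machine sustains about 5 Gflop/s on this code (a made-up figure; the application and platform names are placeholders too), you would pass:

.. code-block:: shell

   smpirun -np 8 -platform platform.xml --cfg=smpi/host-speed:5000000000 ./my_app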
possible to avoid this, as described in the main `SMPI publication
<https://hal.inria.fr/hal-01415484>`_ and in the :ref:`SMPI
documentation <SMPI_what_globals>`. SimGrid provides two ways of
-automatically privatizing the globals, and this option allows to
+automatically privatizing the globals, and this option allows one to
choose between them.
- **no** (default when not using smpirun): Do not automatically
for each shared block.
With the ``global`` algorithm, each call to SMPI_SHARED_MALLOC()
-returns a new adress, but it only points to a shadow bloc: its memory
+returns a new address, but it only points to a shadow block: its memory
area is mapped on a 1MiB file on disk. If the returned block is of size
N MiB, then the same file is mapped N times to cover the whole block.
At the end, no matter how many SMPI_SHARED_MALLOC calls you make, this will
study on your :ref:`simulated platform <platform>`, i.e. to specify which actor
should be started on which host. You can do so directly in your program (as
shown in :ref:`these examples <s4u_ex_actors>`), or using an XML deployment
-file. Unless you have a good reason, you should keep your application appart
+file. Unless you have a good reason, you should keep your application apart
from the deployment as it will :ref:`ease your experimental campaign afterward
<howto_science>`.
</actor>
<!-- Carole runs on 'host3', has 1 parameter "42" in its argv and one property.
- -- Use simgrid::s4u::Actor::get_property() to retrive it.-->
+ -- Use simgrid::s4u::Actor::get_property() to retrieve it.-->
<actor host="host3" function="carol">
<argument value="42"/>
<prop id="SomeProp" value="SomeValue"/>
enable_model-checking (on/OFF)
Activates the formal verification mode. This will **hinder
- simulation speed** even when the model-checker is not activated at
+ simulation speed** even when the model checker is not activated at
run time.
enable_ns3 (on/OFF)
Allows one to run MPI code on top of SimGrid.
enable_smpi_ISP_testsuite (on/OFF)
- Adds many extra tests for the model-checker module.
+ Adds many extra tests for the model checker module.
enable_smpi_MPICH3_testsuite (on/OFF)
Adds many extra tests for the MPI module.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The best solution to get SimGrid working on windows is to install the
-Ubuntu subsystem of Windows 10. All of SimGrid (but the model-checker)
+Ubuntu subsystem of Windows 10. All of SimGrid (but the model checker)
works in this setting.
Native builds are not very well supported. Have a look at our `appveyor
possible platforms that you could imagine (and more).
You just provide the application and its deployment (number of
-processes and parameters), and the model-checker will literally
+processes and parameters), and the model checker will literally
explore all possible outcomes by testing all possible message
interleavings: if at some point a given process can either receive the
message A first or the message B depending on the platform
-characteristics, the model-checker will explore the scenario where A
+characteristics, the model checker will explore the scenario where A
arrives first, and then rewind to the same point to explore the
scenario where B arrives first.
SimGrid is also used to debug, improve, and tune several large
applications.
`BigDFT <http://bigdft.org>`_ (a massively parallel code
-computing the electronic structure of chemical elements developped by
+computing the electronic structure of chemical elements developed by
the CEA), `StarPU <http://starpu.gforge.inria.fr/>`_ (a
Unified Runtime System for Heterogeneous Multicore Architectures
-developped by Inria Bordeaux) and
+developed by Inria Bordeaux) and
`TomP2P <https://tomp2p.net/dev/simgrid/>`_ (a high performance
key-value pair storage library developed at the University of Zurich).
Some of these applications enjoy large user communities themselves.
hierarchical algorithm, with some forwarders taking large pools of
tasks from the master, each of them distributing their tasks to a
sub-pool of workers? Or should we introduce super-peers,
- dupplicating the master's role in a peer-to-peer manner? Do the
+ duplicating the master's role in a peer-to-peer manner? Do the
algorithms require a perfect knowledge of the network?
- How is such an algorithm sensitive to external workload variation?
appreciate their power. They are only used to match the
communications, but have no impact on the communication
timing. ``put()`` and ``get()`` are matched regardless of their
-initiators' location and then the real communication occures between
+initiators' location and then the real communication occurs between
the involved parties.
Please refer to the full `Mailboxes' documentation
trend. This simplification is another application of the good old DRY/SPOT
programming principle (`Don't Repeat Yourself / Single Point Of Truth
<https://en.wikipedia.org/wiki/Don%27t_repeat_yourself>`_), and you
-really want your programming artefacts to follow these software
+really want your programming artifacts to follow these software
engineering principles.
But at the same time, you should be careful in separating your
scientific contribution (the master/workers algorithm) and the
-artefacts used to test it (platform, deployment and workload). This is
+artifacts used to test it (platform, deployment and workload). This is
why SimGrid forces you to express your platform and deployment files
in XML instead of using a programming interface: it forces a clear
separation of concerns between things of very different nature.
for more work. We will move to a First-Come First-Served mechanism
instead.
-For that, your workers should explicitely request for work with a
+For that, your workers should explicitly request work with a
message sent to a channel that is specific to their master. The name
of that private channel can be the one used to categorize the
executions, as it is already specific to each master.
The master should serve the requests it receives in a round-robin
manner until the time is up. Changing the communication schema can
be a bit hairy, but once it works, you will see that such a simple
-FCFS schema allows to double the amount of tasks handled over time
+FCFS schema allows one to double the number of tasks handled over time
here. Things may be different with another platform file.
Further Improvements
modifications to `run on top of SMPI
<https://framagit.org/simgrid/SMPI-proxy-apps>`_.
-This setting permits to debug your MPI applications in a perfectly
-reproducible setup, with no Heisenbugs. Enjoy the full Clairevoyance
-provided by the simulator while running what-if analysis on platforms
+This setting permits you to debug your MPI applications in a perfectly
+reproducible setup, with no Heisenbugs. Enjoy the full clairvoyance
+provided by the simulator while running what-if analyses on platforms
that are still to be built! Several `production-grade MPI applications
<https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
use SimGrid for their integration and performance testing.
In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would in
-the real systems, communication calls are intercepted and achived by
+the real systems, communication calls are intercepted and handled by
the simulator.
To start using SMPI, you just need to compile your application with
communication calls are implemented using SimGrid: data is exchanged
through memory copy, while the simulator's performance models are used
to predict the time taken by each communication. Any computations
-occuring between two MPI calls are benchmarked, and the corresponding
+occurring between two MPI calls are benchmarked, and the corresponding
time is reported into the simulator.
.. image:: /tuto_smpi/img/big-picture.svg
The basic elements (with :ref:`pf_tag_host` and
:ref:`pf_tag_link`) are described first, and then the routes between
-any pair of hosts are explicitely given with :ref:`pf_tag_route`.
+any pair of hosts are explicitly given with :ref:`pf_tag_route`.
Any host must be given a computational speed in flops while links must
be given a latency and a bandwidth. You can write 1Gf for
Routes defined with :ref:`pf_tag_route` are symmetrical by default,
meaning that the list of traversed links from A to B is the same as
-from B to A. Explicitely define non-symmetrical routes if you prefer.
+from B to A. Explicitly define non-symmetrical routes if you prefer.
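For instance, an asymmetrical pair of routes can be written as follows (the host and link names are made up):

.. code-block:: xml

   <route src="host0" dst="host1" symmetrical="NO">
     <link_ctn id="link_up"/>
   </route>
   <route src="host1" dst="host0" symmetrical="NO">
     <link_ctn id="link_down"/>
   </route>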
Cluster with a Crossbar
.......................
**Attributes:**
:``version``: Version of the DTD, describing the whole XML format.
- This versionning allow future evolutions, even if we
+ This versioning allows future evolutions, even if we
avoid backward-incompatible changes. The current version
is **4.1**. The ``simgrid_update_xml`` program can
upgrade most of the past platform files to the most recent
<route>
-------
-A path between two network locations, composed of several occurences
+A path between two network locations, composed of several occurrences
of :ref:`pf_tag_link` .
**Parent tags:** :ref:`pf_tag_zone` |br|
**********
Activities represent the actions that consume a resource, such as a
-:ref:`s4u::Comm <API_s4u_Comm>` that consumes the *transmiting power* of
+:ref:`s4u::Comm <API_s4u_Comm>` that consumes the *transmitting power* of
:ref:`s4u::Link <API_s4u_Link>` resources.
=======================
wait_for and wait_until are currently not implemented for Exec and Io activities.
-Every kind of activities can be asynchronous:
+Every kind of activity can be asynchronous:
- :ref:`s4u::CommPtr <API_s4u_Comm>` are created with
:cpp:func:`s4u::Mailbox::put_async() <simgrid::s4u::Mailbox::put_async>` and
to be exchanged only when both the sender and the receiver are
announced (it waits until both :cpp:func:`put() <simgrid::s4u::Mailbox::put()>`
and :cpp:func:`get() <simgrid::s4u::Mailbox::get()>` are posted).
-In TCP, since you establish connexions beforehand, the data starts to
+In TCP, since you establish connections beforehand, the data starts to
flow as soon as the sender posts it, even if the receiver did not post
its :cpp:func:`recv() <simgrid::s4u::Mailbox::recv()>` yet.
objects of type Foo directly but always FooPtr references (which are
defined as `boost::intrusive_ptr
<http://www.boost.org/doc/libs/1_61_0/libs/smart_ptr/intrusive_ptr.html>`_
-<Foo>), you will never have to explicitely release the resource that
+<Foo>), you will never have to explicitly release the resource that
you use nor to free the memory of unused objects.
Here is a little example:
- redbcast: reduce then broadcast, using default or tuned algorithms if specified
- ompi_ring_segmented: ring algorithm used by OpenMPI
- mvapich2_rs: rdb for small messages, reduce-scatter then allgather else
- - mvapich2_two_level: SMP-aware algorithm, with mpich as intra algoritm, and rdb as inter (Change this behavior by using mvapich2 selector to use tuned values)
+ - mvapich2_two_level: SMP-aware algorithm, with mpich as intra algorithm, and rdb as inter (Change this behavior by using mvapich2 selector to use tuned values)
- rab: default `Rabenseifner <https://fs.hlrs.de/projects/par/mpi//myreduce.html>`_ implementation
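For instance, forcing the ``rab`` implementation for all allreduce operations could look like this (the application and platform names are placeholders):

.. code-block:: shell

   smpirun -np 16 -platform platform.xml --cfg=smpi/allreduce:rab ./my_app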
MPI_Reduce_scatter
With the **mmap approach**, SMPI duplicates and dynamically switches the
``.data`` and ``.bss`` segments of the ELF process when switching the
MPI ranks. This allows each rank to have its own copy of the global
-variables. No copy actually occures as this mechanism uses ``mmap()``
+variables. No copy actually occurs as this mechanism uses ``mmap()``
for efficiency. This mechanism is considered to be very robust on all
systems supporting ``mmap()`` (Linux and most BSDs). Its performance
is questionable since each context switch between MPI ranks induces
With the **dlopen approach**, SMPI loads several copies of the same
executable in memory as if it were a library, so that the global
-variables get naturally dupplicated. It first requires the executable
+variables get naturally duplicated. It first requires the executable
to be compiled as a relocatable binary, which is less common for
programs than for libraries. But most distributions are now compiled
-this way for security reason as it allows to randomize the address
+this way for security reasons as it allows one to randomize the address
space layout. It should thus be safe to compile most (any?) program
this way. The second trick is that the dynamic linker refuses to link
the exact same file several times, be it a library or a relocatable
to circumvent this rule of thumb in our case. To that extent, the
binary is copied to a temporary file before being re-linked against.
``dlmopen()`` cannot be used as it only allows 256 contexts, and as it
-would also dupplicate simgrid itself.
+would also duplicate simgrid itself.
This approach greatly speeds up the context switching, down to about
40 CPU cycles with our raw contexts, instead of requesting several
syscalls with the ``mmap()`` approach. Another advantage is that it
-permits to run the SMPI contexts in parallel, which is obviously not
+permits one to run the SMPI contexts in parallel, which is obviously not
possible with the ``mmap()`` approach. It was tricky to implement, but
we are not aware of any flaws, so smpirun activates it by default.
If you get short on memory (the whole app is executed on a single node when
simulated), you should have a look at the SMPI_SHARED_MALLOC and
-SMPI_SHARED_FREE macros. It allows to share memory areas between processes: The
+SMPI_SHARED_FREE macros. They allow one to share memory areas between processes: the
purpose of these macros is that the same malloc line on each process will point
to the exact same memory area. So if you have a malloc of 2M and you have 16
processes, these macros will change your memory consumption from 2M*16 to 2M
..............................................
In addition to the previous answers, some projects also need to be
-explicitely told what compiler to use, as follows:
+explicitly told what compiler to use, as follows:
.. code-block:: shell
your application is decomposed as a list of event handlers that are
fired according to the trace. SimGrid comes with built-in support
for MPI traces (with solutions to import traces captured by several
- MPI profilers). You can reuse this mecanism for any kind of trace
+ MPI profilers). You can reuse this mechanism for any kind of trace
that you want to replay, for example to study how a P2P DHT overlay
reacts to a given workload.
- Simulating algorithms with one of the legacy interfaces: :ref:`MSG
when we have a guest.
Be warned that even if many people are connected to
- the chanel, they may not be staring at their IRC windows.
+ the channel, they may not be staring at their IRC windows.
So don't be surprised if you don't get an answer in the
second, and turn to the mailing lists if nobody seems to be there.
The logs of this channel are publicly
There are many ways to help the SimGrid project. The first and most
natural one is to **use SimGrid for your research, and say so**. Cite
the SimGrid framework in your papers and discuss its advantages with
-your colleagues to spread the word. When we ask for new fundings to
-sustain the project, the amount of publications enabled by SimGrid is
+your colleagues to spread the word. When we ask for new funding to
+sustain the project, the number of publications enabled by SimGrid is
always the first question we get. The more you use the framework,
the better for us.
- cleanup, refactoring, usage of C++ features.
- - The state comparison code works by infering types of blocks allocated on the
+ - The state comparison code works by inferring types of blocks allocated on the
heap by following pointers from known roots (global variables, local
variables). Usually the first type found for a given block is used even if
a better one could be found later. By using a first pass of type inference,
- We might benefit from adding logic for handling some known types. For
example, both `std::string` and `std::vector` have a capacity which might
be larger than the current size of the container. We should ignore
- the corresponding elements when comparing the states and infering the types.
+ the corresponding elements when comparing the states and inferring the types.
- Another difficulty in the state comparison code is the detection of
dangling pointers. We cannot easily know if a pointer is dangling and
- dangling pointers might lead us to choose the wrong type when infering
+ dangling pointers might lead us to choose the wrong type when inferring
heap blocks. We might mitigate this problem by delaying the reallocation of
a freed block until there are no blocks pointing to it anymore, using some
sort of basic garbage collector.
In order to speed up the state comparison an idea was to create a hash of the
state. Only states with the same hash would need to be compared using the
-state comparison algorithm. Some information should not be inclueded in the
+state comparison algorithm. Some information should not be included in the
hash in order to avoid distinguishing states which would otherwise
have been considered equal.
by the number of processes and the amount of heap currently allocated
(see `DerefAndCompareByNbProcessesAndUsedHeap`).
-Good candidate informations for the state hashing:
+Good candidate information for the state hashing:
- number of processes;
Interface with the model-checked processes
""""""""""""""""""""""""""""""""""""""""""
-The model-checker reads many information about the model-checked process by
+The model checker reads a lot of information about the model-checked process by
brutally `process_vm_readv()`-ing the data structures of the model-checked
process, leading to some inefficient code such as maintaining copies of complex
C++ structures in XBT dynars. We need a sane way to expose the relevant
-information to the model-checker.
+information to the model checker.
Generic simcalls
""""""""""""""""
We have introduced some generic simcalls which can be used to execute a
-callback in SimGrid Maestro context. It makes it a lot easier to interface
+callback in a SimGrid Maestro context. It makes it a lot easier to interface
the simulated process with the maestro. However, the callbacks are opaque to the
-model-checker which cannot decide how it should handle them. We would need a
+model checker, which cannot decide how it should handle them. We would need a
solution for this if we want to be able to replace the simcalls the
-model-checker cares about by generic simcalls.
+model checker cares about by generic simcalls.
Defining an API for writing Model-Checking algorithms
"""""""""""""""""""""""""""""""""""""""""""""""""""""
Currently, writing a new model-checking algorithm in SimGridMC is quite
difficult: the logic of the model-checking algorithm is mixed with a lot of
-low-level concerns about the way the model-checker is implemented. This makes it
+low-level concerns about the way the model checker is implemented. This makes it
difficult to write new algorithms and difficult to understand, debug, and modify
the existing ones. We need a clean API to express the model-checking algorithms
in a form which is closer to the text-book/paper description. This API must
**highly scalable** (`🖹 <http://hal.inria.fr/inria-00602216/>`__) while
**theoretically sound and experimentally assessed** (`🖹 <http://doi.acm.org/10.1145/2517448>`__).
Most of the time, SimGrid is used to predict the performance (time and energy) of a
-given IT infrastructure, and it includes a prototypal model-checker to formally
+given IT infrastructure, and it includes a prototype model checker to formally
assess these systems.
Technically speaking, SimGrid is a library. It is neither a graphical
SimGrid is a powerful tool, and this documentation will help you get the best
out of it. Check its contents on the left. Each tutorial presents a classical use
-case, in a fast and practical manner. The user manual containts more
-throughfully information. In each part, the important concepts are concisely
+case, in a fast and practical manner. The user manual contains more
+thorough information. In each part, the important concepts are concisely
introduced, before the reference manual. SimGrid is also described in several
`scientific papers <https://simgrid.org/Publications.html>`_.
</platform>
This can be reformulated as follows to make it usable with the ns-3 binding.
-There is no direct connexion from alice to bob, but that's OK because
+There is no direct connection from alice to bob, but that's OK because
ns-3 automatically routes from point to point.
.. code-block:: shell
The most important elements are the basic ones: :ref:`pf_tag_host`,
:ref:`pf_tag_link`, and similar. Then come the routes between any pair
-of hosts, that are given explicitely with :ref:`pf_tag_route` (routes
+of hosts, which are given explicitly with :ref:`pf_tag_route` (routes
are symmetrical by default). Any host must be given a computational
speed (in flops) while links must be given a latency (in seconds) and
a bandwidth (in bytes per second). Note that you can write 1Gflops
/* There are two very different ways of being informed when an actor exits.
*
- * The this_actor::on_exit() function allows to register a function to be
+ * The this_actor::on_exit() function allows one to register a function to be
* executed when this very actor exits. The registered function will run
* when this actor terminates (either because its main function returns, or
 * because it's killed in any way). No simcalls are allowed here: your actor
};
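The exit-callback pattern described above can be sketched in a few lines of plain C++. This is an illustrative stand-in: `MiniActor` and `run_and_exit` are made-up names, not SimGrid's real `this_actor::on_exit()` API, but the behavior is the same — every callback registered during the actor's lifetime fires exactly once when it terminates, whether it returned normally or was killed.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for a SimGrid actor; in SimGrid the real call is
// simgrid::s4u::this_actor::on_exit(callback).
class MiniActor {
  std::vector<std::function<void(bool /*failed*/)>> on_exit_cbs_;

public:
  // Register a function to run when this actor terminates; `failed` tells
  // the callback whether the actor was killed rather than returning.
  void on_exit(std::function<void(bool)> cb) { on_exit_cbs_.push_back(std::move(cb)); }

  // Run the actor body, then fire every registered callback exactly once,
  // whether the body completed or the actor was killed beforehand.
  void run_and_exit(const std::function<void()>& body, bool killed)
  {
    if (!killed)
      body();
    for (auto& cb : on_exit_cbs_)
      cb(killed);
  }
};
```

Note that, as in SimGrid, the callback receives a flag describing how the actor ended, so cleanup code can distinguish a normal return from a kill.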
/* This functor is a bit more complex, as it saves the current state when created.
- * Then, it allows to easily retrieve the hosts which frequency changed since the functor creation.
+ * Then, it allows one to easily retrieve the hosts whose frequency changed since the functor creation.
*/
class FrequencyChanged {
std::map<simgrid::s4u::Host*, int> host_list;
$ ../../../smpi_script/bin/smpirun -quiet -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -np 2 -platform ${platfdir:=.}/cluster_backbone.xml --cfg=smpi/buffering:zero --log=xbt_cfg.thresh:warning ./smpi_sendsend
> [0.000000] [mc_safety/INFO] Check a safety property. Reduction is: dpor.
> [0.000000] [mc_global/INFO] **************************
-> [0.000000] [mc_global/INFO] *** DEAD-LOCK DETECTED ***
+> [0.000000] [mc_global/INFO] *** DEADLOCK DETECTED ***
> [0.000000] [mc_global/INFO] **************************
> [0.000000] [mc_global/INFO] Counter-example execution trace:
> [0.000000] [mc_global/INFO] [(1)node-0.simgrid.org (0)] iSend(src=(1)node-0.simgrid.org (0), buff=(verbose only), size=(verbose only))
/** @brief Initialize the MSG internal data.
* @hideinitializer
*
- * It also check that the link-time and compile-time versions of SimGrid do
+ * It also checks that the link-time and compile-time versions of SimGrid do
* match, so you should use this version instead of the #MSG_init_nocheck
* function that does the same initializations, but without this check.
*
- * We allow to link against compiled versions that differ in the patch level.
+ * We allow linking against compiled versions that differ in the patch level.
*/
#define MSG_init(argc, argv) \
do { \
/** @brief Returns the name of the current actor as a C string. */
XBT_PUBLIC const char* get_cname();
-/** @brief Returns the name of the host on which the curret actor is running. */
+/** @brief Returns the name of the host on which the current actor is running. */
XBT_PUBLIC Host* get_host();
/** @brief Suspend the current actor, which remains blocked until resume()ed by another actor. */
/* metatable.__index = simgrid.host
* we put the host functions inside the host userdata itself:
- * this allows to write my_host:method(args) for
+ * this allows one to write my_host:method(args) instead of
* simgrid.host.method(my_host, args) */
lua_setfield(L, -2, "__index"); /* simgrid simgrid.host mt */
{
/* This function runs the DFS algorithm on the state space.
* We do so iteratively instead of recursively, dealing with the call stack manually.
- * This allows to explore the call stack at wish. */
+ * This allows one to explore the call stack at will. */
while (not stack_.empty()) {
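The iterative scheme this comment describes can be illustrated with a self-contained sketch. The integer states and the toy graph below are made up — the checker's real states are far richer — but the control flow is the point: an explicit stack replaces recursion, so the algorithm can inspect and unwind its "call stack" at will, with the same `while (not stack_.empty())` loop shape.

```cpp
#include <map>
#include <stack>
#include <vector>

// Illustrative iterative DFS over a toy state graph (adjacency lists keyed
// by state id). Returns the states in the order they were first visited.
std::vector<int> dfs(const std::map<int, std::vector<int>>& graph, int root)
{
  std::vector<int> visited_order;
  std::vector<bool> seen(graph.size(), false);
  std::stack<int> stack_; // explicit stack instead of the call stack
  stack_.push(root);
  while (not stack_.empty()) { // same loop shape as in the checker
    int state = stack_.top();
    stack_.pop();
    if (seen[state])
      continue;
    seen[state] = true;
    visited_order.push_back(state);
    // Push successors; the most recently pushed one is explored first,
    // which gives the depth-first order.
    for (int next : graph.at(state))
      stack_.push(next);
  }
  return visited_order;
}
```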
void MC_show_deadlock()
{
XBT_INFO("**************************");
- XBT_INFO("*** DEAD-LOCK DETECTED ***");
+ XBT_INFO("*** DEADLOCK DETECTED ***");
XBT_INFO("**************************");
XBT_INFO("Counter-example execution trace:");
for (auto const& s : mc_model_checker->getChecker()->get_textual_trace())
return comm->wait_for(timeout);
}
-/** @brief This function is called by a sender and permit to wait for each communication
+/** @brief This function is called by a sender and permits waiting for each communication
*
 * @param comm a vector of communications
* @param nb_elem is the size of the comm vector
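The "wait for each communication" pattern documented here can be sketched with `std::future` as a stand-in for SimGrid's communication handles (the real API waits on `s4u::Comm` objects, not futures, and `wait_all_for` here is a made-up name): each pending operation in the vector is waited for in turn, with a per-element timeout.

```cpp
#include <chrono>
#include <future>
#include <vector>

// Illustrative sketch: wait for every pending operation in `comms`,
// giving each one at most `timeout` to complete. Returns false as soon
// as one of them is not ready within the timeout.
template <typename T>
bool wait_all_for(std::vector<std::future<T>>& comms, std::chrono::milliseconds timeout)
{
  for (auto& c : comms)
    if (c.wait_for(timeout) != std::future_status::ready)
      return false; // this operation did not complete in time
  return true;
}
```

Waiting element by element, as the documented function does, is the simplest policy; note that the total wall-clock bound is then `timeout` multiplied by the vector size, not a global deadline.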
kernel::actor::simcall([this, new_host]() {
if (pimpl_->waiting_synchro != nullptr) {
// The actor is blocked on an activity. If it's an exec, migrate it too.
- // FIXME: implement the migration of other kind of activities
+ // FIXME: implement the migration of other kinds of activities
kernel::activity::ExecImplPtr exec =
boost::dynamic_pointer_cast<kernel::activity::ExecImpl>(pimpl_->waiting_synchro);
xbt_assert(exec.get() != nullptr, "We can only migrate blocked actors when they are blocked on executions.");
/** @brief create an end-to-end communication task that can then be auto-scheduled
*
- * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at
+ * Auto-scheduling means that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at
* creation, and decouple them from the scheduling process where you just specify which resource should deliver the
* mandatory power.
*
/** @brief create a sequential computation task that can then be auto-scheduled
*
- * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at
+ * Auto-scheduling means that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at
* creation, and decouple them from the scheduling process where you just specify which resource should deliver the
* mandatory power.
*
/** @brief create a parallel computation task that can then be auto-scheduled
*
- * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at
+ * Auto-scheduling means that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at
* creation, and decouple them from the scheduling process where you just specify which resource should deliver the
* mandatory power.
*
/** @brief create a complex data redistribution task that can then be auto-scheduled
*
 * Auto-scheduling means that the task can be used with SD_task_schedulev().
- * This allows to specify the task costs at creation, and decouple them from the scheduling process where you just
+ * This allows one to specify the task costs at creation, and decouple them from the scheduling process where you just
* specify which resource should communicate.
*
 * A data redistribution can be scheduled on any number of hosts.
/** @brief Auto-schedules a task.
*
- * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at
+ * Auto-scheduling means that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at
* creation, and decouple them from the scheduling process where you just specify which resource should deliver the
* mandatory power.
*
s4u::Mailbox* mailbox;
request->print_request("New iprobe");
- // We have to test both mailboxes as we don't know if we will receive one one or another
+ // We have to test both mailboxes as we don't know on which one we will receive
if (simgrid::config::get_value<int>("smpi/async-small-thresh") > 0) {
mailbox = smpi_process()->mailbox_small();
XBT_DEBUG("Trying to probe the perm recv mailbox");
*
* - <b>energy-ptask/energy-ptask.c</b>: Demonstrates the use of @ref MSG_parallel_task_create, to create special
* tasks that run on several hosts at the same time. The resulting simulations are very close to what can be
- * achieved in @ref SD_API, but still allows to use the other features of MSG (it'd be cool to be able to mix
+ * achieved in @ref SD_API, while still letting you use the other features of MSG (it'd be cool to be able to mix
* interfaces, but it's not possible ATM).
*/
void MTestInitFullDatatypes(void)
{
- /* Do not allow to change datatype test level during loop.
+ /* Do not allow the datatype test level to change during the loop.
* Otherwise indexes will be wrong.
 * Test must explicitly call reset, or wait for the current datatype loop
 * to finish, before changing to another test level. */
void MTestInitMinDatatypes(void)
{
- /* Do not allow to change datatype test level during loop.
+ /* Do not allow the datatype test level to change during the loop.
* Otherwise indexes will be wrong.
 * Test must explicitly call reset, or wait for the current datatype loop
 * to finish, before changing to another test level. */
void MTestInitBasicDatatypes(void)
{
- /* Do not allow to change datatype test level during loop.
+ /* Do not allow the datatype test level to change during the loop.
* Otherwise indexes will be wrong.
 * Test must explicitly call reset, or wait for the current datatype loop
 * to finish, before changing to another test level. */