From: Gene Cooperman Date: Sat, 31 Aug 2019 23:56:03 +0000 (-0400) Subject: Spelling fixes & a few cases of polishing the English (#329) X-Git-Tag: v3.24~124 X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/commitdiff_plain/08f744b9a55745ac1b1dcf0ed2ea735471cd7f89 Spelling fixes & a few cases of polishing the English (#329) Some of the more specialized fixes were: - 'DEAD-LOCK' -> 'DEADLOCK' (appears when MC finds a deadlock). - 'allows to' -> 'allows one to' (and so on for allow/allows/permit/permits) - 'model-checker' -> 'model checker' ('the model-checker software', but 'the model checker') --- diff --git a/doc/doxygen/inside.doc b/doc/doxygen/inside.doc index a5c25914f8..3945ed00c6 100644 --- a/doc/doxygen/inside.doc +++ b/doc/doxygen/inside.doc @@ -88,7 +88,7 @@ Launching all tests can be very time consuming, so you want to build and run the tests in parallel. Also, you want to save the build output to disk, for further reference. This is exactly what the BuildSimGrid.sh script does. It is upper-cased so that the shell -completion works and allow to run it in 4 key press: `./B` +completion works and allows one to run it in 4 key press: `./B` Note that if you build out of tree (as you should, see below), the script builds the build/default directory. I usually copy the file in diff --git a/doc/doxygen/outcomes_logs.doc b/doc/doxygen/outcomes_logs.doc index 131fe3655f..e22541c2f3 100644 --- a/doc/doxygen/outcomes_logs.doc +++ b/doc/doxygen/outcomes_logs.doc @@ -357,7 +357,7 @@ manually. @subsubsection log_use_conf_add Category additivity -The add keyword allows to specify the additivity of a +The add keyword allows one to specify the additivity of a category (see @ref log_in_app). '0', '1', 'no', 'yes', 'on' and 'off' are all valid values, with 'yes' as default. @@ -432,4 +432,4 @@ category's appender. The default appender function currently prints to stderr. -*/ \ No newline at end of file +*/ diff --git a/doc/doxygen/platform.doc b/doc/doxygen/platform.doc index db0a969f33..ec3781f11a 100644 --- a/doc/doxygen/platform.doc +++ b/doc/doxygen/platform.doc @@ -29,7 +29,7 @@ The default inner organization of the cluster is as follow: Here, a set of hosts is defined. Each of them has a link to a central backbone (backbone is a link itself, as a link can be used to represent a switch, see the switch / link section -below for more details about it). A router allows to connect a +below for more details about it). A router allows one to connect a cluster to the outside world. Internally, SimGrid treats a cluster as a network zone containing all hosts: the router is the default gateway for the cluster. @@ -305,7 +305,7 @@ these might also help you to get started. Attribute name | Mandatory | Values | Description --------------- | --------- | ------ | ----------- id | yes | string | Identifier of this storage_type; used when referring to it -model | no | string | In the future, this will allow to change the performance model to use +model | no | string | In the future, this will allow one to change the performance model to use size | yes | string | Specifies the amount of available storage space; you can specify storage like "500GiB" or "500GB" if you want. (TODO add a link to all the available abbreviations) content | yes | string | Path to a @ref pf_storage_content_file "Storage Content File" on your system. This file must exist.
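As an illustrative aside to the outcomes_logs.doc hunk above: the ``add`` keyword applies to a declared category hierarchy. The following is a minimal sketch with invented category names (``my_app``, ``my_app_io``); per the patched text, additivity could then be turned off at run time with a ``--log`` setting of the form ``my_app_io.add:no``.

.. code-block:: cpp

   // Sketch only: category names are invented for this example.
   #include <simgrid/s4u.hpp>
   #include <xbt/log.h>

   XBT_LOG_NEW_CATEGORY(my_app, "Root category of this example");
   XBT_LOG_NEW_DEFAULT_SUBCATEGORY(my_app_io, my_app, "I/O messages, child of my_app");

   int main(int argc, char* argv[])
   {
     simgrid::s4u::Engine e(&argc, argv); // parses --log=... settings from the command line
     XBT_INFO("Emitted in my_app_io; with additivity at its default ('yes'), it also reaches my_app's appender");
     XBT_CINFO(my_app, "Emitted directly in the parent category");
     return 0;
   }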
@@ -865,7 +865,7 @@ As said before, once you choose a model, it (most likely; the constant network model, for example, doesn't) calculates routes for you. But maybe you want to define some of your routes, which will be specific. You may also want to bypass some routes defined in lower level zone at an upper stage: -bypasszoneroute is the tag you're looking for. It allows to +bypasszoneroute is the tag you're looking for. It allows one to bypass routes defined between already defined between zone (if you want to bypass route for a specific host, you should just use byPassRoute). The principle is the same as zoneroute: bypasszoneroute contains @@ -902,7 +902,7 @@ As said before, once you choose a model, it (most likely; the constant network model, for example, doesn't) calculates routes for you. But maybe you want to define some of your routes, which will be specific. You may also want to bypass some routes defined in lower level zone at an upper stage: -bypassRoute is the tag you're looking for. It allows to bypass +bypassRoute is the tag you're looking for. It allows one to bypass routes defined between host/router. The principle is the same as route: bypassRoute contains list of links references of links that are in the path between src and dst. diff --git a/doc/doxygen/uhood_switch.doc b/doc/doxygen/uhood_switch.doc index b26b31590c..72a86050ea 100644 --- a/doc/doxygen/uhood_switch.doc +++ b/doc/doxygen/uhood_switch.doc @@ -27,7 +27,7 @@ Mimicking the OS behavior may seem over-engineered here, but this is mandatory to the model-checker. The simcalls, representing actors' actions, are the transitions of the formal system. Verifying the system requires to manipulate these transitions explicitly. This also -allows to run safely the actors in parallel, even if this is less +allows one to run the actors safely in parallel, even if this is less commonly used by our users. So, the key ideas here are: @@ -953,4 +953,4 @@ auto makeTask(F code, Args... args) in the simulation which we would like to avoid. `std::try_lock()` should be safe to use though. -*/ \ No newline at end of file +*/ diff --git a/docs/source/Configuring_SimGrid.rst b/docs/source/Configuring_SimGrid.rst index 5c296c5732..e05bfc3aa6 100644 --- a/docs/source/Configuring_SimGrid.rst +++ b/docs/source/Configuring_SimGrid.rst @@ -42,7 +42,7 @@ argument. You can even escape the included quotes (write @' for ' if you have your argument between '). Another solution is to use the ```` tag in the platform file. The -only restriction is that this tag must occure before the first +only restriction is that this tag must occur before the first platform element (be it ````, ````, ```` or whatever). The ```` tag takes an ``id`` attribute, but it is currently ignored so you don't really need to pass it. The important part is that @@ -231,7 +231,7 @@ models for all existing resources. network card. Three models exists, but actually, only 2 of them are interesting. The "compound" one is simply due to the way our internal code is organized, and can easily be ignored. So at the - end, you have two host models: The default one allows to aggregate + end, you have two host models: The default one allows aggregation of an existing CPU model with an existing network model, but does not allow parallel tasks because these beasts need some collaboration between the network and CPU model. That is why, ptask_07 is used by @@ -397,10 +397,10 @@ Note that with the default host model this option is activated by default. .. 
_cfg=smpi/async-small-thresh: -Simulating Asyncronous Send -^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Simulating Asynchronous Send +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -(this configuration item is experimental and may change or disapear) +(this configuration item is experimental and may change or disappear) It is possible to specify that messages below a certain size will be sent as soon as the call to MPI_Send is issued, without waiting for @@ -447,7 +447,7 @@ application requires it or to reduce it to save memory space. Activating Plugins ------------------ -SimGrid plugins allow to extend the framework without changing its +SimGrid plugins allow one to extend the framework without changing its source code directly. Read the source code of the existing plugins to learn how to do so (in ``src/plugins``), and ask your questions to the usual channels (Stack Overflow, Mailing list, IRC). The basic idea is @@ -572,9 +572,9 @@ properties. Size of Cycle Detection Set ........................... -In order to detect cycles, the model-checker needs to check if a new +In order to detect cycles, the model checker needs to check if a new explored state is in fact the same state than a previous one. For -that, the model-checker can take a snapshot of each visited state: +that, the model checker can take a snapshot of each visited state: this snapshot is then used to compare it with subsequent states in the exploration graph. @@ -616,7 +616,7 @@ Exploration Depth Limit ....................... The ``model-checker/max-depth`` can set the maximum depth of the -exploration graph of the model-checker. If this limit is reached, a +exploration graph of the model checker. If this limit is reached, a logging message is sent and the results might not be exact. By default, there is not depth limit. @@ -626,9 +626,9 @@ By default, there is not depth limit. Handling of Timeouts .................... -By default, the model-checker does not handle timeout conditions: the `wait` +By default, the model checker does not handle timeout conditions: the `wait` operations never time out. With the ``model-check/timeout`` configuration item -set to **yes**, the model-checker will explore timeouts of `wait` operations. +set to **yes**, the model checker will explore timeouts of `wait` operations. .. _cfg=model-check/communications-determinism: .. _cfg=model-check/send-determinism: @@ -638,7 +638,7 @@ Communication Determinism The ``model-check/communications-determinism`` and ``model-check/send-determinism`` items can be used to select the -communication determinism mode of the model-checker which checks +communication determinism mode of the model checker which checks determinism properties of the communications of an application. Verification Performance Considerations @@ -658,18 +658,18 @@ memory (see :ref:`contexts/guard-size `). .. _cfg=model-check/replay: -Replaying buggy execution paths out of the model-checker -........................................................ +Replaying buggy execution paths from the model checker +...................................................... -Debugging the problems reported by the model-checker is challenging: First, the +Debugging the problems reported by the model checker is challenging: First, the application under verification cannot be debugged with gdb because the -model-checker already traces it. Then, the model-checker may explore several +model checker already traces it. 
Then, the model checker may explore several execution paths before encountering the issue, making it very difficult to understand the outputs. Fortunately, SimGrid provides the execution path leading to any reported issue so that you can replay this path out of the model checker, enabling the usage of classical debugging tools. -When the model-checker finds an interesting path in the application +When the model checker finds an interesting path in the application execution graph (where a safety or liveness property is violated), it generates an identifier for this path. Here is an example of output: @@ -689,7 +689,7 @@ generates an identifier for this path. Here is an example of output: The interesting line is ``Path = 1/3;1/4``, which means that you should use ``--cfg=model-check/replay:1/3;1/4`` to replay your application on the buggy -execution path. All options (but the model-checker related ones) must +execution path. All options (but the model checker related ones) must remain the same. In particular, if you ran your application with ``smpirun -wrapper simgrid-mc``, then do it again. Remove all MC-related options, keep the other ones and add @@ -749,7 +749,7 @@ the slowest to the most efficient: The main reason to change this setting is when the debugging tools get fooled by the optimized context factories. Threads are the most -debugging-friendly contextes, as they allow to set breakpoints +debugging-friendly contexts, as they allow one to set breakpoints anywhere with gdb and visualize backtraces for all processes, in order to debug concurrency issues. Valgrind is also more comfortable with threads, but it should be usable with all factories (Exception: the @@ -775,7 +775,7 @@ want to reduce the ``contexts/stack-size`` item. Its default value is as 16 KiB, for example. This *setting is ignored* when using the thread factory. Instead, you should compile SimGrid and your application with ``-fsplit-stack``. Note that this compilation flag is -not compatible with the model-checker right now. +not compatible with the model checker right now. The operating system should only allocate memory for the pages of the stack which are actually used and you might not need to use this in @@ -817,10 +817,10 @@ simulations may well fail in parallel mode. It is described in If you are using the **ucontext** or **raw** context factories, you can request to execute the user code in parallel. Several threads are -launched, each of them handling as much user contexts at each run. To -actiave this, set the ``contexts/nthreads`` item to the amount of -cores that you have in your computer (or lower than 1 to have -the amount of cores auto-detected). +launched, each of them handling the same number of user contexts at each +run. To activate this, set the ``contexts/nthreads`` item to the amount +of cores that you have in your computer (or lower than 1 to have the +amount of cores auto-detected). Even if you asked several worker threads using the previous option, you can request to start the parallel execution (and pay the @@ -951,7 +951,7 @@ a ``MPI_Send()``, SMPI will automatically benchmark the duration of this code, and create an execution task within the simulator to take this into account. For that, the actual duration is measured on the host machine and then scaled to the power of the corresponding -simulated machine. The variable ``smpi/host-speed`` allows to specify +simulated machine. 
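As an illustrative aside to the contexts/* hunks above: these items are normally passed on the command line as ``--cfg=item:value``. They can also be set from code, assuming your SimGrid version provides the static ``simgrid::s4u::Engine::set_config()`` helper (check Engine.hpp; if it is absent, stay with ``--cfg=``). A minimal sketch:

.. code-block:: cpp

   #include <simgrid/s4u.hpp>

   int main(int argc, char* argv[])
   {
     simgrid::s4u::Engine e(&argc, argv);
     // Same effect as --cfg=contexts/factory:thread on the command line:
     // the thread factory is the most debugging-friendly one, as explained above.
     simgrid::s4u::Engine::set_config("contexts/factory:thread");
     e.load_platform(argv[1]); // configuration items are set before loading the platform
     // ... create your actors here ...
     e.run();
     return 0;
   }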
The variable ``smpi/host-speed`` allows one to specify the computational speed of the host machine (in flop/s) to use when scaling the execution times. It defaults to 20000, but you really want to update it to get accurate simulation results. @@ -1134,7 +1134,7 @@ between processes, causing intricate bugs. Several options are possible to avoid this, as described in the main `SMPI publication `_ and in the :ref:`SMPI documentation `. SimGrid provides two ways of -automatically privatizing the globals, and this option allows to +automatically privatizing the globals, and this option allows one to choose between them. - **no** (default when not using smpirun): Do not automatically @@ -1370,7 +1370,7 @@ to create a new POSIX shared memory object (kept in RAM, in /dev/shm) for each shared bloc. With the ``global`` algorithm, each call to SMPI_SHARED_MALLOC() -returns a new adress, but it only points to a shadow bloc: its memory +returns a new address, but it only points to a shadow bloc: its memory area is mapped on a 1MiB file on disk. If the returned bloc is of size N MiB, then the same file is mapped N times to cover the whole bloc. At the end, no matter how many SMPI_SHARED_MALLOC you do, this will diff --git a/docs/source/Deploying_your_Application.rst b/docs/source/Deploying_your_Application.rst index 557d17ad4b..7c1a299828 100644 --- a/docs/source/Deploying_your_Application.rst +++ b/docs/source/Deploying_your_Application.rst @@ -19,7 +19,7 @@ There is several ways to deploy the :ref:`application ` you want to study on your :ref:`simulated platform `, i.e. to specify which actor should be started on which host. You can do so directly in your program (as shown in :ref:`these examples `), or using an XML deployment -file. Unless you have a good reason, you should keep your application appart +file. Unless you have a good reason, you should keep your application apart from the deployment as it will :ref:`ease your experimental campain afterward `. @@ -47,7 +47,7 @@ archive for files named ``???_d.xml`` for more): + -- Use simgrid::s4u::Actor::get_property() to retrieve it.--> diff --git a/docs/source/Installing_SimGrid.rst b/docs/source/Installing_SimGrid.rst index d4691fcb2e..8d2feb9ded 100644 --- a/docs/source/Installing_SimGrid.rst +++ b/docs/source/Installing_SimGrid.rst @@ -245,7 +245,7 @@ enable_mallocators (ON/off) enable_model-checking (on/OFF) Activates the formal verification mode. This will **hinder - simulation speed** even when the model-checker is not activated at + simulation speed** even when the model checker is not activated at run time. enable_ns3 (on/OFF) @@ -255,7 +255,7 @@ enable_smpi (ON/off) Allows one to run MPI code on top of SimGrid. enable_smpi_ISP_testsuite (on/OFF) - Adds many extra tests for the model-checker module. + Adds many extra tests for the model checker module. enable_smpi_MPICH3_testsuite (on/OFF) Adds many extra tests for the MPI module. @@ -384,7 +384,7 @@ Windows-specific instructions ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The best solution to get SimGrid working on windows is to install the -Ubuntu subsystem of Windows 10. All of SimGrid (but the model-checker) +Ubuntu subsystem of Windows 10. All of SimGrid (but the model checker) works in this setting. Native builds not very well supported. Have a look to our `appveypor diff --git a/docs/source/Introduction.rst b/docs/source/Introduction.rst index 3d8ec7e6cb..76070c959e 100644 --- a/docs/source/Introduction.rst +++ b/docs/source/Introduction.rst @@ -107,11 +107,11 @@ explored. 
In some sense, this mode tests your application for all possible platforms that you could imagine (and more). You just provide the application and its deployment (number of -processes and parameters), and the model-checker will literally +processes and parameters), and the model checker will literally explore all possible outcomes by testing all possible message interleavings: if at some point a given process can either receive the message A first or the message B depending on the platform -characteristics, the model-checker will explore the scenario where A +characteristics, the model checker will explore the scenario where A arrives first, and then rewind to the same point to explore the scenario where B arrives first. @@ -231,10 +231,10 @@ SimGrid could even be used to debug the real platform :) SimGrid is also used to debug, improve, and tune several large applications. `BigDFT `_ (a massively parallel code -computing the electronic structure of chemical elements developped by +computing the electronic structure of chemical elements developed by the CEA), `StarPU `_ (a Unified Runtime System for Heterogeneous Multicore Architectures -developped by Inria Bordeaux) and +developed by Inria Bordeaux) and `TomP2P `_ (a high performance key-value pair storage library developed at the University of Zurich). Some of these applications enjoy large user communities themselves. diff --git a/docs/source/Tutorial_Algorithms.rst b/docs/source/Tutorial_Algorithms.rst index 11eebf7d41..e8420ddce1 100644 --- a/docs/source/Tutorial_Algorithms.rst +++ b/docs/source/Tutorial_Algorithms.rst @@ -263,7 +263,7 @@ This very simple setting raises many interesting questions: hierarchical algorithm, with some forwarders taking large pools of tasks from the master, each of them distributing their tasks to a sub-pool of workers? Or should we introduce super-peers, - dupplicating the master's role in a peer-to-peer manner? Do the + duplicating the master's role in a peer-to-peer manner? Do the algorithms require a perfect knowledge of the network? - How is such an algorithm sensitive to external workload variation? @@ -479,7 +479,7 @@ used to BSD sockets or other classical systems, but you will soon appreciate their power. They are only used to match the communications, but have no impact on the communication timing. ``put()`` and ``get()`` are matched regardless of their -initiators' location and then the real communication occures between +initiators' location and then the real communication occurs between the involved parties. Please refer to the full `Mailboxes' documentation @@ -562,12 +562,12 @@ simulator requests. This is both a good idea, and a dangerous trend. This simplification is another application of the good old DRY/SPOT programming principle (`Don't Repeat Yourself / Single Point Of Truth `_), and you -really want your programming artefacts to follow these software +really want your programming artifacts to follow these software engineering principles. But at the same time, you should be careful in separating your scientific contribution (the master/workers algorithm) and the -artefacts used to test it (platform, deployment and workload). This is +artifacts used to test it (platform, deployment and workload). This is why SimGrid forces you to express your platform and deployment files in XML instead of using a programming interface: it forces a clear separation of concerns between things of very different nature. 
@@ -667,7 +667,7 @@ round-robin is completely suboptimal: most of the workers keep waiting for more work. We will move to a First-Come First-Served mechanism instead. -For that, your workers should explicitely request for work with a +For that, your workers should explicitly request for work with a message sent to a channel that is specific to their master. The name of that private channel can be the one used to categorize the executions, as it is already specific to each master. @@ -675,7 +675,7 @@ executions, as it is already specific to each master. The master should serve in a round-robin manner the requests it receives, until the time is up. Changing the communication schema can be a bit hairy, but once it works, you will see that such as simple -FCFS schema allows to double the amount of tasks handled over time +FCFS schema allows one to double the amount of tasks handled over time here. Things may be different with another platform file. Further Improvements diff --git a/docs/source/Tutorial_MPI_Applications.rst b/docs/source/Tutorial_MPI_Applications.rst index eae9d38532..b88ddbc237 100644 --- a/docs/source/Tutorial_MPI_Applications.rst +++ b/docs/source/Tutorial_MPI_Applications.rst @@ -15,9 +15,9 @@ Project `_ only require minor modifications to `run on top of SMPI `_. -This setting permits to debug your MPI applications in a perfectly -reproducible setup, with no Heisenbugs. Enjoy the full Clairevoyance -provided by the simulator while running what-if analysis on platforms +This setting permits one to debug your MPI applications in a perfectly +reproducible setup, with no Heisenbugs. Enjoy the full Clairvoyance +provided by the simulator while running what-if analyses on platforms that are still to be built! Several `production-grade MPI applications `_ use SimGrid for their integration and performance testing. @@ -41,7 +41,7 @@ How does it work? In SMPI, communications are simulated while computations are emulated. This means that while computations occur as they would in -the real systems, communication calls are intercepted and achived by +the real systems, communication calls are intercepted and achieved by the simulator. To start using SMPI, you just need to compile your application with @@ -58,7 +58,7 @@ per MPI rank as if it was another dynamic library. Then, MPI communication calls are implemented using SimGrid: data is exchanged through memory copy, while the simulator's performance models are used to predict the time taken by each communications. Any computations -occuring between two MPI calls are benchmarked, and the corresponding +occurring between two MPI calls are benchmarked, and the corresponding time is reported into the simulator. .. image:: /tuto_smpi/img/big-picture.svg @@ -95,7 +95,7 @@ simulated platform as a graph of hosts and network links. The elements basic elements (with :ref:`pf_tag_host` and :ref:`pf_tag_link`) are described first, and then the routes between -any pair of hosts are explicitely given with :ref:`pf_tag_route`. +any pair of hosts are explicitly given with :ref:`pf_tag_route`. Any host must be given a computational speed in flops while links must be given a latency and a bandwidth. You can write 1Gf for @@ -104,7 +104,7 @@ be given a latency and a bandwidth. You can write 1Gf for Routes defined with :ref:`pf_tag_route` are symmetrical by default, meaning that the list of traversed links from A to B is the same as -from B to A. Explicitely define non-symmetrical routes if you prefer. +from B to A. 
Explicitly define non-symmetrical routes if you prefer. Cluster with a Crossbar ....................... diff --git a/docs/source/XML_Reference.rst b/docs/source/XML_Reference.rst index 7e87d1247a..8c39d22684 100644 --- a/docs/source/XML_Reference.rst +++ b/docs/source/XML_Reference.rst @@ -304,7 +304,7 @@ and a download link. **Attributes:** :``version``: Version of the DTD, describing the whole XML format. - This versionning allow future evolutions, even if we + This versioning allow future evolutions, even if we avoid backward-incompatible changes. The current version is **4.1**. The ``simgrid_update_xml`` program can upgrade most of the past platform files to the most recent @@ -347,7 +347,7 @@ following functions: ------- -A path between two network locations, composed of several occurences +A path between two network locations, composed of several occurrences of :ref:`pf_tag_link` . **Parent tags:** :ref:`pf_tag_zone` |br| diff --git a/docs/source/app_s4u.rst b/docs/source/app_s4u.rst index 5bfdc53795..0f6b24a280 100644 --- a/docs/source/app_s4u.rst +++ b/docs/source/app_s4u.rst @@ -146,7 +146,7 @@ Activities ********** Activities represent the actions that consume a resource, such as a -:ref:`s4u::Comm ` that consumes the *transmiting power* of +:ref:`s4u::Comm ` that consumes the *transmitting power* of :ref:`s4u::Link ` resources. ======================= @@ -173,7 +173,7 @@ Finally, to wait at most until a specified time limit, use wait_for and wait_until are currently not implemented for Exec and Io activities. -Every kind of activities can be asynchronous: +Every kind of activity can be asynchronous: - :ref:`s4u::CommPtr ` are created with :cpp:func:`s4u::Mailbox::put_async() ` and @@ -319,7 +319,7 @@ The last twist is that by default in the simulator, the data starts to be exchanged only when both the sender and the receiver are announced (it waits until both :cpp:func:`put() ` and :cpp:func:`get() ` are posted). -In TCP, since you establish connexions beforehand, the data starts to +In TCP, since you establish connections beforehand, the data starts to flow as soon as the sender posts it, even if the receiver did not post its :cpp:func:`recv() ` yet. @@ -348,7 +348,7 @@ managed through the context. Provided that you never manipulate objects of type Foo directly but always FooPtr references (which are defined as `boost::intrusive_ptr `_ -), you will never have to explicitely release the resource that +), you will never have to explicitly release the resource that you use nor to free the memory of unused objects. Here is a little example: diff --git a/docs/source/app_smpi.rst b/docs/source/app_smpi.rst index 2cdd693447..775fce95c8 100644 --- a/docs/source/app_smpi.rst +++ b/docs/source/app_smpi.rst @@ -332,7 +332,7 @@ MPI_Allreduce - redbcast: reduce then broadcast, using default or tuned algorithms if specified - ompi_ring_segmented: ring algorithm used by OpenMPI - mvapich2_rs: rdb for small messages, reduce-scatter then allgather else - - mvapich2_two_level: SMP-aware algorithm, with mpich as intra algoritm, and rdb as inter (Change this behavior by using mvapich2 selector to use tuned values) + - mvapich2_two_level: SMP-aware algorithm, with mpich as intra algorithm, and rdb as inter (Change this behavior by using mvapich2 selector to use tuned values) - rab: default `Rabenseifner `_ implementation MPI_Reduce_scatter @@ -548,7 +548,7 @@ is an older approach that proves to be slower. 
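As an illustrative aside to the privatization discussion that continues below: SMPI folds all MPI ranks into a single simulated process, so a global variable would be shared by every rank unless it is privatized by one of the two mechanisms described here. A minimal sketch of the pattern that makes privatization necessary:

.. code-block:: cpp

   #include <mpi.h>
   #include <cstdio>

   // Global state: without privatization, all simulated ranks would
   // read and write this single copy.
   static int my_rank = -1;

   int main(int argc, char* argv[])
   {
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
     std::printf("Hello from rank %d\n", my_rank);
     MPI_Finalize();
     return 0;
   }

With the privatization option discussed in the Configuring_SimGrid hunks above set to mmap or dlopen, each rank sees its own copy of my_rank when run under smpirun.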
With the **mmap approach**, SMPI duplicates and dynamically switch the ``.data`` and ``.bss`` segments of the ELF process when switching the MPI ranks. This allows each ranks to have its own copy of the global -variables. No copy actually occures as this mechanism uses ``mmap()`` +variables. No copy actually occurs as this mechanism uses ``mmap()`` for efficiency. This mechanism is considered to be very robust on all systems supporting ``mmap()`` (Linux and most BSDs). Its performance is questionable since each context switch between MPI ranks induces @@ -567,10 +567,10 @@ link against the SimGrid library itself. With the **dlopen approach**, SMPI loads several copies of the same executable in memory as if it were a library, so that the global -variables get naturally dupplicated. It first requires the executable +variables get naturally duplicated. It first requires the executable to be compiled as a relocatable binary, which is less common for programs than for libraries. But most distributions are now compiled -this way for security reason as it allows to randomize the address +this way for security reason as it allows one to randomize the address space layout. It should thus be safe to compile most (any?) program this way. The second trick is that the dynamic linker refuses to link the exact same file several times, be it a library or a relocatable @@ -578,12 +578,12 @@ executable. It makes perfectly sense in the general case, but we need to circumvent this rule of thumb in our case. To that extend, the binary is copied in a temporary file before being re-linked against. ``dlmopen()`` cannot be used as it only allows 256 contextes, and as it -would also dupplicate simgrid itself. +would also duplicate simgrid itself. This approach greatly speeds up the context switching, down to about 40 CPU cycles with our raw contextes, instead of requesting several syscalls with the ``mmap()`` approach. Another advantage is that it -permits to run the SMPI contexts in parallel, which is obviously not +permits one to run the SMPI contexts in parallel, which is obviously not possible with the ``mmap()`` approach. It was tricky to implement, but we are not aware of any flaws, so smpirun activates it by default. @@ -623,7 +623,7 @@ Reducing your memory footprint If you get short on memory (the whole app is executed on a single node when simulated), you should have a look at the SMPI_SHARED_MALLOC and -SMPI_SHARED_FREE macros. It allows to share memory areas between processes: The +SMPI_SHARED_FREE macros. It allows one to share memory areas between processes: The purpose of these macro is that the same line malloc on each process will point to the exact same memory area. So if you have a malloc of 2M and you have 16 processes, this macro will change your memory consumption from 2M*16 to 2M @@ -739,7 +739,7 @@ fail without ``smpirun``. .............................................. In addition to the previous answers, some projects also need to be -explicitely told what compiler to use, as follows: +explicitly told what compiler to use, as follows: .. code-block:: shell diff --git a/docs/source/application.rst b/docs/source/application.rst index 8de98fd7e0..37115930b0 100644 --- a/docs/source/application.rst +++ b/docs/source/application.rst @@ -28,7 +28,7 @@ to mix several interfaces in the same simulation. your application is decomposed as a list of event handlers that are fired according to the trace. 
SimGrid comes with a build-in support for MPI traces (with solutions to import traces captured by several - MPI profilers). You can reuse this mecanism for any kind of trace + MPI profilers). You can reuse this mechanism for any kind of trace that you want to replay, for example to study how a P2P DHT overlay reacts to a given workload. - Simulating algorithms with one of the legacy interfaces: :ref:`MSG diff --git a/docs/source/community.rst b/docs/source/community.rst index e012d35480..2367d69336 100644 --- a/docs/source/community.rst +++ b/docs/source/community.rst @@ -39,7 +39,7 @@ to us and say hello! We love earing about how people use SimGrid. when we have a guest. Be warned that even if many people are connected to - the chanel, they may not be staring at their IRC windows. + the channel, they may not be staring at their IRC windows. So don't be surprised if you don't get an answer in the second, and turn to the mailing lists if nobody seems to be there. The logs of this channel are publicly @@ -66,8 +66,8 @@ Spread the word There are many ways to help the SimGrid project. The first and most natural one is to **use SimGrid for your research, and say so**. Cite the SimGrid framework in your papers and discuss of its advantages with -your colleagues to spread the word. When we ask for new fundings to -sustain the project, the amount of publications enabled by SimGrid is +your colleagues to spread the word. When we ask for new funding to +sustain the project, the number of publications enabled by SimGrid is always the first question we get. The more you use the framework, the better for us. @@ -189,7 +189,7 @@ It is in need of an overhaul: - cleanup, refactoring, usage of C++ features. - - The state comparison code works by infering types of blocks allocated on the + - The state comparison code works by inferring types of blocks allocated on the heap by following pointers from known roots (global variables, local variables). Usually the first type found for a given block is used even if a better one could be found later. By using a first pass of type inference, @@ -199,11 +199,11 @@ It is in need of an overhaul: - We might benefit from adding logic for handling some known types. For example, both `std::string` and `std::vector` have a capacity which might be larger than the current size of the container. We should ignore - the corresponding elements when comparing the states and infering the types. + the corresponding elements when comparing the states and inferring the types. - Another difficulty in the state comparison code is the detection of dangling pointers. We cannot easily know if a pointer is dangling and - dangling pointers might lead us to choose the wrong type when infering + dangling pointers might lead us to choose the wrong type when inferring heap blocks. We might mitigate this problem by delaying the reallocation of a freed block until there is no blocks pointing to it anymore using some sort of basic garbage-collector. @@ -213,7 +213,7 @@ MC: Hashing the states In order to speed up the state comparison an idea was to create a hash of the state. Only states with the same hash would need to be compared using the -state comparison algorithm. Some information should not be inclueded in the +state comparison algorithm. Some information should not be included in the hash in order to avoid considering different states which would otherwise would have been considered equal. @@ -221,7 +221,7 @@ The states could be indexed by their hash. 
Currently they are indexed by the number of processes and the amount of heap currently allocated (see `DerefAndCompareByNbProcessesAndUsedHeap`). -Good candidate informations for the state hashing: +Good candidate information for the state hashing: - number of processes; @@ -241,28 +241,28 @@ but it is currently disabled. Interface with the model-checked processes """""""""""""""""""""""""""""""""""""""""" -The model-checker reads many information about the model-checked process by +The model checker reads many information about the model-checked process by `process_vm_readv()`-ing brutally the data structure of the model-checked process leading to some inefficient code such as maintaining copies of complex C++ structures in XBT dynars. We need a sane way to expose the relevant -information to the model-checker. +information to the model checker. Generic simcalls """""""""""""""" We have introduced some generic simcalls which can be used to execute a -callback in SimGrid Maestro context. It makes it a lot easier to interface +callback in a SimGrid Maestro context. It makes it a lot easier to interface the simulated process with the maestro. However, the callbacks for the -model-checker which cannot decide how it should handle them. We would need a +model checker which cannot decide how it should handle them. We would need a solution for this if we want to be able to replace the simcalls the -model-checker cares about by generic simcalls. +model checker cares about by generic simcalls. Defining an API for writing Model-Checking algorithms """"""""""""""""""""""""""""""""""""""""""""""""""""" Currently, writing a new model-checking algorithms in SimGridMC is quite difficult: the logic of the model-checking algorithm is mixed with a lot of -low-level concerns about the way the model-checker is implemented. This makes it +low-level concerns about the way the model checker is implemented. This makes it difficult to write new algorithms and difficult to understand, debug, and modify the existing ones. We need a clean API to express the model-checking algorithms in a form which is closer to the text-book/paper description. This API must diff --git a/docs/source/index.rst b/docs/source/index.rst index 38a9bee2c9..dd414c6651 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -20,7 +20,7 @@ The simulation models are **fast** (`🖹 `__) **highly scalable** (`🖹 `__) while **theoretically sound and experimentally assessed** (`🖹 `__). Most of the time, SimGrid is used to predict the performance (time and energy) of a -given IT infrastructure, and it includes a prototypal model-checker to formally +given IT infrastructure, and it includes a prototype model checker to formally assess these systems. Technically speaking, SimGrid is a library. It is neither a graphical @@ -38,8 +38,8 @@ come and join us! SimGrid is a powerful tool, and this documentation will help you taking the best of it. Check its contents on the left. Each tutorial presents a classical use -case, in a fast and practical manner. The user manual containts more -throughfully information. In each part, the important concepts are concisely +case, in a fast and practical manner. The user manual contains more +thorough information. In each part, the important concepts are concisely introduced, before the reference manual. SimGrid is also described in several `scientific papers `_. 
diff --git a/docs/source/ns3.rst b/docs/source/ns3.rst index b7a53397fd..c4e27ea7a7 100644 --- a/docs/source/ns3.rst +++ b/docs/source/ns3.rst @@ -98,7 +98,7 @@ example of invalid platform: This can be reformulated as follows to make it usable with the ns-3 binding. -There is no direct connexion from alice to bob, but that's OK because +There is no direct connection from alice to bob, but that's OK because ns-3 automatically routes from point to point. .. code-block:: shell diff --git a/docs/source/platform.rst b/docs/source/platform.rst index aa2574749c..d7e20e1da6 100644 --- a/docs/source/platform.rst +++ b/docs/source/platform.rst @@ -52,7 +52,7 @@ simulated platform as a graph of hosts and network links. The most important elements are the basic ones: :ref:`pf_tag_host`, :ref:`pf_tag_link`, and similar. Then come the routes between any pair -of hosts, that are given explicitely with :ref:`pf_tag_route` (routes +of hosts, that are given explicitly with :ref:`pf_tag_route` (routes are symmetrical by default). Any host must be given a computational speed (in flops) while links must be given a latency (in seconds) and a bandwidth (in bytes per second). Note that you can write 1Gflops diff --git a/examples/s4u/actor-exiting/s4u-actor-exiting.cpp b/examples/s4u/actor-exiting/s4u-actor-exiting.cpp index cb479137b5..554ebd241c 100644 --- a/examples/s4u/actor-exiting/s4u-actor-exiting.cpp +++ b/examples/s4u/actor-exiting/s4u-actor-exiting.cpp @@ -5,7 +5,7 @@ /* There is two very different ways of being informed when an actor exits. * - * The this_actor::on_exit() function allows to register a function to be + * The this_actor::on_exit() function allows one to register a function to be * executed when this very actor exits. The registered function will run * when this actor terminates (either because its main function returns, or * because it's killed in any way). No simcall are allowed here: your actor diff --git a/examples/s4u/engine-filtering/s4u-engine-filtering.cpp b/examples/s4u/engine-filtering/s4u-engine-filtering.cpp index f798b6c60c..d03ea86367 100644 --- a/examples/s4u/engine-filtering/s4u-engine-filtering.cpp +++ b/examples/s4u/engine-filtering/s4u-engine-filtering.cpp @@ -33,7 +33,7 @@ public: }; /* This functor is a bit more complex, as it saves the current state when created. - * Then, it allows to easily retrieve the hosts which frequency changed since the functor creation. + * Then, it allows one to easily retrieve the hosts which frequency changed since the functor creation. */ class FrequencyChanged { std::map host_list; diff --git a/examples/smpi/mc/sendsend.tesh b/examples/smpi/mc/sendsend.tesh index 23874aa771..7bb8a481df 100644 --- a/examples/smpi/mc/sendsend.tesh +++ b/examples/smpi/mc/sendsend.tesh @@ -15,7 +15,7 @@ p Testing the paranoid model $ ../../../smpi_script/bin/smpirun -quiet -wrapper "${bindir:=.}/../../../bin/simgrid-mc" -np 2 -platform ${platfdir:=.}/cluster_backbone.xml --cfg=smpi/buffering:zero --log=xbt_cfg.thresh:warning ./smpi_sendsend > [0.000000] [mc_safety/INFO] Check a safety property. Reduction is: dpor. 
> [0.000000] [mc_global/INFO] ************************** -> [0.000000] [mc_global/INFO] *** DEAD-LOCK DETECTED *** +> [0.000000] [mc_global/INFO] *** DEADLOCK DETECTED *** > [0.000000] [mc_global/INFO] ************************** > [0.000000] [mc_global/INFO] Counter-example execution trace: > [0.000000] [mc_global/INFO] [(1)node-0.simgrid.org (0)] iSend(src=(1)node-0.simgrid.org (0), buff=(verbose only), size=(verbose only)) diff --git a/include/simgrid/msg.h b/include/simgrid/msg.h index a1d5cecc9c..91c438df4f 100644 --- a/include/simgrid/msg.h +++ b/include/simgrid/msg.h @@ -270,11 +270,11 @@ XBT_PUBLIC void MSG_config(const char* key, const char* value); /** @brief Initialize the MSG internal data. * @hideinitializer * - * It also check that the link-time and compile-time versions of SimGrid do + * It also checks that the link-time and compile-time versions of SimGrid do * match, so you should use this version instead of the #MSG_init_nocheck * function that does the same initializations, but without this check. * - * We allow to link against compiled versions that differ in the patch level. + * We allow linking against compiled versions that differ in the patch level. */ #define MSG_init(argc, argv) \ do { \ diff --git a/include/simgrid/s4u/Actor.hpp b/include/simgrid/s4u/Actor.hpp index 2d03fe5cec..0e336bad5e 100644 --- a/include/simgrid/s4u/Actor.hpp +++ b/include/simgrid/s4u/Actor.hpp @@ -432,7 +432,7 @@ XBT_PUBLIC std::string get_name(); /** @brief Returns the name of the current actor as a C string. */ XBT_PUBLIC const char* get_cname(); -/** @brief Returns the name of the host on which the curret actor is running. */ +/** @brief Returns the name of the host on which the current actor is running. */ XBT_PUBLIC Host* get_host(); /** @brief Suspend the current actor, that is blocked until resume()ed by another actor. */ diff --git a/src/bindings/lua/lua_host.cpp b/src/bindings/lua/lua_host.cpp index 7adcf7755b..c9e15239b8 100644 --- a/src/bindings/lua/lua_host.cpp +++ b/src/bindings/lua/lua_host.cpp @@ -208,7 +208,7 @@ void sglua_register_host_functions(lua_State* L) /* metatable.__index = simgrid.host * we put the host functions inside the host userdata itself: - * this allows to write my_host:method(args) for + * this allows one to write my_host:method(args) for * simgrid.host.method(my_host, args) */ lua_setfield(L, -2, "__index"); /* simgrid simgrid.host mt */ diff --git a/src/mc/checker/SafetyChecker.cpp b/src/mc/checker/SafetyChecker.cpp index b72c6c9125..ecc8ca3556 100644 --- a/src/mc/checker/SafetyChecker.cpp +++ b/src/mc/checker/SafetyChecker.cpp @@ -81,7 +81,7 @@ void SafetyChecker::run() { /* This function runs the DFS algorithm the state space. * We do so iteratively instead of recursively, dealing with the call stack manually. - * This allows to explore the call stack at wish. */ + * This allows one to explore the call stack at will. 
*/ while (not stack_.empty()) { diff --git a/src/mc/mc_global.cpp b/src/mc/mc_global.cpp index b63e1c1192..975d079063 100644 --- a/src/mc/mc_global.cpp +++ b/src/mc/mc_global.cpp @@ -86,7 +86,7 @@ void MC_run() void MC_show_deadlock() { XBT_INFO("**************************"); - XBT_INFO("*** DEAD-LOCK DETECTED ***"); + XBT_INFO("*** DEADLOCK DETECTED ***"); XBT_INFO("**************************"); XBT_INFO("Counter-example execution trace:"); for (auto const& s : mc_model_checker->getChecker()->get_textual_trace()) diff --git a/src/msg/msg_comm.cpp b/src/msg/msg_comm.cpp index 1ecb43d926..3bbc4c3a8b 100644 --- a/src/msg/msg_comm.cpp +++ b/src/msg/msg_comm.cpp @@ -144,7 +144,7 @@ msg_error_t MSG_comm_wait(msg_comm_t comm, double timeout) return comm->wait_for(timeout); } -/** @brief This function is called by a sender and permit to wait for each communication +/** @brief This function is called by a sender and permits waiting for each communication * * @param comm a vector of communication * @param nb_elem is the size of the comm vector diff --git a/src/s4u/s4u_Actor.cpp b/src/s4u/s4u_Actor.cpp index 9cfe9fa62d..360cc93eb5 100644 --- a/src/s4u/s4u_Actor.cpp +++ b/src/s4u/s4u_Actor.cpp @@ -137,7 +137,7 @@ void Actor::migrate(Host* new_host) kernel::actor::simcall([this, new_host]() { if (pimpl_->waiting_synchro != nullptr) { // The actor is blocked on an activity. If it's an exec, migrate it too. - // FIXME: implement the migration of other kind of activities + // FIXME: implement the migration of other kinds of activities kernel::activity::ExecImplPtr exec = boost::dynamic_pointer_cast(pimpl_->waiting_synchro); xbt_assert(exec.get() != nullptr, "We can only migrate blocked actors when they are blocked on executions."); diff --git a/src/simdag/sd_task.cpp b/src/simdag/sd_task.cpp index 85cc1f390e..c1b86b433d 100644 --- a/src/simdag/sd_task.cpp +++ b/src/simdag/sd_task.cpp @@ -69,7 +69,7 @@ static inline SD_task_t SD_task_create_sized(const char *name, void *data, doubl /** @brief create a end-to-end communication task that can then be auto-scheduled * - * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at + * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at * creation, and decouple them from the scheduling process where you just specify which resource should deliver the * mandatory power. * @@ -87,7 +87,7 @@ SD_task_t SD_task_create_comm_e2e(const char *name, void *data, double amount) /** @brief create a sequential computation task that can then be auto-scheduled * - * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at + * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at * creation, and decouple them from the scheduling process where you just specify which resource should deliver the * mandatory power. * @@ -109,7 +109,7 @@ SD_task_t SD_task_create_comp_seq(const char *name, void *data, double flops_amo /** @brief create a parallel computation task that can then be auto-scheduled * - * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at + * Auto-scheduling mean that the task can be used with SD_task_schedulev(). 
This allows one to specify the task costs at * creation, and decouple them from the scheduling process where you just specify which resource should deliver the * mandatory power. * @@ -136,7 +136,7 @@ SD_task_t SD_task_create_comp_par_amdahl(const char *name, void *data, double fl /** @brief create a complex data redistribution task that can then be auto-scheduled * * Auto-scheduling mean that the task can be used with SD_task_schedulev(). - * This allows to specify the task costs at creation, and decouple them from the scheduling process where you just + * This allows one to specify the task costs at creation, and decouple them from the scheduling process where you just * specify which resource should communicate. * * A data redistribution can be scheduled on any number of host. @@ -882,7 +882,7 @@ void SD_task_build_MxN_1D_block_matrix(SD_task_t task, int src_nb, int dst_nb){ /** @brief Auto-schedules a task. * - * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows to specify the task costs at + * Auto-scheduling mean that the task can be used with SD_task_schedulev(). This allows one to specify the task costs at * creation, and decouple them from the scheduling process where you just specify which resource should deliver the * mandatory power. * diff --git a/src/smpi/mpi/smpi_request.cpp b/src/smpi/mpi/smpi_request.cpp index 04739915b9..bb5199b4e4 100644 --- a/src/smpi/mpi/smpi_request.cpp +++ b/src/smpi/mpi/smpi_request.cpp @@ -756,7 +756,7 @@ void Request::iprobe(int source, int tag, MPI_Comm comm, int* flag, MPI_Status* s4u::Mailbox* mailbox; request->print_request("New iprobe"); - // We have to test both mailboxes as we don't know if we will receive one one or another + // We have to test both mailboxes as we don't know if we will receive one or another if (simgrid::config::get_value("smpi/async-small-thresh") > 0) { mailbox = smpi_process()->mailbox_small(); XBT_DEBUG("Trying to probe the perm recv mailbox"); diff --git a/teshsuite/msg/energy-ptask/energy-ptask.c b/teshsuite/msg/energy-ptask/energy-ptask.c index 94f3a9b920..c1b73bb343 100644 --- a/teshsuite/msg/energy-ptask/energy-ptask.c +++ b/teshsuite/msg/energy-ptask/energy-ptask.c @@ -12,7 +12,7 @@ XBT_LOG_NEW_DEFAULT_CATEGORY(msg_test, "Messages specific for this msg example") * * - energy-ptask/energy-ptask.c: Demonstrates the use of @ref MSG_parallel_task_create, to create special * tasks that run on several hosts at the same time. The resulting simulations are very close to what can be - * achieved in @ref SD_API, but still allows to use the other features of MSG (it'd be cool to be able to mix + * achieved in @ref SD_API, but still allows one to use the other features of MSG (it'd be cool to be able to mix * interfaces, but it's not possible ATM). */ diff --git a/teshsuite/smpi/mpich3-test/util/mtest_datatype_gen.c b/teshsuite/smpi/mpich3-test/util/mtest_datatype_gen.c index 76dd032fab..17d3d6aed5 100644 --- a/teshsuite/smpi/mpich3-test/util/mtest_datatype_gen.c +++ b/teshsuite/smpi/mpich3-test/util/mtest_datatype_gen.c @@ -162,7 +162,7 @@ static void MTestResetDatatypeGen() void MTestInitFullDatatypes(void) { - /* Do not allow to change datatype test level during loop. + /* Do not allow the datatype test level to change during loop. * Otherwise indexes will be wrong. * Test must explicitly call reset or wait for current datatype loop being * done before changing to another test level. 
*/ @@ -178,7 +178,7 @@ void MTestInitFullDatatypes(void) void MTestInitMinDatatypes(void) { - /* Do not allow to change datatype test level during loop. + /* Do not allow the datatype test level to change during loop. * Otherwise indexes will be wrong. * Test must explicitly call reset or wait for current datatype loop being * done before changing to another test level. */ @@ -194,7 +194,7 @@ void MTestInitMinDatatypes(void) void MTestInitBasicDatatypes(void) { - /* Do not allow to change datatype test level during loop. + /* Do not allow the datatype test level to change during loop. * Otherwise indexes will be wrong. * Test must explicitly call reset or wait for current datatype loop being * done before changing to another test level. */
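As a final illustrative aside: the sendsend.tesh hunk earlier in this patch expects the model checker to print the DEADLOCK banner for the smpi_sendsend example under --cfg=smpi/buffering:zero. The classic pattern that such a test exercises looks roughly like the sketch below (a hedged reconstruction, not the actual sendsend source): with zero buffering, each MPI_Send blocks until the matching receive is posted, so neither rank ever reaches its MPI_Recv and the model checker reports a deadlock.

.. code-block:: cpp

   #include <mpi.h>

   int main(int argc, char* argv[])
   {
     int rank  = 0;
     int token = 0;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     int peer = 1 - rank; // assumes exactly 2 ranks, as in the tesh file

     // Both ranks send first: with zero buffering, each MPI_Send waits for the
     // matching MPI_Recv that the other rank never gets a chance to post.
     MPI_Send(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
     MPI_Recv(&token, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

     MPI_Finalize();
     return 0;
   }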