@page inside_tests Testing SimGrid

This page will teach you how to run the tests, selecting the ones you
want, and how to add new tests to the archive.

SimGrid code coverage is usually between 70% and 80%, which is much
higher than most projects out there. This is because SimGrid is a
rather complex project, and a solid test suite lets us modify it with
less fear.

We have two sets of tests in SimGrid: each of the 10,000+ unit tests
checks one specific case for one specific function, while each of the
500+ integration tests runs a given simulation specifically intended
to exercise a larger set of functions together. Every example provided
in examples/ is used as an integration test, while some other torture
tests and corner-case integration tests are located in teshsuite/.
For each integration test, we ensure that the output exactly matches
the defined expectations. Since SimGrid displays the timestamp of
every logged line, this ensures that any change in the models'
predictions will be noticed. All these tests should ensure that
SimGrid is safe to use and to depend on.

@section inside_tests_runintegration Running the tests

Running the tests is done using the ctest binary that comes with
cmake. These tests are run for every commit and the result is publicly
<a href="https://ci.inria.fr/simgrid/">available</a>.

@verbatim
ctest                     # Launch all tests
ctest -R msg              # Launch only the tests whose names match the string "msg"
ctest -j4                 # Launch all tests in parallel, at most 4 at the same time
ctest --verbose           # Display all details on what's going on
ctest --output-on-failure # Only get verbose output for the tests that fail

ctest -R msg- -j5 --output-on-failure # You changed MSG and want to check that you didn't break anything, huh?
                                      # That's fine, I do so all the time myself.
@endverbatim

@section inside_tests_rununit Running the unit tests

All unit tests are packed into the unit-tests binary, which lives at
the source root. These tests are run when you launch ctest, don't worry.

@verbatim
make unit-tests     # Rebuild the test runner when needed
./unit-tests        # Launch all tests
./unit-tests --help # Refresh your memory if you forgot how it works
@endverbatim

@section inside_tests_add_units Adding unit tests

Our unit tests are written using the Catch2 library, which is included
in the source tree. For examples, check the files listed at the end of
tools/cmake/Tests.cmake.

It is important to keep your tests fast. We run them very often, and
you should strive to make them as fast as possible, so as not to
bother the other developers. Do not hesitate to stress-test your code,
but make sure that it runs reasonably fast, or nobody will run "ctest"
before committing code.

@section inside_tests_add_integration Adding integration tests

TESH (the TEsting SHell) is the test runner that we wrote for our
integration tests. It is distributed with the SimGrid sources, and
even comes with a man page. TESH ensures that the output produced by a
command perfectly matches the expected output. This is very precious
to ensure that no change modifies the timings computed by the models
without being noticed.

To add a new integration test, you thus have 3 things to do:

- <b>Write the code exercising the feature you target</b>. You should
  strive to make this code clear, well documented and informative for
  the users. If you manage to do so, put this somewhere under
  examples/ and modify the cmake files as explained on this page:
  @ref inside_cmake_examples. If you feel like you should write a
  torture test that is not interesting to the users (because nobody
  would sanely write something similar in user code), then put it
  under teshsuite/.

- <b>Write the tesh file</b>, containing the command to run, the
  provided input (if any, but almost no SimGrid test provides such an
  input) and the expected output. Check the tesh man page for more
  information.@n
  Tesh is sometimes annoying, as you have to ensure that the expected
  output will always be exactly the same. In particular, you should
  not output machine-dependent information such as absolute data
  paths, nor memory addresses, as they would change on each run.
  Several tricks can be used here, such as the obfuscation of the
  memory addresses unless the verbose logs are displayed (using the
  #XBT_LOG_ISENABLED() macro), or the modification of the log formats
  to hide the timings when they depend on the host machine.@n
  The script located in <project/directory>/tools/tesh/generate_tesh
  can help you a lot, in particular if the output is large (though a
  smaller output is preferable). There are also example tesh files in
  the <project/directory>/tools/tesh/ directory, which can be useful
  to understand the tesh syntax.

- <b>Add your test in the cmake infrastructure</b>. For that, modify
  the following file:
@verbatim
<project/directory>/teshsuite/<interface eg msg>/CMakeLists.txt
@endverbatim

Make sure to pick a wise name for your test. It is often useful to
check a category of tests together. The only way to do so in ctest
is to use the -R argument, which specifies a regular expression that
the test names must match. For example, you can run all MSG tests
with "ctest -R msg". That explains the importance of the test names.

Once the name is chosen, create a new test by adding a line similar to
the following (assuming that you use tesh as expected).

@verbatim
# Usage: ADD_TEST(test-name ${CMAKE_BINARY_DIR}/bin/tesh <options> <tesh-file>)
#  option --setenv bindir  sets the directory containing the binary
#         --setenv srcdir  sets the directory containing the source file
#         --cd             sets the working directory
ADD_TEST(my-test-name ${CMAKE_BINARY_DIR}/bin/tesh
         --setenv bindir=${CMAKE_BINARY_DIR}/examples/my-test/
         --setenv srcdir=${CMAKE_HOME_DIRECTORY}/examples/my-test/
         --cd ${CMAKE_HOME_DIRECTORY}/examples/my-test/
         ${CMAKE_HOME_DIRECTORY}/examples/deprecated/msg/io/io.tesh
)
@endverbatim

As usual, you must run "make distcheck" after modifying the cmake files,
to ensure that you did not forget any files in the distributed archive.

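As an illustration of the tesh step above, here is a minimal, hypothetical tesh file (the binary name and the output lines are made up for the example): each command to run is prefixed with <tt>$</tt>, and each expected output line with <tt>></tt>.

@verbatim
$ ${bindir:=.}/my-test ${srcdir:=.}/my-platform.xml
> [host-0:main:(1) 0.000000] [my_test/INFO] Simulation started
> [host-0:main:(1) 0.001000] [my_test/INFO] Simulation ended
@endverbatim

Tesh runs the command and fails if the actual output differs from the <tt>></tt> lines in any way, including in the timestamps, which is exactly how changes in the models' predictions get caught.
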
@section inside_tests_ci Continuous Integration

We use several systems to automatically test SimGrid with a large set
of parameters, across as many platforms as possible.
We use <a href="https://ci.inria.fr/simgrid/">Jenkins on Inria
servers</a> as a workhorse: it runs all of our tests for many
configurations. It takes a long time to answer and it often reports
issues, but when it is green, you know that SimGrid is very fit!
We use <a href="https://travis-ci.org/simgrid/simgrid">Travis</a> to
quickly run some tests on Linux and Mac. It answers quickly but may
miss issues. And we use <a href="https://ci.appveyor.com/project/mquinson/simgrid">AppVeyor</a>
to build and somehow test SimGrid on Windows.

@subsection inside_tests_jenkins Jenkins on the Inria CI servers

You should not have to change the configuration of the Jenkins tool
yourself, although you may have to change the slaves' configuration
using the <a href="https://ci.inria.fr">CI interface of Inria</a> --
refer to the <a href="https://wiki.inria.fr/ciportal/">CI documentation</a>.

The result can be seen here: https://ci.inria.fr/simgrid/

We have 2 interesting projects on Jenkins:
@li <a href="https://ci.inria.fr/simgrid/job/SimGrid/">SimGrid</a>
  is the main project, running the tests that we spoke about.@n It is
  configured (on Jenkins) to run the script <tt>tools/jenkins/build.sh</tt>
@li <a href="https://ci.inria.fr/simgrid/job/SimGrid-DynamicAnalysis/">SimGrid-DynamicAnalysis</a>
  should be called "nightly", because it does not only run dynamic
  tests, but a whole bunch of long-lasting tests: valgrind (memory
  errors), gcovr (coverage), Sanitizers (bad pointer usage, threading
  errors, use of unspecified C constructs) and the clang static
  analyzer.@n It is configured (on Jenkins) to run the script
  <tt>tools/jenkins/DynamicAnalysis.sh</tt>

In each case, SimGrid gets built in
@verbatim
/builds/workspace/$PROJECT/build_mode/$CONFIG/label/$SERVER/build
@endverbatim
with $PROJECT being for instance "SimGrid", $CONFIG "DEBUG" or
"ModelChecker", and $SERVER for instance "simgrid-fedora20-64-clang".

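For instance, with the example values from the paragraph above (a hypothetical but representative combination), the workspace path expands as follows:

```shell
# Example values taken from the text above (hypothetical combination)
PROJECT=SimGrid
CONFIG=ModelChecker
SERVER=simgrid-fedora20-64-clang

# The resulting build directory on that Jenkins slave
echo "/builds/workspace/$PROJECT/build_mode/$CONFIG/label/$SERVER/build"
# prints /builds/workspace/SimGrid/build_mode/ModelChecker/label/simgrid-fedora20-64-clang/build
```
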
If some configurations are known to fail on some systems (such as
model-checking on non-Linux systems), go to your project and click on
"Configuration". There, find the field "combination filter" (if your
interface language is English) and tick the checkbox; then add a
groovy expression to disable a specific configuration. For example, in
order to disable the "ModelChecker" build on host
"small-netbsd-64-clang", use:

@verbatim
(label=="small-netbsd-64-clang").implies(build_mode!="ModelChecker")
@endverbatim

Just for the record, the slaves were created from the available
template with the following commands:

@verbatim
# Debian/Ubuntu
apt-get install gcc g++ gfortran automake cmake libboost-dev openjdk-8-jdk openjdk-8-jre libxslt-dev libxml2-dev libevent-dev libunwind-dev libdw-dev htop git python3 xsltproc libboost-context-dev
# for DynamicAnalysis:
apt-get install jacoco libjacoco-java libns3-dev pcregrep gcovr ant lua5.3-dev sloccount

# Fedora
dnf install libboost-devel openjdk-8-jdk openjdk-8-jre libxslt-devel libxml2-devel xsltproc git python3 libdw-devel libevent-devel libunwind-devel htop lua5.3-devel

# NetBSD
pkg_add cmake gcc7 boost boost-headers automake openjdk8 libxslt libxml2 libunwind git htop python36

# openSUSE
zypper install cmake automake clang boost-devel java-1_8_0-openjdk-devel libxslt-devel libxml2-devel xsltproc git python3 libdw-devel libevent-devel libunwind-devel htop binutils gcc7-fortran

# FreeBSD
pkg install boost-libs cmake openjdk8 automake libxslt libxml2 libunwind git htop python3 automake gcc6 flang elfutils libevent
# + clang-devel from ports

# macOS
brew install cmake boost libunwind-headers libxslt git python3
@endverbatim

@subsection inside_tests_travis Travis

Travis is a free (as in free beer) Continuous Integration system that
open-source projects can use freely. It is very well integrated in the
GitHub ecosystem. There is plenty of documentation out there. Our
configuration is in the file .travis.yml, as it should be, and the
result is here: https://travis-ci.org/simgrid/simgrid

The .travis.yml configuration file can be useful if you fail to get
SimGrid to compile on modern Mac systems. We use the @c brew package
manager there, and it works like a charm.

@subsection inside_tests_appveyor AppVeyor

AppVeyor aims at becoming the Travis of Windows. It is maybe less
mature than Travis, or maybe it is just that I am less trained in
Windows. Our configuration is in the file appveyor.yml, as it should
be, and the result is here: https://ci.appveyor.com/project/mquinson/simgrid

We use @c Choco as a package manager on AppVeyor, and it is sufficient
for us. In the future, we will probably move to the Ubuntu subsystem
of Windows 10: SimGrid performs very well under these settings, as
tested on Inria's CI servers. For the time being, having a native
library is still useful for the Java users who don't want to install
anything beyond Java on their Windows machine.

@subsection inside_tests_debian Debian builders

Since SimGrid is packaged in Debian, we benefit from their huge
testing infrastructure. That's an interesting torture test for our
code base. The downside is that it only applies to the released
versions of SimGrid. That is why the Debian build does not stop when
the tests fail: post-release fixes do not fit well in our workflow,
and we fix only the most important breakages.

The build results are here:
https://buildd.debian.org/status/package.php?p=simgrid

@subsection inside_tests_sonarqube SonarQube

SonarQube is an open-source code quality analysis solution. Their nice
code scanners are provided as plugins. The one for C++ is not free,
but open-source projects can use it at no cost. That is what we are
doing.

Don't miss the great-looking dashboard here:
https://sonarcloud.io/dashboard?id=simgrid_simgrid

This tool is enriched by the script @c tools/internal/travis-sonarqube.sh
that is run from @c .travis.yml.