+\verbatim
+int sender()
+{
+  m_task_t task = MSG_task_create("Task", task_comp_size, task_comm_size,
+                                  calloc(1, sizeof(double)));
+  *((double *) task->data) = MSG_get_clock();
+  MSG_task_put(task, slaves[i % slaves_count], PORT_22);
+  INFO0("Send completed");
+  return 0;
+}
+int receiver()
+{
+  m_task_t task = NULL;
+  double time1, time2;
+
+  time1 = MSG_get_clock();
+  MSG_task_get(&task, PORT_22);
+  time2 = MSG_get_clock();
+  /* If the send was posted after our receive began, use the date
+     stored by the sender as the communication start time. */
+  if (time1 < *((double *) task->data))
+    time1 = *((double *) task->data);
+  INFO1("Communication time : \"%f\" ", time2 - time1);
+  free(task->data);
+  MSG_task_destroy(task);
+  return 0;
+}
+\endverbatim
+
+\subsection faq_MIA_SimDag SimDag related questions
+
+\subsubsection faq_SG_comm Implementing communication delays between tasks.
+
+A classic question from SimDag newcomers is how to express a
+communication delay between tasks. In SimDag, both computations and
+communications are modeled as tasks. So, to model a data dependency
+between two DAG tasks t1 and t2, you have to create three SD_tasks
+(t1, t2 and a communication task c) and add dependencies in the
+following way:
+
+\verbatim
+SD_task_dependency_add(NULL, NULL, t1, c);
+SD_task_dependency_add(NULL, NULL, c, t2);
+\endverbatim
+
+This way, task t2 cannot start before the end of the communication c,
+which in turn cannot start before t1 ends.
+
+When creating task c, you have to associate an amount of data (in bytes)
+corresponding to what has to be sent by t1 to t2.
+
+Finally, to schedule the communication task c, you have to build a list
+comprising the workstations on which t1 and t2 are scheduled (w1 and w2
+for example) and build a communication matrix that should look like
+[0, amount, 0, 0]: the entry at row i, column j of this row-major
+matrix gives the number of bytes sent from the i-th to the j-th
+workstation of the list.
+
+\subsubsection faq_SG_DAG How to implement a distributed dynamic scheduler of DAGs.
+
+Distributed scheduling is somehow "contagious": once you start making
+distributed decisions, there is no way to handle DAGs directly anymore
+(unless we are missing something). You have to encode your DAG in terms
+of communicating processes to make the whole scheduling process
+distributed. Here is an example of how you could do that. Assume T1
+has to be done before T2.
+
+\verbatim
+ int your_agent(int argc, char *argv[]) {
+   ...
+   T1 = MSG_task_create(...);
+   T2 = MSG_task_create(...);
+   ...
+   while (1) {
+     ...
+     if (cond) MSG_task_execute(T1);
+     ...
+     if ((MSG_task_get_remaining_computation(T1) == 0.0) && (you_re_in_a_good_mood))
+       MSG_task_execute(T2);
+     else {
+       /* do something else */
+     }
+   }
+ }
+\endverbatim
+
+If you decide that the distributed part is not that important and that
+DAGs really are the level of abstraction you want to work with, then you
+should give \ref SD_API a try.
+
+\subsection faq_MIA_generic Generic features
+
+\subsubsection faq_more_processes Increasing the amount of simulated processes
+
+Here are a few tricks you can apply if you want to increase the amount
+of processes in your simulations.
+
+ - <b>A few thousand simulated processes</b> (soft tricks)\n
+ SimGrid can use either the pthreads library or UNIX98 contexts. On
+ most systems, the number of pthreads is limited, so your
+ simulation may be limited for a stupid reason. This is especially
+ true with the current Linux pthreads: I cannot get more than
+ 2,000 simulated processes with pthreads on my box, whereas UNIX98
+ contexts allow me to raise the limit to 25,000 simulated processes
+ on my laptop.\n\n
+ The <tt>--with-context</tt> option of the <tt>./configure</tt>
+ script allows you to choose between UNIX98 contexts
+ (<tt>--with-context=ucontext</tt>) and the pthread version
+ (<tt>--with-context=pthread</tt>). The default value is ucontext
+ when the script detects a working UNIX98 context implementation. On
+ Windows boxes, the provided value is discarded and an adapted
+ version is picked instead.\n\n
+ We experienced some issues with contexts on a few rare systems
+ (Solaris 8 and lower, or old Alpha Linuxes, come to mind). The main
+ problem is that the configure script detects the contexts as
+ functional when they are not. If you happen to use such a system,
+ switch manually to the pthread version, and provide us with a good
+ patch for the configure script so that it is done automatically ;)
+
+ - <b>Hundreds of thousands of simulated processes</b> (hard-core tricks)\n
+ As explained above, SimGrid can use UNIX98 contexts to represent
+ and handle the simulated processes. Thanks to this, the main
+ limitation on the number of simulated processes becomes the
+ available memory.\n\n
+ Here are some tricks I had to use in order to run a token ring
+ between 25,000 processes on my laptop (1Gb memory, 1.5Gb swap).\n
+ - First of all, make sure your code runs for a few hundred
+ processes before trying to push the limit. Make sure it's
+ valgrind-clean, i.e. that valgrind reports neither memory
+ errors nor memory leaks. Indeed, numerous simulated processes
+ result in a *fat* simulation, hindering debugging.
+ - It was really boring to write 25,000 entries in the deployment
+ file, so I wrote a little script,
+ <tt>examples/gras/mutual_exclusion/simple_token/make_deployment.pl</tt>, which you may
+ want to adapt to your case. You could also think about hijacking
+ the SURFXML parser (have a look at \ref faq_flexml_bypassing).
+ - The deployment file became quite big, so I had to do what is in
+ the FAQ entry \ref faq_flexml_limit.
+ - Each UNIX98 context has its own stack. As debugging stack issues is
+ quite hairy, the default stack size is a bit overestimated so that
+ users don't get into trouble because of it. To increase the number
+ of processes, you want to tune this size. This is the
+ <tt>STACK_SIZE</tt> define in
+ <tt>src/xbt/xbt_context_sysv.c</tt>, which is 128kb by default.
+ Reduce this as much as you can, but be warned that if this value
+ is too low, you'll get a segfault. The token ring example, which
+ is quite simple, runs with 40kb stacks.
+ - You may tweak the logs to reduce the stack size further. When
+ logging something, we try to build the string to display in a
+ char array on the stack. The size of this array is constant (and
+ equal to XBT_LOG_BUFF_SIZE, defined in include/xbt/log.h). If the
+ string is too large to fit this buffer, we move to a dynamically
+ sized buffer. In that case, we have to traverse the log
+ event arguments once to compute the size we need for the buffer,
+ malloc it, and traverse the argument list again to do the actual
+ job.\n
+ The idea here is to set XBT_LOG_BUFF_SIZE to 1, forcing the logs
+ to use a dynamic buffer each time. This allows us to lower the
+ stack size further at the price of some performance loss...\n
+ This allowed me to reduce the stack size to... 4kb. That is, on
+ my 1Gb laptop, I can run more than 250,000 processes!
+
+\subsubsection faq_MIA_batch_scheduler Is there a native support for batch schedulers in SimGrid?
+
+No, there is no native support for batch schedulers and none is
+planned, because this is a very specific need (and doing it in a
+generic way is thus very hard). However, some people have implemented
+their own batch schedulers. Vincent Garonne wrote one during his PhD
+and put his code in the contrib directory of our SVN so that others can
+keep working on it. You may find inspiring ideas in it.
+
+\subsubsection faq_MIA_checkpointing I need a checkpointing thing
+
+Actually, it depends on whether you want to checkpoint the simulation, or to
+simulate checkpoints.
+
+The first one could help if your simulation is a long-standing process you
+want to keep running even across hardware issues. It could also help to
+<i>rewind</i> the simulation by occasionally jumping back to an old
+checkpoint to cancel recent calculations.\n
+Unfortunately, such a thing will probably never exist in SG. One would have to
+duplicate all data structures, because doing a rewind at the simulator level
+is very, very hard (not to mention the malloc/free operations that might
+have been done in between). Instead, you may be interested in the Libckpt
+library (http://www.cs.utk.edu/~plank/plank/www/libckpt.html). This is the
+checkpointing solution used in the Condor project, for example. It makes it
+easy to create checkpoints (at the OS level, creating something like core
+files) and to restart from them when needed.
+
+If you want to simulate checkpoints instead, it means that you want the
+state of an executing task (in particular, the progress made towards
+completion) to be saved somewhere. So if a host (and the task executing on
+it) fails (cf. #MSG_HOST_FAILURE), then the task can be restarted
+from the last checkpoint.\n
+
+Actually, such a thing does not exist in SimGrid either, but only
+because we don't think it is fundamental: it can be done in user code
+at relatively low cost. You could for example use a watcher that
+periodically gets the remaining amount of work to do (using
+MSG_task_get_remaining_computation()), or fragment the task into smaller
+subtasks.
+
+\subsection faq_platform Platform building and Dynamic resources
+
+\subsubsection faq_platform_example Where can I find SimGrid platform files?
+
+There are several small examples in the archive, in the examples/msg
+directory. From time to time, we are asked for other files, but we
+don't have much at hand right now.
+
+You should refer to the Platform Description Archive
+(http://pda.gforge.inria.fr) project to see the other platform files we
+have available, as well as the Simulacrum simulator, which is meant to
+generate SimGrid platforms using all the classical generation algorithms.
+
+\subsubsection faq_platform_alnem How can I automatically map an existing platform?
+
+We are working on a project called ALNeM (Application-Level Network
+Mapper) whose goal is to automatically discover the topology of an
+existing network. Its output will be a platform description file
+following the SimGrid syntax, so everybody will get the ability to map
+their own lab network (and contribute it to the catalog project).
+This tool is not ready yet, but it is moving forward quite fast. Just
+stay tuned.
+
+\subsubsection faq_platform_synthetic Generating synthetic but realistic platforms
+
+The third possibility to get a platform file (after manual or
+automatic mapping of real platforms) is to generate synthetic
+platforms. Getting a realistic result is not a trivial task; moreover,
+nobody is really able to define what "realistic" means when
+speaking of topology files. You can find some more thoughts on this
+topic in these
+<a href="http://graal.ens-lyon.fr/~alegrand/articles/Simgrid-Introduction.pdf">slides</a>.
+
+If you are looking for an actual tool, we have a little tool to
+annotate Tiers-generated topologies. This Perl script is in the
+<tt>tools/platform_generation/</tt> directory of the SVN. Dinda et al.
+released a very comparable tool called GridG.
+
+\subsubsection faq_SURF_dynamic Expressing dynamic resource availability in platform files
+
+A nice feature of SimGrid is that it enables you to seamlessly use
+resources whose availability changes over time. When you build a
+platform, you generally declare hosts like this:
+
+\verbatim
+ <host id="host A" power="100.00"/>
+\endverbatim
+
+If you want the availability of "host A" to change over time, the only
+thing you have to do is change this definition as follows:
+
+\verbatim
+ <host id="host A" power="100.00" availability_file="trace_A.txt" state_file="trace_A_failure.txt"/>
+\endverbatim
+
+For hosts, availability files are expressed as a fraction of the
+available power. Let's have a look at what "trace_A.txt" may look like:
+
+\verbatim
+PERIODICITY 1.0
+0.0 1.0
+11.0 0.5
+20.0 0.9
+\endverbatim
+
+At time 0, our host will deliver 100 flop/s. At time 11.0, it will
+deliver only 50 flop/s until time 20.0, when it will start
+delivering 90 flop/s. Finally, at time 21.0 (20.0 plus the periodicity
+1.0), we'll be back at the beginning and it will deliver 100 flop/s again.
+
+Now let's look at the state file:
+\verbatim
+PERIODICITY 10.0
+1.0 -1.0
+2.0 1.0
+\endverbatim
+
+A negative value means "off" while a positive one means "on". At time
+0.0, the host is on. At time 1.0, it is turned off, and at time 2.0 it
+is turned on again. The trace then wraps around at time 12.0 (2.0 plus
+the periodicity 10.0), so the host is turned off again at time 13.0,
+turned back on at time 14.0, and so on.
+
+Now, let's look at how the same kind of thing can be done for network
+links. A usual declaration looks like:
+
+\verbatim
+ <link id="LinkA" bandwidth="10.0" latency="0.2"/>
+\endverbatim
+
+You have the following options at your disposal: bandwidth_file,
+latency_file and state_file. The only difference with hosts is that
+bandwidth_file and latency_file do not express a fraction of the
+available power but are expressed directly in bytes per second and in
+seconds, respectively.
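For example, a hypothetical "bandwidth_LinkA.txt" (the file name and values below are purely illustrative) would follow the same syntax as host traces, with absolute values in bytes per second instead of fractions:

\verbatim
PERIODICITY 12.0
0.0 10000000
11.0 1250000
\endverbatim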
+
+\subsubsection faq_platform_multipath How to express multipath routing in platform files?
+
+It is unfortunately impossible to express the fact that there is more
+than one routing path between two given hosts. Let's consider the
+following platform file:
+
+\verbatim
+<route src="A" dst="B">
+ <link:ctn id="1"/>
+</route>
+<route src="B" dst="C">
+ <link:ctn id="2"/>
+</route>
+<route src="A" dst="C">
+ <link:ctn id="3"/>
+</route>
+\endverbatim
+
+Although it is perfectly valid, it does not mean that data traveling
+from A to C can either go directly (using link 3) or through B (using
+links 1 and 2). It simply means that the routing on the graph is not
+trivial, and that data does not necessarily follow the shortest path in
+number of hops on this graph. Another way to say it is that there is no
+implicit routing in these route descriptions. The system will only use
+the routes you declare (such as <route src="A" dst="C"><link:ctn
+id="3"/></route>), without trying to build new routes by aggregating
+the provided ones.
+
+You are also free to declare platforms where the routing is not
+symmetric. For example, add the following to the previous file:
+
+\verbatim
+<route src="C" dst="A">
+ <link:ctn id="2"/>
+ <link:ctn id="1"/>
+</route>
+\endverbatim
+
+This makes sure that data from C to A goes through B whereas data from A
+to C goes directly. Don't worry about the realism of such settings,
+since we've seen far weirder situations in real platforms (in fact, it
+is the realism of very regular platforms which is questionable, but
+that's another story).
+
+\subsubsection faq_flexml_bypassing Bypassing the XML parser with your own C functions
+
+So you want to bypass the XML parser, eh? Maybe to do some parameter
+sweep experiments on your simulations, or something like that? This is
+possible, and it's not even really difficult (well, such a brutal idea
+could be harder to implement). Here is how it goes.
+
+For this, you first have to remember that the XML parsing in SimGrid is
+done using a tool called FleXML. Given a DTD, it generates a flex-based
+parser. If you want to bypass the parser, you need to provide some code
+mimicking what it does and replacing it in its interactions with the
+SURF code. So, let's have a look at these interactions.
+
+FleXML parsers are close to classical SAX parsers. This means that a
+well-formed SimGrid platform XML file results in the following
+"events":
+
+ - start "platform_description" with attribute version="2"
+ - start "host" with attributes id="host1" power="1.0"
+ - end "host"
+ - start "host" with attributes id="host2" power="2.0"
+ - end "host"
+ - start "link" with ...
+ - end "link"
+ - start "route" with ...
+ - start "link:ctn" with ...
+ - end "link:ctn"
+ - end "route"
+ - end "platform_description"
+
+The communication from the parser to the SURF code uses two means:
+Attributes get copied into some global variables, and a surf-provided
+function gets called by the parser for each event. For example, the event
+ - start "host" with attributes id="host1" power="1.0"
+
+makes the parser do something roughly equivalent to:
+\verbatim
+ strcpy(A_host_id,"host1");
+ A_host_power = 1.0;
+ STag_host();
+\endverbatim
+
+In SURF, we attach callbacks to the different events by initializing the
+function pointers to the right SURF functions. Since there can be
+more than one callback attached to the same event (if more than one
+model is in use, for example), they are stored in a dynar. Example from
+workstation_ptask_L07.c:
+\verbatim
+ /* Adding callback functions */
+ surf_parse_reset_parser();
+ surfxml_add_callback(STag_surfxml_host_cb_list, &parse_cpu_init);
+ surfxml_add_callback(STag_surfxml_prop_cb_list, &parse_properties);
+ surfxml_add_callback(STag_surfxml_link_cb_list, &parse_link_init);
+ surfxml_add_callback(STag_surfxml_route_cb_list, &parse_route_set_endpoints);
+ surfxml_add_callback(ETag_surfxml_link_c_ctn_cb_list, &parse_route_elem);
+ surfxml_add_callback(ETag_surfxml_route_cb_list, &parse_route_set_route);
+
+ /* Parse the file */
+ surf_parse_open(file);
+ xbt_assert1((!surf_parse()), "Parse error in %s", file);
+ surf_parse_close();
+\endverbatim
+
+So, to bypass the FleXML parser, you need to write your own version of the
+surf_parse function, which should do the following:
+ - Fill the A_<tag>_<attribute> variables with the wanted values
+ - Call the corresponding STag_<tag>_fun function to simulate tag start
+ - Call the corresponding ETag_<tag>_fun function to simulate tag end
+ - (do the same for the next set of values, and loop)
+
+Then, tell SimGrid that you want to use your own "parser" instead of the stock one:
+\verbatim
+ surf_parse = surf_parse_bypass_environment;
+ MSG_create_environment(NULL);
+ surf_parse = surf_parse_bypass_application;
+ MSG_launch_application(NULL);
+\endverbatim
+
+A set of macros is provided at the end of
+include/surf/surfxml_parse.h to ease the writing of the bypass
+functions. An example of this trick is distributed in the file
+examples/msg/masterslave/masterslave_bypass.c.
+
+\subsection faq_simgrid_configuration Changing SimGrid's behavior
+
+A number of options can be given at runtime to change the default
+SimGrid behavior. In particular, you can change the default cpu and
+network models...
+
+\subsubsection faq_simgrid_configuration_gtnets Using GTNetS
+
+It is possible to use a packet-level network simulator
+instead of the default flow-based simulation. You may want to use such
+an approach if you have doubts about the validity of the default model
+or if you want to perform some validation experiments. At the moment,
+we support the GTNetS simulator (it is still rather experimental
+though, so leave us a message if you play with it).
+
+
+<i>
+To enable the GTNetS model inside SimGrid, you need to patch the GTNetS
+simulator source code and build/install it from scratch.
+</i>
+
+ - <b>Check out GTNetS and enter the freshly downloaded directory</b>
+
+ \verbatim
+ svn checkout svn://scm.gforge.inria.fr/svn/simgrid/contrib/trunk/GTNetS/
+ cd GTNetS
+ \endverbatim
+
+
+ - <b>Use the following commands to unzip and patch the GTNetS package so that it works within SimGrid.</b>
+
+ \verbatim
+ unzip gtnets-current.zip
+ tar zxvf gtnets-current-patch.tgz
+ cd gtnets-current
+ cat ../00*.patch | patch -p1
+ \endverbatim
+
+ - <b>OPTIONALLY</b> you can apply a patch for the AMD64 (x86_64) processor family.
+
+ \verbatim
+ cat ../AMD64-FATAL-Removed-DUL_SIZE_DIFF-Added-fPIC-compillin.patch | patch -p1
+ \endverbatim
+
+ - <b>Compile GTNetS</b>
+
+ Due to portability issues, it is possible that GTNetS does not compile on your architecture. The patches provided in the SimGrid SVN repository are intended for Linux only. Unfortunately, we have neither the time, the money, nor the manpower to guarantee GTNetS portability. We advise you to use one of the GTNetS communication channels to get more help compiling it.
+
+
+ \verbatim
+ ln -sf Makefile.linux Makefile
+ make depend
+ make debug
+ \endverbatim
+
+
+ - <b>NOTE</b> A lot of warnings are expected, but the application should compile
+ just fine. If the makefile insists on compiling some QT libraries,
+ please try a make clean before asking for help.
+
+
+ - <b>To compile optimized version</b>
+
+ \verbatim
+ make opt
+ \endverbatim
+
+
+ - <b>Installing GTNetS</b>
+
+ It is important to put the full path of your libgtsim-xxxx.so file when creating the symbolic link. Replace <userhome> with some path you have write access to.
+
+ \verbatim
+ ln -sf /<absolute_path>/gtnets_current/libgtsim-debug.so /<userhome>/usr/lib/libgtnets.so
+ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/<userhome>/usr/lib
+ mkdir /<userhome>/usr/include/gtnets
+ cp -fr SRC/*.h /<userhome>/usr/include/gtnets
+ \endverbatim
+
+
+ - <b>Enable GTNetS support in SimGrid</b>
+
+ \verbatim
+ ./configure --with-gtnets=/<userhome>/usr
+ \endverbatim
+
+ - <b>Once you have successfully compiled and installed everything,
+ you can rebuild the MSG examples and check that they still work:</b>
+
+ \verbatim
+ cd simgrid/example/msg/
+ make
+ make check
+ \endverbatim
+
+
+ - <b>Or run one of the GTNetS examples, activating the model at runtime:</b>
+
+ \verbatim
+ gtnets/gtnets gtnets/onelink-p.xml gtnets/onelink-d.xml --cfg=network_model:GTNets
+ \endverbatim
+
+
+ A longer version of this <a href="http://gforge.inria.fr/docman/view.php/12/6283/GTNetS HowTo.html">HowTo</a> is available.
+
+
+ More about the GTNetS simulator at the <a href="http://www.ece.gatech.edu/research/labs/MANIACS/GTNetS/index.html">GTNetS website</a>.
+
+
+ - <b>DISCLAIMER</b>
+ The patches we provide worked successfully with the GTNetS found
+ <a href="http://www.ece.gatech.edu/research/labs/MANIACS/GTNetS/software/gtnets-current.zip">here</a>,
+ dated from June 12th, 2008. Since the development of GTNetS has been
+ discontinued, we cannot give a precise version number. We STRONGLY
+ recommend that you download and install the GTNetS version found in the
+ SimGrid repository, as explained above.
+
+
+
+
+\subsubsection faq_simgrid_configuration_alternate_network Using alternative flow models
+
+The default SimGrid network model uses a max-min-based approach, as
+explained in the research report
+<a href="ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/RR/RR2002/RR2002-40.ps.gz">A Network Model for Simulation of Grid Application</a>.
+Other models have been proposed and implemented since then (see for example
+<a href="http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf">Accuracy Study and Improvement of Network Simulation in the SimGrid Framework</a>)
+and can be activated at runtime. For example:
+\verbatim
+./mycode platform.xml deployment.xml --cfg=workstation/model:compound --cfg=network/model:LV08 --cfg=cpu/model:Cas01
+\endverbatim
+
+Possible models for the network are currently "Constant", "CM02",
+"LegrandVelho", "GTNets", "Reno", "Reno2" and "Vegas". Others will
+probably be added in the future, and many of the existing ones are
+experimental and likely to disappear without notice... To know the
+list of the currently implemented models, use the
+--help-models command line option.
+
+\verbatim
+./masterslave_forwarder ../small_platform.xml deployment_masterslave.xml --help-models
+Long description of the workstation models accepted by this simulator:
+ CLM03: Default workstation model, using LV08 and CM02 as network and CPU
+ compound: Workstation model allowing you to use other network and CPU models
+ ptask_L07: Workstation model with better parallel task modeling
+Long description of the CPU models accepted by this simulator:
+ Cas01_fullupdate: CPU classical model time=size/power
+ Cas01: Variation of Cas01_fullupdate with partial invalidation optimization of lmm system. Should produce the same values, only faster
+ CpuTI: Variation of Cas01 with also trace integration. Should produce the same values, only faster if you use availability traces
+Long description of the network models accepted by this simulator:
+ Constant: Simplistic network model where all communication take a constant time (one second)
+ CM02: Realistic network model with lmm_solve and no correction factors
+ LV08: Realistic network model with lmm_solve and these correction factors: latency*=10.4, bandwidth*=.92, S=8775
+ Reno: Model using lagrange_solve instead of lmm_solve (experts only)
+ Reno2: Model using lagrange_solve instead of lmm_solve (experts only)
+ Vegas: Model using lagrange_solve instead of lmm_solve (experts only)
+\endverbatim
+
+\subsection faq_tracing Tracing Simulations for Visualization
+
+Trace visualization is widely used to observe and understand the behavior
+of parallel applications and distributed algorithms. Usually, this is done in a
+two-step fashion: the user instruments the application, and the traces are
+analyzed after the end of the execution. The visualization itself can highlight
+unexpected behaviors and bottlenecks, and can sometimes be used to correct
+distributed algorithms. The SimGrid team is currently instrumenting the library
+in order to let users trace their simulations and analyze them. This part of the
+user manual explains how the tracing-related features can be enabled and used
+during the development of simulators using the SimGrid library.
+
+\subsubsection faq_tracing_howitworks How it works
+
+For now, the SimGrid library is instrumented so users can trace the <b>platform
+utilization</b> using the MSG interface. This means that the tracing will
+register how much power is used on each host and how much bandwidth is used on
+each link of the platform. The idea with this type of tracing is to get an
+overall view of resource utilization, and especially to identify
+bottlenecks, load imbalance among hosts, and so on.
+
+The idea of the instrumentation is to classify the MSG tasks by category,
+and trace
+the platform utilization (hosts and links) for each of the categories. For that,
+the tracing interface enables the declaration of categories and a function to
+mark a task with a previously declared category. <em>The tasks that are not
+classified according to a category are not traced</em>.
+
+\subsubsection faq_tracing_enabling Enabling using CMake
+
+With the sources of SimGrid, it is possible to enable tracing
+by passing the parameter <b>-Dtracing=on</b> when cmake is executed.
+The section \ref faq_tracing_functions describes all the functions available
+when this CMake option is activated. These functions have no effect
+if SimGrid is configured without this option (they are wiped out by the
+C preprocessor).
+
+\verbatim
+$ cmake -Dtracing=on .
+$ make
+\endverbatim
+
+\subsubsection faq_tracing_functions Tracing Functions
+
+\subsubsubsection Mandatory Functions
+
+\li <b>\c TRACE_start (const char *filename)</b>: This is the first function to
+be called. Its single parameter is the name of the file that will hold the
+trace at the end of the simulation. It returns 0 if
+everything was properly initialized, 1 otherwise. All trace functions called
+before TRACE_start do nothing.
+
+\li <b>\c TRACE_category (const char *category)</b>: This function should be used
+to define a user category. The category can be used to differentiate the tasks
+that are created during the simulation (for example, tasks from server1,
+server2, or request tasks, computation tasks, communication tasks).
+All resource utilization (host power and link bandwidth) will be
+classified according to the task category. Tasks that do not belong to a
+category are not traced.
+
+\li <b>\c TRACE_msg_set_task_category (m_task_t task, const char *category)</b>:
+This function should be called after the creation of a task, to define the
+category of that task. The first parameter \c task must contain a task that was
+created with the function \c MSG_task_create. The second parameter
+\c category must contain a category that was previously defined by the function
+\c TRACE_category.
+
+\li <b>\c TRACE_end ()</b>: This is the last function to be called. It closes
+the trace file and stops the tracing of the simulation. All tracing is
+completely disabled after calling this function. Although we recommend
+calling this function somewhere near the end of the program, it can be used
+anywhere in the code. It returns 0 if everything is ok, 1 otherwise.
+
+\subsubsubsection Optional Functions
+
+\li <b>\c TRACE_host_variable_declare (const char *variable)</b>:
+Declare a user variable that will be associated to hosts. A variable can
+be used to trace user variables such as the number of tasks in a server,
+the number of clients in an application, and so on.
+
+\li <b>\c TRACE_host_variable_[set|add|sub] (const char *variable, double
+value)</b>:
+Set the value of a given user variable. Keep in mind that
+the value of this variable is always associated with a host. The host
+used when these functions are called is the one returned by
+\c MSG_host_self().
+
+\subsubsection faq_tracing_example Example of Instrumentation
+
+A simplified example using the mandatory tracing functions:
+
+\verbatim
+int main (int argc, char **argv)
+{
+ TRACE_start ("traced_simulation.trace");
+ TRACE_category ("request");
+ TRACE_category ("computation");
+ TRACE_category ("finalize");
+
+ MSG_global_init (&argc, &argv);
+
+ //(... after deployment ...)
+
+ m_task_t req1 = MSG_task_create("1st_request_task", 10, 10, NULL);
+ m_task_t req2 = MSG_task_create("2nd_request_task", 10, 10, NULL);
+ m_task_t req3 = MSG_task_create("3rd_request_task", 10, 10, NULL);
+ m_task_t req4 = MSG_task_create("4th_request_task", 10, 10, NULL);
+ TRACE_msg_set_task_category (req1, "request");
+ TRACE_msg_set_task_category (req2, "request");
+ TRACE_msg_set_task_category (req3, "request");
+ TRACE_msg_set_task_category (req4, "request");
+
+ m_task_t comp = MSG_task_create ("comp_task", 100, 100, NULL);
+ TRACE_msg_set_task_category (comp, "computation");
+
+ m_task_t finalize = MSG_task_create ("finalize", 0, 0, NULL);
+ TRACE_msg_set_task_category (finalize, "finalize");
+
+ //(...)
+
+ MSG_clean();
+
+ TRACE_end();
+ return 0;
+}