+\subsection faq_SG_future Will SG come back in the maintained branch one day?
+
+Sure. In fact, we have already thought about a new and cleaner API:
+\verbatim
+void* SG_link_get_data(SG_link_t link);
+void SG_link_set_data(SG_link_t link, void *data);
+const char* SG_link_get_name(SG_link_t link);
+double SG_link_get_capacity(SG_link_t link);
+double SG_link_get_current_bandwidth(SG_link_t link);
+double SG_link_get_current_latency(SG_link_t link);
+
+SG_workstation_t SG_workstation_get_by_name(const char *name);
+SG_workstation_t* SG_workstation_get_list(void);
+int SG_workstation_get_number(void);
+void SG_workstation_set_data(SG_workstation_t workstation, void *data);
+void * SG_workstation_get_data(SG_workstation_t workstation);
+const char* SG_workstation_get_name(SG_workstation_t workstation);
+SG_link_t* SG_workstation_route_get_list(SG_workstation_t src, SG_workstation_t dst);
+int SG_workstation_route_get_size(SG_workstation_t src, SG_workstation_t dst);
+double SG_workstation_get_power(SG_workstation_t workstation);
+double SG_workstation_get_available_power(SG_workstation_t workstation);
+
+SG_task_t SG_task_create(const char *name, void *data, double amount);
+int SG_task_schedule(SG_task_t task, int workstation_nb,
+ SG_workstation_t **workstation_list, double *computation_amount,
+ double *communication_amount, double rate);
+
+void* SG_task_get_data(SG_task_t task);
+void SG_task_set_data(SG_task_t task, void *data);
+const char* SG_task_get_name(SG_task_t task);
+double SG_task_get_amount(SG_task_t task);
+double SG_task_get_remaining_amount(SG_task_t task);
+void SG_task_dependency_add(const char *name, void *data, SG_task_t src, SG_task_t dst);
+void SG_task_dependency_remove(SG_task_t src, SG_task_t dst);
+e_SG_task_state_t SG_task_state_get(SG_task_t task); /* e_SG_task_state_t can be either SG_SCHEDULED, SG_RUNNING, SG_DONE, or SG_FAILED */
+void SG_task_watch(SG_task_t task, e_SG_task_state_t state); /* SG_simulate will stop as soon as the state of this task is the one given in argument.
+ Watch-point is then automatically removed */
+void SG_task_unwatch(SG_task_t task, e_SG_task_state_t state);
+
+void SG_task_unschedule(SG_task_t task); /* change state and rerun.. */
+
+SG_task_t *SG_simulate(double how_long); /* returns a NULL-terminated array of SG_task_t whose state has changed */
+\endverbatim
+
+We're just looking for somebody to implement it... :)
+
+\section faq_dynamic Dynamic resources and platform building
+
+\subsection faq_platform Building a realistic platform
+
+We could speak for more than an hour on this subject and still not have
+the right answer, just some ideas. You can read the following
+<a href="http://graal.ens-lyon.fr/~alegrand/articles/Simgrid-Introduction.pdf">slides</a>;
+they may give you some hints. You can also have a look at the
+<tt>tools/platform_generation/</tt> directory, which contains a Perl script
+we use to annotate a Tiers-generated platform.
+
+\subsection faq_SURF_dynamic How can I have variable resource availability?
+
+A nice feature of SimGrid is that it lets you seamlessly use
+resources whose availability changes over time. When you build a
+platform, you generally declare CPUs like this:
+
+\verbatim
+ <cpu name="Cpu A" power="100.00"/>
+\endverbatim
+
+If you want the availability of "Cpu A" to change over time, the only
+thing you have to do is extend this definition like this:
+
+\verbatim
+ <cpu name="Cpu A" power="100.00" availability_file="trace_A.txt" state_file="trace_A_failure.txt"/>
+\endverbatim
+
+For CPUs, availability files are expressed as a fraction of the available
+power. Let's have a look at what "trace_A.txt" may look like:
+
+\verbatim
+PERIODICITY 1.0
+0.0 1.0
+11.0 0.5
+20.0 0.9
+\endverbatim
+
+At time 0, our CPU will deliver 100 Mflop/s. At time 11.0, it will
+deliver only 50 Mflop/s until time 20.0, when it will start
+delivering 90 Mflop/s. Last, at time 21.0 (20.0 plus the periodicity
+1.0), we'll be back to the beginning and it will deliver 100 Mflop/s again.
+
+Now let's look at the state file:
+\verbatim
+PERIODICITY 10.0
+1.0 -1.0
+2.0 1.0
+\endverbatim
+
+A negative value means "off" while a positive one means "on". At time
+0.0, the CPU is on. At time 1.0, it is turned off, and at time 2.0 it
+is turned on again. The trace then restarts at time 12.0 (2.0 plus the
+periodicity 10.0), so the CPU is turned off again at time 13.0, back on
+at time 14.0, and so on.
+
+Now, let's see how the same kind of thing can be done for network
+links. A usual declaration looks like:
+
+\verbatim
+ <network_link name="LinkA" bandwidth="10.0" latency="0.2"/>
+\endverbatim
+
+You have the following options at your disposal: bandwidth_file,
+latency_file and state_file. The only difference with CPUs is that
+bandwidth_file and latency_file do not express a fraction of the available
+power but are given directly in Mb/s and seconds, respectively.
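+
+Putting it together, a link whose characteristics vary over time might be
+declared as follows (the trace file names are made up for the example):
+
+\verbatim
+  <network_link name="LinkA" bandwidth="10.0" latency="0.2"
+                bandwidth_file="linkA_bw.txt" latency_file="linkA_lat.txt"
+                state_file="linkA_state.txt"/>
+\endverbatim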
+
+\subsection faq_flexml_bypassing How can I have some C functions do what the platform file does?
+
+So you want to bypass the XML file parser, uh? Maybe you are doing some
+parameter sweep experiments on your simulations or so? This is possible,
+but it's not really easy. Here is how it goes.
+
+To do so, you first have to remember that the XML parsing in SimGrid is
+done using a tool called FleXML. Given a DTD, this tool generates a
+flex-based parser. If you want to bypass the parser, you need to provide
+some code mimicking what it does and replacing it in its interactions with
+the SURF code. So, let's have a look at these interactions.
+
+FleXML parsers are close to classical SAX parsers. This means that a
+well-formed SimGrid platform XML file results in a sequence of
+"events" such as:
+
+ - start "platform_description"
+ - start "cpu" with attributes name="host1" power="1.0"
+ - end "cpu"
+ - start "cpu" with attributes name="host2" power="2.0"
+ - end "cpu"
+ - start "network_link" with ...
+ - end "network_link"
+ - start "route" with ...
+ - end "route"
+ - start "route" with ...
+ - end "route"
+ - end "platform_description"
+
+The communication from the parser to the SURF code uses two means:
+attributes get copied into some global variables, and a SURF-provided
+function gets called by the parser for each event. For example, the event
+ - start "cpu" with attributes name="host1" power="1.0"
+
+makes the parser do the equivalent of:
+\verbatim
+  strcpy(A_cpu_name, "host1");
+  A_cpu_power = 1.0;
+  (*STag_cpu_fun)();
+\endverbatim
+
+In SURF, we attach callbacks to the different events by initializing these
+function pointers to the right SURF functions. Here is an example from
+workstation_KCCFLN05.c (surf_parse_open() ends up calling surf_parse()):
+\verbatim
+ // Building the routes
+ surf_parse_reset_parser();
+ STag_route_fun=parse_route_set_endpoints;
+ ETag_route_element_fun=parse_route_elem;
+ ETag_route_fun=parse_route_set_route;
+ surf_parse_open(file);
+ xbt_assert1((!surf_parse()),"Parse error in %s",file);
+ surf_parse_close();
+\endverbatim
+
+So, to bypass the FleXML parser, you need to write your own version of the
+surf_parse function, which should do the following:
+ - Fill the A_<tag>_<attribute> variables with the wanted values
+ - Call the corresponding STag_<tag>_fun function to simulate the tag start
+ - Call the corresponding ETag_<tag>_fun function to simulate the tag end
+ - (do the same for the next set of values, and loop)
+
+Then, tell SimGrid that you want to use your own "parser" instead of the stock one:
+\verbatim
+ surf_parse = surf_parse_bypass;
+ MSG_create_environment(NULL);
+\endverbatim
+
+An example of this trick is distributed in the file
+examples/msg/msg_test_surfxml_bypassed.c.
+
+\section faq_troubleshooting Troubleshooting
+
+\subsection faq_context_1000 I want thousands of simulated processes
+
+SimGrid can use either pthreads library or the UNIX98 contextes. On most
+systems, the number of pthreads is limited and then your simulation may be
+limited for a stupid reason. This is especially true with the current linux
+pthreads, and I cannot get more than 2000 simulated processes with pthreads
+on my box. The UNIX98 contexts allow me to raise the limit to 25,000
+simulated processes on my laptop.
+
+The <tt>--with-context</tt> option of the <tt>./configure</tt> script allows
+you to choose between UNIX98 contexts (<tt>--with-context=ucontext</tt>)
+and the pthread version (<tt>--with-context=pthread</tt>). The default
+value is ucontext when the script detects a working UNIX98 context
+implementation. On Windows boxes, the provided value is discarded and an
+adapted version is picked instead.
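+
+For instance, to force the pthread implementation explicitly:
+
+\verbatim
+  ./configure --with-context=pthread
+\endverbatim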
+
+We experienced some issues with contexts on some rare systems (Solaris 8
+and lower come to mind). The main problem is that the configure script
+detects the contexts as functional when they are not. If you happen
+to use such a system, switch manually to the pthread version, and provide us
+with a good patch for the configure script so that it is done automatically ;)
+
+\subsection faq_context_10000 I want hundreds of thousands of simulated processes
+
+As explained above, SimGrid can use UNIX98 contexts to represent and handle
+the simulated processes. Thanks to this, the main limitation on the number
+of simulated processes becomes the available memory.
+
+Here are some tricks I had to use in order to run a token ring between
+25,000 processes on my laptop (1 GB of memory, 1.5 GB of swap):
+
+ - First of all, make sure your code runs for a few hundred processes
+   before trying to push the limit. Make sure it's valgrind-clean, i.e. that
+   valgrind reports neither memory errors nor memory leaks. Indeed, a large
+   number of simulated processes results in a *fat* simulation that hinders debugging.
+
+ - It was really tedious to write 25,000 entries in the deployment file, so I wrote
+   a little script, <tt>examples/gras/tokenS/make_deployment.pl</tt>, which you may
+   want to adapt to your case.
+
+ - The deployment file became quite big, so I had to do what is described in
+   the FAQ entry \ref faq_flexml_limit.
+
+ - Each UNIX98 context has its own stack. As debugging stack overflows is
+   quite hairy, the default stack size is a bit overestimated so that users
+   don't get into trouble. To increase the number of processes, you want to
+   tune this size. This is the <tt>STACK_SIZE</tt> define in
+   <tt>src/xbt/context_private.h</tt>, which is 128 kB by default.
+   Reduce it as much as you can, but be warned that if this value is too
+   low, you'll get a segfault. The token ring example, which is quite simple,
+   runs with 40 kB stacks.
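+
+For example, to try 40 kB stacks, the define in
+<tt>src/xbt/context_private.h</tt> would become something like the following
+(check the actual line in your version before editing):
+
+\verbatim
+#define STACK_SIZE 40960
+\endverbatim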
+
+\subsection faq_flexml_limit I get the message "surf_parse_lex: Assertion `next<limit' failed."
+
+This is because your platform file is too big for the parser.
+
+Actually, the message comes directly from FleXML, the technology on top of
+which the parser is built. FleXML has the bad idea of fetching the whole
+document into memory before parsing it. Moreover, the memory buffer size
+must be determined at compilation time.
+
+We use a value which seems big enough for our needs without bloating the
+simulators' footprints. But of course your mileage may vary. In this case,
+just edit src/surf/surfxml.l and modify the definition of
+FLEXML_BUFFERSTACKSIZE. E.g.:
+
+\verbatim
+#define FLEXML_BUFFERSTACKSIZE 1000000000
+\endverbatim
+
+Then recompile and everything should be fine, provided that your version of
+Flex is recent enough (>= 2.5.31). If it is not, the compilation process
+should warn you.
+
+A while ago, we worked on FleXML to reduce its memory consumption a bit, but
+these issues remain. There are two things we should do:
+
+ - use a dynamic buffer instead of a static one so that the only limit
+   becomes your memory, not a stupid constant fixed at compilation time
+   (maybe not so difficult);
+ - change the parser so that it does not need to load the whole file into
+   memory before parsing it
+   (this seems quite difficult, but I'm a complete newbie wrt flex stuff).
+
+These are changes to FleXML itself, not SimGrid. But since we kinda hijacked
+the development of FleXML, I can assure you that any patches would be really
+welcome and quickly integrated.
+
+\subsection faq_deadlock There is a deadlock!!!
+
+Unfortunately, we cannot debug every code written with SimGrid. We
+furthermore believe that the framework provides enough information
+to debug such problems yourself. If the textual output
+is not enough, make sure to check the \ref faq_visualization FAQ entry to see
+how to get a graphical one.
+
+Now, if you come up with a really simple example that deadlocks and
+you're absolutely convinced that it should not, you can ask on the
+list. Just be aware that you'll be severely punished if the mistake is
+on your side... We have plenty of FAQ entries to write and new
+features to implement for the impenitent! ;)
+
+\author Arnaud Legrand (arnaud.legrand::imag.fr)
+\author Martin Quinson (martin.quinson::loria.fr)
+
+