X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/b4193f8dcb043ee338cc7ffb756037bce5c256cd..9f1d484852065d8e224f03acb1d11bcca28dd07a:/TODO diff --git a/TODO b/TODO index b2e95ff946..9176059a3f 100644 --- a/TODO +++ b/TODO @@ -1,7 +1,29 @@ + + + + + ************************************************ + *** This file is a TODO. It is thus kinda *** + *** outdated. You know the story, right? *** + ************************************************ + + + + ### ### Ongoing stuff ### +Document the fact that gras processes display the backtrace on sigusr and sigint +Document host module + +/* FIXME: better place? */ +int vasprintf (char **ptr, const char *fmt, va_list ap); +char *bprintf(const char*fmt, ...) _XBT_GNUC_PRINTF(1,2); + + +gras_socket_close should be blocking until all the data sent have been +received by the other side (implemented with an ACK mechanism). ### ### Planned @@ -11,7 +33,7 @@ * Infrastructure **************** -[autoconf] +[build chain] * Check the gcc version on powerpc. We disabled -floop-optimize on powerpc, but versions above 3.4.0 should be ok. * check whether we have better than jmp_buf to implement exceptions, and @@ -21,75 +43,44 @@ * XBT ***** -[doc] - * graphic showing: - (errors, logs ; dynars, dicts, hooks, pools; config, rrdb) - -[portability layer] - * Mallocators and/or memory pool so that we can cleanly kill an actor - [errors/exception] - * Better split casual errors from programing errors. - The first ones should be repported to the user, the second should kill + * Better split casual errors from programming errors. 
+ The first ones should be reported to the user, the second should kill the program (or, yet better, only the msg handler) * Allows the use of an error handler depending on the current module (ie, - the same philosophy than log4c using GSL's error functions) + the same philosophy as log4c using GSL's error functions) [logs] * Hijack message from a given category to another for a while (to mask initializations, and more) * Allow each actor to have its own setting - * a init/exit mecanism for logging appender - * Several appenders; fix the setting stuff to change the appender * more logging appenders (take those from Ralf in l2) -[dict] - * speed up the cursors, for example using the contexts when available - [modules] - * better formalisation of what modules are (amok deeply needs it) - configuration + init() + exit() + dependencies + * Add configuration and dependencies to our module definition * allow to load them at runtime check in erlang how they upgrade them without downtime [other modules] * we may need a round-robin database module, and a statistical one - * a hook module *may* help cleaning up some parts. Not sure yet. * Some of the datacontainer modules seem to overlap. Kill some of them? + - replace fifo with dynars + - replace set with SWAG * * GRAS ****** [doc] - * add the token ring as official example * implement the P2P protocols that macedon does. They constitute great examples, too [transport] - * Spawn threads handling the communication - - Data sending cannot be delegated if we want to be kept informed - (*easily*) of errors here. - - Actor execution flow shouldn't be interrupted - - It should be allowed to access (both in read and write access) - any data available (ie, referenced) from the actor without - requesting to check for a condition before. - (in other word, no mutex or assimilated) - - I know that enforcing those rules prevent the implementation of - really cleaver stuff. 
Keeping the stuff simple for the users is more
-     important to me than allowing them to do cleaver tricks. Black magic
-     should be done *within* gras to reach a good performance level.
-
-   - Data receiving can be delegated (and should)
-     The first step here is a "simple" mailbox mecanism, with a fifo of
-     messages protected by semaphore.
-     The rest is rather straightforward too.
-
 * use poll(2) instead of select(2) when available. (first need to check
   the advantage of doing so ;)
   Another idea we spoke about was to simulate this feature with a bunch of
-  threads blocked in a read(1) on each incomming socket. The latency is
+  threads blocked in a read(2) on each incoming socket. The latency is
   reduced by the cost of a syscall, but the more I think about it, the
   less I find the idea adapted to our context.
@@ -114,13 +105,6 @@
   Depends on the previous item; difficult to achieve with firewalls

 [datadesc]
- * Implement gras_datadesc_cpy to speedup things in the simulator
-   (and allow to have several "actors" within the same unix process).
-   For now, we mimick closely the RL even in SG. It was easier to do
-   since the datadesc layer is unchanged, but it is not needed and
-   hinders performance.
-   gras_datadesc_cpy needs to provide the size of the corresponding messages, so
-   that we can report it into the simulator.
 * Add a XML wire protocol alongside to the binary one (for SOAP/HTTP)
 * cbps:
   - Error handling
@@ -129,13 +113,13 @@
   - Port to ARM
   - Convert in the same buffer when size increase
   - Exchange (on net) structures in one shoot when possible.
-  - Port to really exotic platforms (Cray is not IEEE ;)
 * datadesc_set_cste: give the value by default when receiving.
-  It's not transfered anymore, which is good for functions pointer.
 * Parsing macro
   - Cleanup the code (bison?)
  - Factorize code in union/struct field adding
-  - Handle typedefs (needs love from DataDesc/)
+  - Handle typedefs (gras_datatype_copy can be useful, but only if
+    the main type is already defined)
   - Handle unions with annotate
   - Handle enum
   - Handle long long and long double
@@ -148,43 +132,14 @@
   - Check short ***
   - Check struct { struct { int a } b; }
- * gras_datadesc_import_nws?
-
 [Messaging]
- * A proper RPC mecanism
-   - gras_rpctype_declare_v (name,ver, payload_request, payload_answer)
-     (or gras_msgtype_declare_rpc_v).
-   - Attaching a cb works the same way.
-   - gras_msg_rpc(peer, &request, &answer)
-   - On the wire, a byte indicate the message type:
-     - 0: one-way message (what we have for now)
-     - 1: method call (answer expected; sessionID attached)
-     - 2: successful return (usual datatype attached, with sessionID)
-     - 3: error return (payload = exception)
-     - other message types are possible (forwarding request, group
-       communication)
+ * Message types other than one-way & RPC are possible:
+   - forwarding request, group communication
 * Message priority
 * Message forwarding
 * Group communication
 * Message declarations in a tree manner (such as log channels)?

-[GRASPE] (platform expender)
- * Tool to visualize/deploy and manage in RL
- * pull method of source diffusion in graspe-slave
-
-[Actors] (parallelism in GRAS)
- * An actor is a user process.
-   It has a highly sequential control flow from its birth until its death.
-   The timers won't stop the current execution to branch elsewhere, they
-   will be delayed until the actor is ready to listen. Likewise, no signal
-   delivery. The goal is to KISS for users.
- * You can fork a new actor, even on remote hosts.
- * They are implemented as threads in RL, but this is still a distributed
-   memory *model*. If you want to share data with another actor, send it
-   using the message interface to explicit who's responsible of this data.
- * data exchange between actors placed within the same UNIX process is - *implemented* by memcopy, but that's an implementation detail. - [Other, more general issues] * watchdog in RL (ie, while (1) { fork; exec the child, wait in father }) * Allow [homogeneous] dico to be sent @@ -195,10 +150,8 @@ ****** [bandwidth] - * finish this module (still missing the saturate part) * add a version guessing the appropriate datasizes automatically [other modules] - * provide a way to retrieve the host load as in NWS * log control, management, dynamic token ring * a way using SSH to ask a remote host to open a socket back on me
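The TODO above declares `char *bprintf(const char *fmt, ...)` next to `vasprintf()` but leaves the implementation open. As a hedged sketch (not the actual simgrid code), a portable `bprintf()` can be built on two C99 `vsnprintf()` passes instead of the GNU `vasprintf()` extension; the `_XBT_GNUC_PRINTF` attribute from the quoted prototype is omitted to keep the sketch self-contained:

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of the bprintf() helper declared in this TODO: printf() into a
 * freshly malloc()ed buffer.  First vsnprintf() pass measures the needed
 * size, second pass fills the buffer.  Returns NULL on formatting or
 * allocation failure. */
char *bprintf(const char *fmt, ...)
{
  va_list ap, ap2;
  int len;
  char *res = NULL;

  va_start(ap, fmt);
  va_copy(ap2, ap);                   /* ap is consumed by the first pass */
  len = vsnprintf(NULL, 0, fmt, ap);  /* pass 1: compute required length */
  va_end(ap);

  if (len >= 0 && (res = malloc((size_t)len + 1)) != NULL)
    vsnprintf(res, (size_t)len + 1, fmt, ap2); /* pass 2: fill the buffer */
  va_end(ap2);
  return res;
}
```

The caller owns the returned string and must free() it, e.g. `char *s = bprintf("pid %d on %s", 42, "localhost");`.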