X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/simgrid.git/blobdiff_plain/9f6cbb176d4301cae36a945dfa57e6b6a9cbafef..339309e6fcad59f2596853d7dcd87120f86da2eb:/src/include/surf/maxmin.h?ds=sidebyside

diff --git a/src/include/surf/maxmin.h b/src/include/surf/maxmin.h
index 6a614a9f30..297683fa4e 100644
--- a/src/include/surf/maxmin.h
+++ b/src/include/surf/maxmin.h
@@ -13,42 +13,31 @@
 #include "surf/datatypes.h"
 #include 
-
 /** @addtogroup SURF_lmm
 * @details
 * A linear max-min solver to resolve systems of inequalities.
 *
- * Most SimGrid model rely on a "fluid/steady-state" modeling that
- * simulate the sharing of resources between actions at relatively
- * coarse-grain. Such sharing is generally done by solving a set of
- * linear inequations. Let's take an example and assume we have the
- * variables \f$x_1\f$, \f$x_2\f$, \f$x_3\f$, and \f$x_4\f$ . Let's
- * say that \f$x_1\f$ and \f$x_2\f$ correspond to activities running
- * and the same CPU \f$A\f$ whose capacity is \f$C_A\f$. In such a
+ * Most SimGrid models rely on a "fluid/steady-state" modeling that simulates the sharing of resources between actions
+ * at a relatively coarse grain. Such sharing is generally done by solving a set of linear inequalities. Let's take an
+ * example and assume we have the variables \f$x_1\f$, \f$x_2\f$, \f$x_3\f$, and \f$x_4\f$. Let's say that \f$x_1\f$
+ * and \f$x_2\f$ correspond to activities running on the same CPU \f$A\f$ whose capacity is \f$C_A\f$. In such a
 * case, we need to enforce:
 *
 * \f[ x_1 + x_2 \leq C_A \f]
 *
- * Likewise, if \f$x_3\f$ (resp. \f$x_4\f$) corresponds to a network
- * flow \f$F_3\f$ (resp. \f$F_4\f$) that goes through a set of links
- * \f$L_1\f$ and \f$L_2\f$ (resp. \f$L_2\f$ and \f$L_3\f$), then we
- * need to enforce:
+ * Likewise, if \f$x_3\f$ (resp. \f$x_4\f$) corresponds to a network flow \f$F_3\f$ (resp. \f$F_4\f$) that goes through
+ * a set of links \f$L_1\f$ and \f$L_2\f$ (resp. \f$L_2\f$ and \f$L_3\f$), then we need to enforce:
 *
 * \f[ x_3 \leq C_{L_1} \f]
 * \f[ x_3 + x_4 \leq C_{L_2} \f]
 * \f[ x_4 \leq C_{L_3} \f]
- *
- * One could set every variable to 0 to make sure the constraints are
- * satisfied but this would obviously not be very realistic. A
- * possible objective is to try to maximize the minimum of the
- * \f$x_i\f$ . This ensures that all the \f$x_i\f$ are positive and "as
- * large as possible".
 *
- * This is called *max-min fairness* and is the most commonly used
- * objective in SimGrid. Another possibility is to maximize
- * \f$\sum_if(x_i)\f$, where \f$f\f$ is a strictly increasing concave
- * function.
+ * One could set every variable to 0 to make sure the constraints are satisfied, but this would obviously not be very
+ * realistic. A possible objective is to try to maximize the minimum of the \f$x_i\f$. This ensures that all the
+ * \f$x_i\f$ are positive and "as large as possible".
 *
+ * This is called *max-min fairness* and is the most commonly used objective in SimGrid. Another possibility is to
+ * maximize \f$\sum_i f(x_i)\f$, where \f$f\f$ is a strictly increasing concave function.
 *
 * Constraint:
 *  - bound (set)
@@ -88,32 +77,38 @@
 * max( var1.weight * var1.value * elem5.value , var3.weight * var3.value * elem6.value ) <= cons3.bound
 *
 * This is useful for the sharing of resources for various models.
- * For instance, for the network model, each link is associated
- * to a constraint and each communication to a variable.
- *
+ * For instance, for the network model, each link is associated to a constraint and each communication to a variable.
 *
 * Implementation details
 *
- * For implementation reasons, we are interested in distinguishing variables that actually participate to the computation of constraints, and those who are part of the equations but are stuck to zero.
- * We call enabled variables, those which var.weight is strictly positive. Zero-weight variables are called disabled variables.
+ * For implementation reasons, we are interested in distinguishing the variables that actually participate in the
+ * computation of constraints from those that are part of the equations but are stuck at zero.
+ * We call enabled variables those whose var.weight is strictly positive. Zero-weight variables are called disabled
+ * variables.
 * Unfortunately, this concept of enabled/disabled variables intersects with that of active/inactive variables.
- * Semantically, the intent is similar, but the conditions under which a variable is active is slightly more strict than the conditions for it to be enabled.
+ * Semantically, the intent is similar, but the conditions under which a variable is active are slightly stricter
+ * than the conditions for it to be enabled.
 * A variable is active only if its var.value is non-zero (and, by construction, its var.weight is non-zero).
- * In general, variables remain disabled after their creation, which often models an initialization phase (e.g. first packet propagating in the network). Then, it is enabled by the corresponding model. Afterwards, the max-min solver (lmm_solve()) activates it when appropriate. It is possible that the variable is again disabled, e.g. to model the pausing of an action.
- *
+ * In general, variables remain disabled after their creation, which often models an initialization phase (e.g. the
+ * first packet propagating in the network). Then, the variable is enabled by the corresponding model. Afterwards, the
+ * max-min solver (lmm_solve()) activates it when appropriate. It is possible that the variable is disabled again
+ * later, e.g. to model the pausing of an action.
 *
 * Concurrency limit and maximum
 *
 * We call concurrency the number of variables that can be enabled at any time for each constraint.
- * From a model perspective, this "concurrency" often represents the number of actions that actually compete for one constraint.
+ * From a model perspective, this "concurrency" often represents the number of actions that actually compete for one
+ * constraint.
 * The LMM solver is able to limit the concurrency for each constraint, and to monitor its maximum value.
 *
 * One may want to limit the concurrency of constraints for essentially three reasons:
- * - Keep LMM system in a size that can be solved (it does not react very well with tens of thousands of variables per constraint)
+ * - Keep the LMM system at a size that can be solved (it does not react very well with tens of thousands of variables
+ *   per constraint)
 * - Stay within parameters where the fluid model is accurate enough.
 * - Model serialization effects
 *
- * The concurrency limit can also be set to a negative value to disable concurrency limit. This can improve performance slightly.
+ * The concurrency limit can also be set to a negative value to disable the concurrency limit altogether. This can
+ * improve performance slightly.
 *
 * Overall, each constraint contains three fields related to concurrency:
 *  - concurrency_limit which is the limit enforced by the solver
@@ -123,7 +118,8 @@
 * Variables also have one field related to concurrency: concurrency_share.
 * In effect, in some cases, one variable is involved multiple times (i.e. two elements) in a constraint.
 * For example, cross-traffic is modeled using 2 elements per constraint.
- * concurrency_share formally corresponds to the maximum number of elements that associate the variable and any given constraint.
+ * concurrency_share formally corresponds to the maximum number of elements that associate the variable with any given
+ * constraint.
 */

 XBT_PUBLIC_DATA(double) sg_maxmin_precision;
@@ -133,7 +129,8 @@ static XBT_INLINE void double_update(double *variable, double value, double prec
 {
   //printf("Updating %g -= %g +- %g\n",*variable,value,precision);
   //xbt_assert(value==0 || value>precision);
-  //Check that precision is higher than the machine-dependent size of the mantissa. If not, brutal rounding may happen, and the precision mechanism is not active...
+  //Check that precision is higher than the machine-dependent size of the mantissa. If not, brutal rounding may happen,
+  //and the precision mechanism is not active...
   //xbt_assert(*variable< (2<
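
To make the sharing model documented above concrete, here is a minimal usage sketch showing how the CPU and link inequalities from the example could be fed to the solver through the entry points declared by this header (lmm_system_new(), lmm_constraint_new(), lmm_variable_new(), lmm_expand(), lmm_solve(), lmm_variable_getvalue(), lmm_system_free()). The capacity values, the NULL resource ids, the weight/bound arguments and the use of a negative bound to mean "no per-variable limit" are illustrative assumptions, and exact signatures have changed across SimGrid versions; treat this as a sketch, not as part of the patched header.

/* Hypothetical sketch (not part of maxmin.h): building and solving the example
 * system  x1 + x2 <= C_A,  x3 <= C_L1,  x3 + x4 <= C_L2,  x4 <= C_L3. */
#include "surf/maxmin.h"
#include <stddef.h>

static void share_example(void)
{
  lmm_system_t sys = lmm_system_new(1); /* 1: selective update (assumed flag value) */

  /* One constraint per resource, bounded by its capacity (illustrative values). */
  lmm_constraint_t cpu_a = lmm_constraint_new(sys, NULL, 8.0); /* C_A  */
  lmm_constraint_t link1 = lmm_constraint_new(sys, NULL, 1.0); /* C_L1 */
  lmm_constraint_t link2 = lmm_constraint_new(sys, NULL, 1.0); /* C_L2 */
  lmm_constraint_t link3 = lmm_constraint_new(sys, NULL, 1.0); /* C_L3 */

  /* One variable per action. Weight 1.0 makes the variable enabled; a negative
   * bound is assumed to mean "no per-variable bound"; the last argument is the
   * number of constraints the variable will be expanded into. */
  lmm_variable_t x1 = lmm_variable_new(sys, NULL, 1.0, -1.0, 1);
  lmm_variable_t x2 = lmm_variable_new(sys, NULL, 1.0, -1.0, 1);
  lmm_variable_t x3 = lmm_variable_new(sys, NULL, 1.0, -1.0, 2);
  lmm_variable_t x4 = lmm_variable_new(sys, NULL, 1.0, -1.0, 2);

  /* x1 + x2 <= C_A */
  lmm_expand(sys, cpu_a, x1, 1.0);
  lmm_expand(sys, cpu_a, x2, 1.0);
  /* x3 <= C_L1, x3 + x4 <= C_L2, x4 <= C_L3 */
  lmm_expand(sys, link1, x3, 1.0);
  lmm_expand(sys, link2, x3, 1.0);
  lmm_expand(sys, link2, x4, 1.0);
  lmm_expand(sys, link3, x4, 1.0);

  lmm_solve(sys); /* compute the max-min fair shares */

  double rate_f3 = lmm_variable_getvalue(x3); /* share granted to flow F_3 */
  (void)rate_f3;

  lmm_system_free(sys);
}

With the illustrative capacities above (C_A = 8, C_L1 = C_L2 = C_L3 = 1), max-min fairness would grant x3 = x4 = 0.5 (both flows saturate L_2) and x1 = x2 = 4 on the CPU.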