-The second step of the application, and the most time consuming, is
-the learning itself. This is the one which has been parallelized,
-using the XWCH environment. As exposed in the section 2, the
-parallelization relies on a partitioning of the global
-dataset. Following this partitioning all learning tasks are executed
-in parallel independently with their own local data part, with no
-communication, following the fork/join model. Clearly, this
-computation fits well with the model of the chosen middleware.
+The second step of the application, and the most time-consuming one, is the
+learning itself. This is the step that has been parallelized, using the XWCH
+environment. As described in Section 2, the parallelization relies on a
+partitioning of the global dataset. Following this partitioning, all learning
+tasks are executed independently in parallel, each on its own local data part
+and with no communication, following the fork/join model. This computation
+therefore fits well with the model of the chosen middleware.
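The fork/join pattern over a partitioned dataset can be sketched as follows. This is an illustrative sketch only: the XWCH API is not shown, the `learn` function is a hypothetical stand-in for the real learning task, and a local thread pool replaces the distributed workers. What it demonstrates is the structure described above: the dataset is split, each task works on its own part with no communication, and results are collected at the join point.

```python
from concurrent.futures import ThreadPoolExecutor

def learn(partition):
    # Hypothetical local learning step: each task uses only its own
    # data part and exchanges nothing with the other tasks.
    return sum(partition) / len(partition)  # stand-in for a real learner

def parallel_learning(dataset, n_parts):
    # Fork: partition the global dataset into independent local parts.
    parts = [dataset[i::n_parts] for i in range(n_parts)]
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        # All learning tasks run in parallel, one per partition.
        local_models = list(pool.map(learn, parts))
    # Join: results are gathered only once every task has finished.
    return local_models

models = parallel_learning(list(range(100)), 4)
```

In the actual application, each `learn` invocation would correspond to an XWCH job submitted with its data part, and the join would collect the partial models produced by the workers.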