+The size of the input data is about 2.4~GB. To keep noise from
+disturbing the learning process, these data can be divided into at
+most 25 parts, which yields input parts of about 15~MB each (in a
+compressed format). The output data retrieved after the process
+amount to about 30~KB per part. We used two
+distinct deployments of XWCH:
+\begin{enumerate}
+
+\item In the first one, called ``distributed XWCH'', the XWCH
+  coordinator and the warehouses were located in Geneva, Switzerland,
+  while the workers ran in a local cluster in Belfort, France.
+
+\item The second deployment, called ``local XWCH'' is a local
+ deployment where coordinator, warehouses and workers were, in
+ the same local cluster, at the same time.
+
+\end{enumerate}
+For both deployments, the local cluster is a campus cluster whose
+machines were used during the day by students of the Computer Science
+Department of the IUT of Belfort. Unfortunately, the data
+decomposition limitation did not allow us to use more than 25
+computers (XWCH workers).
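The decomposition step could be sketched as a short shell script. The input file name, the dummy-input fallback, and the use of GNU \texttt{split} are illustrative assumptions, not part of the original setup:

```shell
#!/bin/sh
# Sketch: split the Neurad input into 25 parts, one per XWCH worker.
# The file name "neurad_input.dat" is hypothetical.
INPUT=neurad_input.dat
PARTS=25

# Create a small dummy input if none exists, so the sketch runs standalone.
[ -f "$INPUT" ] || head -c 1000 /dev/zero > "$INPUT"

SIZE=$(wc -c < "$INPUT")
CHUNK=$(( (SIZE + PARTS - 1) / PARTS ))   # ceil(size / parts)

# GNU split with numeric suffixes: part_00 .. part_24
split -b "$CHUNK" -d -a 2 "$INPUT" part_

ls part_* | wc -l   # one part per worker
```

With the real 2.4~GB input this produces 25 parts of roughly 100~MB raw, i.e.\ about 15~MB once compressed, as described above.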
+
+To evaluate the overhead induced by the platform, we furthermore
+compared the execution of the Neurad application with and without the
+XWCH platform. In the latter case, we stress that the testbed
+consisted only of workers deployed, together with their respective
+data, by means of shell scripts. No specific middleware was used, and
+the workers were in the same local cluster.
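As a rough illustration of that script-only deployment, the following sketch generates the per-worker copy-and-launch commands. The host names (\texttt{node01} \ldots\ \texttt{node25}), the target path, and the \texttt{neurad} binary name are hypothetical:

```shell
#!/bin/sh
# Sketch of the middleware-free deployment: one data part per worker.
# Host names, the /tmp target path and the "neurad" binary are assumptions.
: > deploy.cmds            # collect the generated commands here
i=0
for host in $(seq -f "node%02g" 1 25); do
    part=$(printf "part_%02d" "$i")
    # Copy the worker's data part, then start the computation remotely.
    echo "scp $part $host:/tmp/ && ssh $host neurad /tmp/$part" >> deploy.cmds
    i=$((i + 1))
done
wc -l < deploy.cmds        # one command line per worker
```

In the actual experiments the generated commands would be executed rather than recorded; writing them to a file here only makes the sketch inspectable without a cluster at hand.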
+
+Finally, five computation precisions were used: $10^{-1}$,
+$0.75\times 10^{-1}$, $0.50\times 10^{-1}$, $0.25\times 10^{-1}$, and
+$10^{-2}$.