X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/gpc2011.git/blobdiff_plain/57be57e49ce2eee18399dfa373e4aea6ad73ce61..3baad0b0f8d5d6a502d6056167bba0511f418654:/gpc2011.tex
diff --git a/gpc2011.tex b/gpc2011.tex
index 83a04b4..fc4e16a 100644
--- a/gpc2011.tex
+++ b/gpc2011.tex
@@ -47,7 +47,7 @@
\author{Nabil Abdennhader\inst{1} \and Mohamed Ben Belgacem\inst{1} \and
Raphaël Couturier\inst{2} \and
-  David Laiymani\inst{2} \and Sébastien Miquée\inst{2} \and Marko Niinimaki\inst{1} \and Marc Sauget\inst{2}}
+  David Laiymani\inst{2} \and Sébastien Miquée\inst{2} \and Marko Niinimaki\inst{1} \and Marc Sauget\inst{3}}
\institute{
  University of Applied Sciences Western Switzerland, hepia Geneva,
@@ -60,7 +60,7 @@ Laboratoire d'Informatique de l'universit\'{e}
  \email{\{raphael.couturier,david.laiymani,sebastien.miquee\}@univ-fcomte.fr}
\and
FEMTO-ST, ENISYS/IRMA, F-25210 Montb\'{e}liard, FRANCE\\
-\email{marc.sauget@femto-st.fr}
+\email{marc.sauget@univ-fcomte.fr}
}
@@ -70,12 +70,12 @@ Laboratoire d'Informatique de l'universit\'{e}
  This paper presents the design and the evaluation of the gridification of a
  radiotherapy dose computation application. Due to the inherent
  characteristics of the application and its execution,
-  we choose the architectural context of global (or volunteer)
+  we chose the architectural context of volunteer
  computing. For this, we used the XtremWeb-CH
-  environment. Experiments were conducted on a real global computing
+  environment. Experiments were conducted on a real volunteer computing
  testbed and show good speed-ups and a very acceptable platform
-  overhead letting XtremWeb-CH be a good candidate for deploying
-  parallel applications over a global computing environment.
+  overhead, making XtremWeb-CH a good candidate for deploying
+  parallel applications over a volunteer computing environment.
\end{abstract}
@@ -95,40 +95,42 @@ techniques, using analytic methods, models and databases, are rapid
but lack precision. Enhanced precision can be achieved by using
calculation codes based, for example, on Monte Carlo methods. The main
drawback of these methods is their computation time, which can be
-rapidly huge. In \cite{} the authors proposed a novel approach, called
+prohibitively long. In \cite{NIMB2008} the authors proposed a novel approach, called
Neurad, using neural networks. This approach is based on the
collaboration of computation codes and multi-layer neural networks
used as universal approximators. It provides a fast and accurate
evaluation of radiation doses in any given environment for given
irradiation parameters. As the learning step is often very time
-consuming, in \cite{} the authors proposed a parallel
+consuming, in \cite{AES2009} the authors proposed a parallel
algorithm that makes it possible to decompose the learning domain into
subdomains. The decomposition has the advantage of significantly
reducing the complexity of the target functions to approximate.

Now, as there exist several classes of distributed/parallel
-architectures (supercomputers, clusters, global computing...) we have
-to choose the best suited one for the parallel Neurad application.
-The Global or Volunteer Computing model seems to be an interesting
-approach. Here, the computing power is obtained by aggregating unused
-(or volunteer) public resources connected to the Internet. For our
-case, we can imagine for example, that a part of the architecture will
-be composed of some of the different computers of the hospital. This
-approach presents the advantage to be clearly cheaper than a more
-dedicated approach like the use of supercomputers or clusters.
+architectures (supercomputers, clusters, global computing\dots{}), we
+have to choose the one best suited to the parallel Neurad
+application. The volunteer (or global) computing model seems to be an
+interesting approach. Here, the computing power is obtained by
+aggregating unused (or volunteer) public resources connected to the
+Internet. In our case, we can imagine, for example, that part of the
+architecture will be composed of some of the hospital's computers.
+This approach has the advantage of being clearly cheaper than a more
+dedicated approach such as the use of supercomputers or clusters.
+Furthermore, as we will see in the remainder of this paper, the
+studied parallel algorithm fits this computation model well.

The aim of this paper is to propose and evaluate a gridification of
the Neurad application (more precisely, of its most time-consuming
-part, the learning step) using a Global Computing approach. For this,
-we focus on the XtremWeb-CH environment\cite{}. We choose this environment
-because it tackles the centralized aspect of other global computing
-environments such as XtremWeb\cite{} or Seti\cite{}. It tends to a
-peer-to-peer approach by distributing some components of the
-architecture. For instance, the computing nodes are allowed to
-directly communicate. Experiments were conducted on a real Global
-Computing testbed. The results are very encouraging. They exhibit an
-interesting speed-up and show that the overhead induced by the use of
-XtremWeb-CH is very acceptable.
+part, the learning step) using a volunteer computing approach. For
+this, we focus on the XtremWeb-CH environment\cite{xwch}. We chose
+this environment because it tackles the centralized aspect of other
+global computing environments such as XtremWeb\cite{xtremweb} or
+SETI@home\cite{seti}. It tends toward a peer-to-peer approach by
+distributing some components of the architecture. For instance,
+computing nodes are allowed to communicate directly. Experiments were
+conducted on a real volunteer computing testbed. The results are very
+encouraging. They exhibit an interesting speed-up and show that the
+overhead induced by the use of XtremWeb-CH is very acceptable.

The paper is organized as follows. In Section 2 we present the Neurad
application and particularly its most time-consuming part, i.e., the
@@ -146,29 +148,27 @@ end in Section 6 by some concluding remarks and perspectives.
\label{f_neurad}
\end{figure}

-The \emph{Neurad}~\cite{Neurad} project presented in this paper takes
-place in a multi-disciplinary project, involving medical physicists
-and computer scientists whose goal is to enhance the treatment
-planning of cancerous tumors by external radiotherapy. In our previous
-works~\cite{RADIO09,ICANN10,NIMB2008}, we have proposed an original
-approach to solve scientific problems whose accurate modeling and/or
-analytical description are difficult. That method is based on the
-collaboration of computational codes and neural networks used as
-universal interpolator. Thanks to that method, the \emph{Neurad}
-software provides a fast and accurate evaluation of radiation doses in
-any given environment (possibly inhomogeneous) for given irradiation
-parameters. We have shown in a previous work (\cite{AES2009}) the
-interest to use a distributed algorithm for the neural network
-learning. We use a classical RPROP (DEFINITION)algorithm with a HPU
-topology to do the training of our neural network.
-
-Figure~\ref{f_neurad} presents the {\it{Neurad}} scheme. Three parts
-are clearly independent: the initial data production, the learning
-process and the dose deposit evaluation. The first step, the data
-production, is outside of the {\it{Neurad}} project. They are many
-solutions to obtain data about the radiotherapy treatments like the
-measure or the simulation. The only essential criterion is that the
-result must be obtained in an homogeneous environment.
+The \emph{Neurad}~\cite{Neurad} project presented in this paper is part of a
+multi-disciplinary project involving medical physicists and computer scientists
+whose goal is to enhance the treatment planning of cancerous tumors by external
+radiotherapy. In our previous works~\cite{RADIO09,ICANN10,NIMB2008}, we have
+proposed an original approach to solve scientific problems whose accurate
+modeling and/or analytical description are difficult. That method is based on
+the collaboration of computational codes and neural networks used as universal
+interpolators. Thanks to that method, the \emph{Neurad} software provides a fast
+and accurate evaluation of radiation doses in any given environment (possibly
+inhomogeneous) for given irradiation parameters. In a previous
+work~\cite{AES2009}, we have shown the interest of using a distributed
+algorithm for the neural network learning. We use a classical
+RPROP\footnote{Resilient backpropagation.} algorithm with an
+HPU\footnote{High order processing units.} topology to train our neural
+network.
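+
+For illustration only (this sketch is not part of the actual
+\emph{Neurad} code, and all names and constants below are invented
+defaults), the following Python fragment shows the core of a
+simplified RPROP- update: each weight keeps its own step size, which
+grows while the sign of its partial derivative is stable and shrinks
+when it flips, and only the sign of the gradient drives the move.
+
+\begin{verbatim}
+import numpy as np
+
+def rprop_minus_step(w, grad, prev_grad, step,
+                     eta_plus=1.2, eta_minus=0.5,
+                     step_min=1e-6, step_max=50.0):
+    # Grow the per-weight step while the partial derivative
+    # keeps its sign; shrink it when the sign flips.
+    s = grad * prev_grad
+    step = np.where(s > 0, np.minimum(step * eta_plus, step_max), step)
+    step = np.where(s < 0, np.maximum(step * eta_minus, step_min), step)
+    # Only the sign of the gradient is used for the move itself.
+    return w - np.sign(grad) * step, step
+\end{verbatim}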
+
+Figure~\ref{f_neurad} presents the {\it{Neurad}} scheme. Three parts are clearly
+independent: the initial data production, the learning process and the dose
+deposit evaluation. The first step, the data production, is outside of the
+{\it{Neurad}} project. There are many ways to obtain data about the
+radiotherapy treatments, such as measurements or simulations. The only
+essential criterion is that the result must be obtained in a homogeneous
+environment.

% We have chosen to
% use only a Monte Carlo simulation because this kind of tool is the
@@ -187,24 +187,22 @@ result must be obtained in an homogeneous environment.
% \label{f_tray}
% \end{figure}

-The secondary stage of the {\it{Neurad}} project is the learning step
-and this is the most time consuming step. This step is performed
-off-line but it is important to reduce the time used for the learning
-process to keep a workable tool. Indeed, if the learning time is too
-huge (for the moment, this time could reach one week for a limited
-domain), this process should not be launched at any time, but only
-when a major modification occurs in the environment, like a change of
-context for instance. However, it is interesting to update the
-knowledge of the neural network, by using the learning process, when
-the domain evolves (evolution in material used for the prosthesis or
-evolution on the beam (size, shape or energy)). The learning time is
-related to the volume of data who could be very important in a real
-medical context. A work has been done to reduce this learning time
-with the parallelization of the learning process by using a
-partitioning method of the global dataset. The goal of this method is
-to train many neural networks on sub-domains of the global
-dataset. After this training, the use of these neural networks all
-together allows to obtain a response for the global domain of study.
+The second stage of the {\it{Neurad}} project is the learning step, which is
+the most time-consuming one. This step is performed off-line, but it is
+important to reduce the time used for the learning process to keep a workable
+tool. Indeed, if the learning time is too long (at present, it can reach one
+week for a limited domain), this process cannot be launched at any time, but
+only when a major modification occurs in the environment, such as a change of
+context. However, it is interesting to update the knowledge of the neural
+network, through the learning process, when the domain evolves (an evolution
+in the materials used for prostheses, or in the beam size, shape or energy).
+The learning time is related to the volume of data, which can be very large in
+a real medical context. Work has been done to reduce this learning time by
+parallelizing the learning process using a partitioning method of the global
+dataset. The goal of this method is to train many neural networks on
+sub-domains of the global dataset. After this training, using all these neural
+networks together makes it possible to obtain a response for the global domain
+of study.


\begin{figure}[h]
@@ -215,22 +213,24 @@
\label{fig:overlap}
\end{figure}
-
-However, performing the learning on sub-domains constituting a
-partition of the initial domain is not satisfying according to the
-quality of the results. This comes from the fact that the accuracy of
-the approximation performed by a neural network is not constant over
-the learned domain. Thus, it is necessary to use an overlapping of
-the sub-domains. The overall principle is depicted in
-Figure~\ref{fig:overlap}. In this way, each sub-network has an
-exploitation domain smaller than its training domain and the
-differences observed at the borders are no longer relevant.
-Nonetheless, in order to preserve the performance of the parallel
-algorithm, it is important to carefully set the overlapping ratio
-$\alpha$. It must be large enough to avoid the border's errors, and
-as small as possible to limit the size increase of the data subsets
-(Qu'en est-il pour nos test ?).
-
+% I re-read this but did not see the problem
+
+However, performing the learning on sub-domains constituting a partition of the
+initial domain is not satisfactory with regard to the quality of the results.
+This comes from the fact that the accuracy of the approximation performed by a
+neural network is not constant over the learned domain. Thus, it is necessary
+to use an overlapping of the sub-domains. The overall principle is depicted in
+Figure~\ref{fig:overlap}. In this way, each sub-network has an exploitation
+domain smaller than its training domain and the differences observed at the
+borders are no longer relevant. Nonetheless, in order to preserve the
+performance of the parallel algorithm, it is important to carefully set the
+overlapping ratio $\alpha$. It must be large enough to avoid border errors,
+and as small as possible to limit the increase in size of the data
+subsets~\cite{AES2009}.
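+
+As a toy illustration of this principle (a one-dimensional
+simplification, not the actual \emph{Neurad} partitioning code; the
+function below is hypothetical), each training slice is widened by a
+ratio $\alpha$ of its width on both sides, while the exploitation
+domain remains the original slice, so border errors fall outside the
+exploited region. Note that only the training slices grow with
+$\alpha$, which is exactly the data-size increase discussed above.
+
+\begin{verbatim}
+def overlapping_subdomains(lo, hi, n_parts, alpha):
+    width = (hi - lo) / n_parts
+    pad = alpha * width
+    domains = []
+    for i in range(n_parts):
+        core = (lo + i * width, lo + (i + 1) * width)
+        train = (max(lo, core[0] - pad), min(hi, core[1] + pad))
+        # train: learning domain; core: exploitation domain.
+        domains.append({"train": train, "exploit": core})
+    return domains
+\end{verbatim}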
+
+% (What about our tests?)
+% This parameter was already studied in a previous paper, so it was
+% chosen to be fixed for these tests.

\section{The XtremWeb-CH environment}
@@ -252,12 +252,20 @@ density. This part is out of the scope of this paper. The second step
of the application, and the most time-consuming one, is the learning
itself. This is the part that has been parallelized using the XWCH
environment. As explained in Section 2, the
-parallelization relies on a partitionning of the global
-dataset. Following this partitionning all learning tasks are executed
+parallelization relies on a partitioning of the global
+dataset. Following this partitioning, all learning tasks are executed
in parallel and independently, each with its own local data part and
with no communication, following the fork/join model. Clearly, this
computation fits well with the model of the chosen middleware.
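+
+A minimal sketch of this fork/join layout, using Python's standard
+library as a stand-in for the middleware (\texttt{learn\_part} is a
+hypothetical placeholder for one learning task, not XWCH code):
+
+\begin{verbatim}
+from concurrent.futures import ProcessPoolExecutor
+
+def learn_part(part_file):
+    # Placeholder: train one sub-network on its own local
+    # data part and return the trained weights.
+    return part_file, "trained-weights"
+
+def fork_join_learning(part_files):
+    # Fork: one independent task per data part, no communication.
+    # Join: gather every sub-network before the exploitation phase.
+    with ProcessPoolExecutor() as pool:
+        return list(pool.map(learn_part, part_files))
+\end{verbatim}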

+\begin{figure}[ht]
+  \centering
+  \includegraphics[width=8cm]{figures/neurad_gridif}
+  \caption{The proposed Neurad gridification}
+  \label{fig:neurad_grid}
+\end{figure}
+
+
The execution scheme is then the following (see Figure
\ref{fig:neurad_grid}):
\begin{enumerate}
@@ -267,21 +275,15 @@ The execution scheme is then the following (see Figure
\item When a worker (W) is ready to compute, it requests a task to
execute from the coordinator (Coord.);
\item The coordinator assigns the worker a task. The latter retrieves the
-application and its assigned data and so can start the computation.
+application and its assigned data and can then start the computation;
\item At the end of the learning process, the worker sends the result
to a warehouse.
\end{enumerate}
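+
+The worker side of this scheme can be summarized by the sketch below;
+the objects and method names are invented for illustration and do not
+correspond to the real XWCH API.
+
+\begin{verbatim}
+def worker_loop(coordinator, warehouse):
+    while True:
+        task = coordinator.request_task()   # ask for a task
+        if task is None:                    # no work left
+            break
+        code, data = task.retrieve()        # application + data
+        result = code.run(data)             # the learning process
+        warehouse.store(task.task_id, result)  # result to a warehouse
+\end{verbatim}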

The last step of the application is to retrieve these results (the
weighted neural networks) and exploit them through a dose distribution
-process. This latter step is out of the scope of this paper.
+process.

-\begin{figure}[ht]
-  \centering
-  \includegraphics[width=8cm]{figures/neurad_gridif}
-  \caption{The proposed Neurad gridification}
-  \label{fig:neurad_grid}
-\end{figure}

\section{Experimental results}
\label{sec:neurad_xp}

The aim of this section is to describe and analyze the experimental
results we have obtained with the parallel Neurad version previously
described. Our goal was to run this application with real input
-data and on a real global computing testbed.
+data and on a real volunteer computing testbed.

\subsubsection{Experimental conditions}
\label{sec:neurad_cond}

-The size of the input data is about 2.4Gb. In order to avoid that data
+The size of the input data is about 2.4~GB. In order to prevent noise from
appearing and disturbing the learning process, these data can be divided
into at most 25 parts. This generates input data parts of about
15~MB (in a compressed format). The output data, which are
-retrieved after the process, are about 30Kb for each
-part. Unfortunately, the data decomposition limitation does not allow
-us to use more than 25 computers (XWCH workers). Nevertheless, we used two
+retrieved after the process, are about 30~KB for each part. We used two
distinct deployments of XWCH:
\begin{enumerate}
-\item In the first one, called ``distributed XWCH'' in the following,
+\item In the first one, called ``distributed XWCH'',
the XWCH coordinator and the warehouses were located in Geneva,
Switzerland, while the workers were running in the same local cluster
in Belfort, France.
@@ -314,8 +314,11 @@ distinct deployments of XWCH:
the same local cluster.
\end{enumerate}

-For both deployments, during the day these machines were used by
-students of the Computer Science Department of the IUT of Belfort.
+For both deployments, the local cluster is a campus cluster whose
+machines were used during the day by students of the Computer Science
+Department of the IUT of Belfort. Unfortunately, the data
+decomposition limitation does not allow us to use more than 25
+computers (XWCH workers).

In order to evaluate the overhead induced by the use of the platform,
we have furthermore compared the execution of the Neurad application
@@ -331,12 +334,13 @@ $0.50\times10^{-1}$, $0.25\times10^{-1}$, and $1\times10^{-2}$.
\subsubsection{Results}
\label{sec:neurad_result}

+
Table \ref{tab:neurad_res} presents the execution times of the Neurad
application on 25 machines with XWCH (local and distributed
deployments) and without XWCH. These results correspond to measurements
of the same steps for both kinds of execution, i.e., sending the local
data and the executable, the learning process, and retrieving the
-results. Results represent the average time of $?? x ??$ executions.
+results. Results represent the average time over $5$ executions.


\begin{table}[h!]
@@ -395,6 +399,8 @@ coordinator and one or more warehouses near a cluster of workers can
enhance computations and platform performance.


+
+
\section{Conclusion and future works}

In this paper, we have presented a gridification of a real medical
@@ -403,16 +409,28 @@ tries to optimize the irradiated dose distribution within a patient.
Based on a multi-layer neural network, this application presents a
very time-consuming step, i.e., the learning step. Due to the
computing characteristics of this step, we chose to parallelize it
-using the XtremWeb-CH global computing environment. Obtained
+using the XtremWeb-CH volunteer computing environment. The obtained
experimental results show good speed-ups and underline that the overheads
induced by XWCH are very acceptable, making it a good candidate
-for deploying parallel applications over a global computing environment.
+for deploying parallel applications over a volunteer computing environment.

Our future work includes testing the application on a larger-scale
testbed. This implies the choice of an input data set
allowing a finer decomposition. Unfortunately, this choice of input
-data is not trivial and relies on a large number of parameters
-(demander ici des précisions à Marc).
+data is not trivial and relies on a large number of parameters.
+
+We are also planning to test XWCH with parallel applications where
+communication between workers occurs during the execution. In this
+context, the use of the asynchronous iteration model~\cite{bcl08} may be
+an interesting perspective.
+
+% (ask Marc for more details here)
+% If you want to mention the set of parameters that can be used to
+% characterize the irradiation conditions, you can mention:
+% - characteristics of the irradiation beam: beam size (from a few mm
+%   to more than 40 cm), energy, SSD (source-surface distance);
+% - characteristics of the matter: density


\bibliographystyle{plain}
\bibliography{biblio}