-The use of distributed architectures for solving large scientific problems seems
-to become mandatory in a lot of cases. For example, in the domain of
-radiotherapy dose computation the problem is crucial. The main goal of external
-beam radiotherapy is the treatment of tumours while minimizing exposure to
-healthy tissue. Dosimetric planning has to be carried out in order to optimize
-the dose distribution within the patient is necessary. Thus, for determining the
-most accurate dose distribution during treatment planning, a compromise must be
-found between the precision and the speed of calculation. Current techniques,
-using analytic methods, models and databases, are rapid but lack
-precision. Enhanced precision can be achieved by using calculation codes based,
-for example, on Monte Carlo methods. In [] the authors proposed a novel approach
-based on the use of neural networks. The approach is based on the collaboration
-of computation codes and multi-layer neural networks used as universal
-approximators. It provides a fast and accurate evaluation of radiation doses in
-any given environment for given irradiation parameters. As the learning step is
-often very time consumming, in \cite{bcvsv08:ip} the authors proposed a parallel
-algorithm that enable to decompose the learning domain into subdomains. The
-decomposition has the advantage to significantly reduce the complexity of the
-target functions to approximate.
-
-Now, as there exist several classes of distributed/parallel architectures
-(supercomputers, clusters, global computing...) we have to choose the best
-suited one for the parallel Neurad application. The Global or Volunteer
-computing model seems to be an interesting approach. Here, the computing power
-is obtained by agregating unused (or volunteer) public resources connected to
-the Internet. For our case, we can imagine for example, that a part of the
-architecture will be composed of some of the different computers of the
-hospital. This approach present the advantage to be clearly cheaper than a more
-dedicated approach like the use of supercomputer or clusters.
-
-The aim of this paper is to propose and evaluate a gridification of the Neurad
-application (more precisely, of the most time consuming part, the learning step)
-using a Global computing approach. For this, we focus on the XtremWeb-CH
-environnement []. We choose this environnent because it tackles the centralized
-aspect of other global computing environments such as XTremWeb [] or Seti []. It
-tends to a peer-to-peer approach by distributing some components of the
-architecture. For instance, the computing nodes are allowed to directly
-communicate. Experimentations were conducted on a real Global Computing
-testbed. The results are very encouraging. They exhibit an interesting speed-up
-and show that the overhead induced by the use of XTremWeb-CH is very acceptable.
-
-The paper is organized as follows. In section 2 we present the Neurad
-application and particularly it most time consuming part i.e. the learning
-step. Section 3 details the XtremWeb-CH environnement while in section 4 we
-expose the gridification of the Neurad application. Experimental results are
-presented in section 5 and we end in section 6 by some concluding remarks and
-perspectives.
+The use of distributed architectures for solving large scientific
+problems is becoming mandatory in many cases. For example, in the
+domain of radiotherapy dose computation the problem is crucial. The
+main goal of external beam radiotherapy is the treatment of tumors
+while minimizing exposure to healthy tissue. Dosimetric planning must
+be carried out to optimize the dose distribution within the
+patient. Thus, to determine the
+most accurate dose distribution during treatment planning, a
+compromise must be found between the precision and the speed of
+calculation. Current techniques, using analytic methods, models and
+databases, are rapid but lack precision. Enhanced precision can be
+achieved by using calculation codes based, for example, on Monte Carlo
+methods. In [] the authors proposed a novel approach using neural
+networks. This approach is based on the collaboration of computation
+codes and multi-layer neural networks used as universal
+approximators. It provides a fast and accurate evaluation of radiation
+doses in any given environment for given irradiation parameters. As
+the learning step is often very time consuming, in \cite{bcvsv08:ip}
+the authors proposed a parallel algorithm that decomposes the
+learning domain into subdomains. This decomposition significantly
+reduces the complexity of the target functions to approximate.
+
+Now, as there exist several classes of distributed/parallel
+architectures (supercomputers, clusters, global computing, etc.), we
+have to choose the one best suited to the parallel Neurad
+application. The Global or Volunteer Computing model seems to be an
+interesting approach. Here, the computing power is obtained by
+aggregating unused (or volunteered) public resources connected to the
+Internet. In our case, we can imagine, for example, that part of the
+architecture will be composed of some of the hospital's computers.
+This approach has the advantage of being clearly cheaper than a more
+dedicated approach such as the use of supercomputers or clusters.
+
+The aim of this paper is to propose and evaluate a gridification of
+the Neurad application (more precisely, of the most time consuming
+part, the learning step) using a Global Computing approach. For this,
+we focus on the XtremWeb-CH environment []. We chose this environment
+because it tackles the centralized aspect of other global computing
+environments such as XtremWeb [] or SETI []. It tends toward a
+peer-to-peer approach by distributing some components of the
+architecture; for instance, the computing nodes are allowed to
+communicate directly. Experiments were conducted on a real Global
+Computing testbed. The results are very encouraging: they exhibit an
+interesting speed-up and show that the overhead induced by the use of
+XtremWeb-CH is quite acceptable.
+
+The paper is organized as follows. In Section 2 we present the Neurad
+application and, in particular, its most time-consuming part, i.e.,
+the learning step. Section 3 details the XtremWeb-CH environment,
+while in Section 4 we describe the gridification of the Neurad
+application. Experimental results are presented in Section 5, and we
+conclude in Section 6 with some remarks and perspectives.