From: SAUGET Marc
Date: Tue, 11 Jan 2011 09:04:32 +0000 (+0100)
Subject: Small corrections (email address, lab affiliation)
X-Git-Url: http://info.iut-bm.univ-fcomte.fr/pub/gitweb/gpc2011.git/commitdiff_plain/70d6de89376491e1fefe8ef5a68591b9ce4b8773
Small corrections (email address, lab affiliation)
Added a reference and comments at the requested places

diff --git a/gpc2011.tex b/gpc2011.tex
index 83a04b4..c9f08f8 100644
--- a/gpc2011.tex
+++ b/gpc2011.tex
@@ -47,7 +47,7 @@
\author{Nabil Abdennhader\inst{1} \and Mohamed Ben Belgacem\inst{1} \and Raphaël Couturier\inst{2} \and
- David Laiymani\inst{2} \and Sébastien Miquée\inst{2} \and Marko Niinimaki\inst{1} \and Marc Sauget\inst{2}}
+ David Laiymani\inst{2} \and Sébastien Miquée\inst{2} \and Marko Niinimaki\inst{1} \and Marc Sauget\inst{3}}
\institute{
University of Applied Sciences Western Switzerland, hepia Geneva,
@@ -60,7 +60,7 @@ Laboratoire d'Informatique de l'universit\'{e}
\email{\{raphael.couturier,david.laiymani,sebastien.miquee\}@univ-fcomte.fr}
\and
FEMTO-ST, ENISYS/IRMA, F-25210 Montb\'{e}liard, FRANCE\\
-\email{marc.sauget@femto-st.fr}
+\email{marc.sauget@univ-fcomte.fr}
}
@@ -95,13 +95,13 @@ techniques, using analytic methods, models and databases, are rapid
but lack precision. Enhanced precision can be achieved by using
calculation codes based, for example, on Monte Carlo methods. The main
drawback of these methods is their computation times which can be
-rapidly huge. In \cite{} the authors proposed a novel approach, called
+very large. In \cite{NIMB2008} the authors proposed a novel approach, called
Neurad, using neural networks. This approach is based on the
collaboration of computation codes and multilayer neural networks
used as universal approximators. It provides a fast and accurate
evaluation of radiation doses in any given environment for given
irradiation parameters. As the learning step is often very time
-consuming, in \cite{} the authors proposed a parallel
+consuming, in \cite{AES2009} the authors proposed a parallel
algorithm that makes it possible to decompose the learning domain into
subdomains. This decomposition has the advantage of significantly
reducing the complexity of the target functions to approximate.
@@ -146,29 +146,27 @@ end in Section 6 by some concluding remarks and perspectives.
\label{f_neurad}
\end{figure}
-The \emph{Neurad}~\cite{Neurad} project presented in this paper takes
-place in a multidisciplinary project, involving medical physicists
-and computer scientists whose goal is to enhance the treatment
-planning of cancerous tumors by external radiotherapy. In our previous
-works~\cite{RADIO09,ICANN10,NIMB2008}, we have proposed an original
-approach to solve scientific problems whose accurate modeling and/or
-analytical description are difficult. That method is based on the
-collaboration of computational codes and neural networks used as
-universal interpolator. Thanks to that method, the \emph{Neurad}
-software provides a fast and accurate evaluation of radiation doses in
-any given environment (possibly inhomogeneous) for given irradiation
-parameters. We have shown in a previous work (\cite{AES2009}) the
-interest to use a distributed algorithm for the neural network
-learning. We use a classical RPROP (DEFINITION)algorithm with a HPU
-topology to do the training of our neural network.
-
-Figure~\ref{f_neurad} presents the {\it{Neurad}} scheme. Three parts
-are clearly independent: the initial data production, the learning
-process and the dose deposit evaluation. The first step, the data
-production, is outside of the {\it{Neurad}} project. They are many
-solutions to obtain data about the radiotherapy treatments like the
-measure or the simulation. The only essential criterion is that the
-result must be obtained in an homogeneous environment.
+The \emph{Neurad}~\cite{Neurad} project presented in this paper is part of a
+multidisciplinary project, involving medical physicists and computer scientists,
+whose goal is to enhance the treatment planning of cancerous tumors by external
+radiotherapy. In our previous works~\cite{RADIO09,ICANN10,NIMB2008}, we have
+proposed an original approach to solve scientific problems whose accurate
+modeling and/or analytical description are difficult. This method is based on
+the collaboration of computational codes and neural networks used as universal
+interpolators. Thanks to this method, the \emph{Neurad} software provides a fast
+and accurate evaluation of radiation doses in any given environment (possibly
+inhomogeneous) for given irradiation parameters. In a previous
+work~\cite{AES2009}, we have shown the interest of using a distributed algorithm
+for the neural network learning. We use a classical RPROP\footnote{Resilient
+backpropagation} algorithm with an HPU\footnote{High-order processing units}
+topology to train our neural network.
+
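For readers unfamiliar with RPROP, the rule adapts one step size per weight from the sign of two successive gradients. The sketch below is a generic illustration using the usual default factors (1.2, 0.5) and step bounds from the RPROP literature, not the actual Neurad implementation:

```python
def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RPROP update for a single weight: only the SIGN of the
    gradient is used; the size of the move is the adaptive step."""
    if grad * prev_grad > 0:          # same sign twice: accelerate
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:        # sign flip: we overshot, slow down
        step = max(step * eta_minus, step_min)
        return w, step, 0.0           # skip the move, reset gradient memory
    if grad > 0:                      # move against the gradient
        w -= step
    elif grad < 0:
        w += step
    return w, step, grad

# toy run: minimise f(w) = (w - 3)^2, whose gradient is 2*(w - 3)
w, step, prev = 10.0, 0.1, 0.0
for _ in range(100):
    g = 2.0 * (w - 3.0)
    w, step, prev = rprop_step(w, g, prev, step)
```

Because only the gradient sign is used, the method is insensitive to the gradient magnitude, which is one reason it works well for batch neural network training.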
+Figure~\ref{f_neurad} presents the {\it{Neurad}} scheme. Three parts are clearly
+independent: the initial data production, the learning process and the dose
+deposit evaluation. The first step, the data production, is outside of the
+{\it{Neurad}} project. There are many ways to obtain data about radiotherapy
+treatments, such as measurements or simulations. The only essential criterion is
+that the result must be obtained in a homogeneous environment.
% We have chosen to
% use only a Monte Carlo simulation because this kind of tool is the
@@ -187,24 +185,22 @@ result must be obtained in a homogeneous environment.
% \label{f_tray}
% \end{figure}
-The secondary stage of the {\it{Neurad}} project is the learning step
-and this is the most time consuming step. This step is performed
-offline but it is important to reduce the time used for the learning
-process to keep a workable tool. Indeed, if the learning time is too
-huge (for the moment, this time could reach one week for a limited
-domain), this process should not be launched at any time, but only
-when a major modification occurs in the environment, like a change of
-context for instance. However, it is interesting to update the
-knowledge of the neural network, by using the learning process, when
-the domain evolves (evolution in material used for the prosthesis or
-evolution on the beam (size, shape or energy)). The learning time is
-related to the volume of data who could be very important in a real
-medical context. A work has been done to reduce this learning time
-with the parallelization of the learning process by using a
-partitioning method of the global dataset. The goal of this method is
-to train many neural networks on subdomains of the global
-dataset. After this training, the use of these neural networks all
-together allows to obtain a response for the global domain of study.
+The second stage of the {\it{Neurad}} project is the learning step, which is the
+most time consuming one. This step is performed offline, but it is important to
+reduce the time used for the learning process to keep a workable tool. Indeed,
+if the learning time is too long (for the moment, this time can reach one week
+for a limited domain), this process should not be launched at any time, but only
+when a major modification occurs in the environment, like a change of context
+for instance. However, it is interesting to update the knowledge of the neural
+network, by using the learning process, when the domain evolves (evolution in
+the material used for the prosthesis or in the beam (size, shape or energy)).
+The learning time is related to the volume of data, which can be very large in a
+real medical context. Work has been done to reduce this learning time with the
+parallelization of the learning process by using a partitioning method of the
+global dataset. The goal of this method is to train many neural networks on
+subdomains of the global dataset. After this training, using these neural
+networks all together makes it possible to obtain a response for the global
+domain of study.
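The partitioning strategy can be illustrated with a toy sketch (hypothetical code, not the Neurad software): split the global domain into subdomains, train one small approximator per subdomain, and route each query to the model that owns it. Simple least-squares lines stand in here for the neural networks:

```python
import math

def fit_linear(xs, ys):
    """Least-squares line a*x + b, standing in for one subnetwork."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def train_partition(xs, ys, bounds):
    """Train one model per subdomain [bounds[i], bounds[i+1])."""
    models = []
    for lo, hi in zip(bounds, bounds[1:]):
        pts = [(x, y) for x, y in zip(xs, ys) if lo <= x < hi]
        sub_x, sub_y = zip(*pts)
        models.append((lo, hi, fit_linear(sub_x, sub_y)))
    return models

def predict(models, x):
    """Route a query to the subnetwork owning x."""
    for lo, hi, (a, b) in models:
        if lo <= x < hi:
            return a * x + b
    raise ValueError("x outside the global domain")

# global target: sin(x) over [0, 6), split into 4 subdomains
xs = [i * 0.05 for i in range(120)]
ys = [math.sin(x) for x in xs]
models = train_partition(xs, ys, [0.0, 1.5, 3.0, 4.5, 6.0])
```

Each model only has to capture a simpler, local piece of the target function, which is exactly why the decomposition reduces the complexity of the functions to approximate.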
\begin{figure}[h]
@@ -215,22 +211,24 @@ together allows to obtain a response for the global domain of study.
\label{fig:overlap}
\end{figure}

-However, performing the learning on subdomains constituting a
-partition of the initial domain is not satisfying according to the
-quality of the results. This comes from the fact that the accuracy of
-the approximation performed by a neural network is not constant over
-the learned domain. Thus, it is necessary to use an overlapping of
-the subdomains. The overall principle is depicted in
-Figure~\ref{fig:overlap}. In this way, each subnetwork has an
-exploitation domain smaller than its training domain and the
-differences observed at the borders are no longer relevant.
-Nonetheless, in order to preserve the performance of the parallel
-algorithm, it is important to carefully set the overlapping ratio
-$\alpha$. It must be large enough to avoid the border's errors, and
-as small as possible to limit the size increase of the data subsets
-(What about our tests?).

+% I have reread it but did not see the problem
+
+However, performing the learning on subdomains constituting a partition of the
+initial domain is not satisfactory with regard to the quality of the results.
+This comes from the fact that the accuracy of the approximation performed by a
+neural network is not constant over the learned domain. Thus, it is necessary to
+use an overlapping of the subdomains. The overall principle is depicted in
+Figure~\ref{fig:overlap}. In this way, each subnetwork has an exploitation
+domain smaller than its training domain and the differences observed at the
+borders are no longer relevant. Nonetheless, in order to preserve the
+performance of the parallel algorithm, it is important to carefully set the
+overlapping ratio $\alpha$. It must be large enough to avoid border errors, and
+as small as possible to limit the size increase of the data
+subsets~\cite{AES2009}.
+
+%(What about our tests?)
+% This parameter has already been studied in a previous paper, so it was chosen
+% to be fixed for these tests.
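The role of the overlapping ratio can be made concrete with a small sketch (a hypothetical helper, consistent with the overlap principle described above): every subnetwork is trained on an interval enlarged by alpha on each side, but exploited only on its core interval:

```python
def training_intervals(global_lo, global_hi, n_sub, alpha):
    """Split [global_lo, global_hi] into n_sub exploitation
    intervals and enlarge each by alpha (a fraction of the
    subdomain width) on both sides for training, clipped to
    the global domain."""
    width = (global_hi - global_lo) / n_sub
    intervals = []
    for i in range(n_sub):
        lo = global_lo + i * width          # exploitation interval
        hi = lo + width
        t_lo = max(global_lo, lo - alpha * width)   # training interval
        t_hi = min(global_hi, hi + alpha * width)
        intervals.append(((lo, hi), (t_lo, t_hi)))
    return intervals

# alpha = 0.25: each subnetwork trains on a 50% larger interval
parts = training_intervals(0.0, 8.0, 4, 0.25)
```

With alpha = 0.25 every interior training set covers a (1 + 2*alpha) = 1.5 times larger interval than the exploitation domain, which is the data-size increase that makes a careful choice of alpha necessary.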
\section{The XtremWeb-CH environment}
@@ -412,7 +410,14 @@ Our future works include the testing of the application on a larger
scale testbed. This implies the choice of a data input set
allowing a finer decomposition. Unfortunately, this choice of input
data is not trivial and relies on a large number of parameters
-(ask Marc for details here).
+
+%(ask Marc for details here).
+% If you want to mention the set of parameters that can be used to characterize
+% the irradiation conditions, you can talk about:
+% - characteristics of the irradiation beam: beam size (from a few mm to more
+%   than 40 cm), energy, SSD (source-surface distance),
+% - characteristics of the matter: density
+
+
\bibliographystyle{plain}
\bibliography{biblio}