-and this is the most time consuming step. This step is off-line but it
-is important to reduce the time used for the learning process to keep
-a workable tool. Indeed, if the learning time is too huge (for the
-moment, this time could reach one week for a limited domain), this
-process should not be launched at any time, but only when a major
-modification occurs in the environment, like a change of context for
-instance. However, it is interesting to update the knowledge of the
-neural network, by using the learning process, when the domain evolves
-(evolution in material used for the prosthesis or evolution on the
-beam (size, shape or energy)). The learning time is related to the
-volume of data who could be very important in a real medical context.
-A work has been done to reduce this learning time with the
-parallelization of the learning process by using a partitioning method
-of the global dataset. The goal of this method is to train many neural
-networks on sub-domains of the global dataset. After this training,
-the use of these neural networks all together allows to obtain a
-response for the global domain of study.
+and this is the most time-consuming step. Although this step is
+performed off-line, it is important to reduce the time taken by the
+learning process so that the tool remains workable. Indeed, if the
+learning time is too long (at present it can reach one week for a
+limited domain), the process should not be launched at any time, but
+only when a major modification occurs in the environment, such as a
+change of context. However, it is useful to update the knowledge of
+the neural network, by re-running the learning process, when the
+domain evolves (a change in the material used for the prosthesis, or
+in the beam size, shape or energy). The learning time depends on the
+volume of data, which can be very large in a real medical
+context. Work has been done to reduce this learning time by
+parallelizing the learning process using a partitioning method on the
+global dataset. The goal of this method is to train several neural
+networks on sub-domains of the global dataset. After this training,
+using these neural networks together yields a response over the
+global domain of study.
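The partition-and-combine scheme described in this hunk (split the global dataset into sub-domains, train one network per sub-domain, then use them together to answer queries over the whole domain) could be sketched roughly as follows. All names are hypothetical, the "local networks" are stood in for by least-squares line fits, and the routing rule for combining responses is an assumption, since the text does not specify how the merging is done; in a real setting each sub-domain would train a full neural network, possibly in parallel.

```python
# Hypothetical sketch of training one model per sub-domain of a
# partitioned dataset, then combining them to cover the global domain.

def partition(dataset, n_parts):
    """Split the global dataset into contiguous sub-domains by input value."""
    data = sorted(dataset)                       # order samples by input x
    size = (len(data) + n_parts - 1) // n_parts  # ceil division
    return [data[i:i + size] for i in range(0, len(data), size)]

def train_local_model(subset):
    """Stand-in for training one neural network on one sub-domain:
    here, a least-squares fit of y = a*x + b on the subset."""
    n = len(subset)
    sx = sum(x for x, _ in subset)
    sy = sum(y for _, y in subset)
    sxx = sum(x * x for x, _ in subset)
    sxy = sum(x * y for x, y in subset)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    lo, hi = subset[0][0], subset[-1][0]         # sub-domain bounds
    return {"a": a, "b": b, "lo": lo, "hi": hi}

def train_partitioned(dataset, n_parts):
    """Train one local model per sub-domain (each fit is independent,
    so this loop is what would be parallelized)."""
    return [train_local_model(s) for s in partition(dataset, n_parts)]

def predict(models, x):
    """Combine the local models: route the query to the sub-domain
    whose training range contains x (nearest range at the borders)."""
    best = min(models, key=lambda m: max(m["lo"] - x, 0.0, x - m["hi"]))
    return best["a"] * x + best["b"]
```

For example, fitting `y = x*x` sampled at integer points with four partitions gives four local models, and `predict` answers queries anywhere on the global domain by delegating to the appropriate sub-domain model.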