I was under the impression that training the whole network with gradient descent was impossible, because the back-propagated error becomes vanishingly small in the early layers (the vanishing gradient problem).
If you do it naively, yes. But researchers figured out how to attack that problem from multiple angles: the choice of the non-linear activation function, the specifics of the optimization algorithm, and the random distribution used to sample the initial weights.
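A minimal sketch of how those three angles look in practice, assuming PyTorch (the thread doesn't name a framework) and an arbitrary 784-input, 10-class network:

```python
import torch
import torch.nn as nn

# Non-saturating ReLU activations instead of sigmoids/tanh, so gradients
# don't shrink toward zero as they flow back through many layers.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# He (Kaiming) initialization: the initial weight variance is chosen so
# activations and gradients keep roughly the same scale across layers.
for layer in model:
    if isinstance(layer, nn.Linear):
        nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")
        nn.init.zeros_(layer.bias)

# An adaptive optimizer rescales per-parameter step sizes, which also helps
# when some raw gradients are small.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```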
Do you have a link explaining how they managed to train the whole network?
The batch normalization paper cited above is one example of that.
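For illustration, a hedged sketch of the idea from that paper: a normalization layer is inserted between the linear transform and the activation so each layer's inputs stay in a range where gradients propagate well (again assuming PyTorch and arbitrary layer sizes).

```python
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(256, 256),
    nn.BatchNorm1d(256),  # normalizes activations over the mini-batch
    nn.ReLU(),
)
```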