That was indeed one of the hypotheses about why it was difficult to train the networks: the vanishing gradient problem. In retrospect, one of the main reasons this happened was the use of saturating nonlinearities in the network, nonlinearities like the logistic function or tanh that flatten out at their extremes (the logistic asymptotes at 0 and 1, tanh at -1 and 1). Because they saturate, their derivatives become very small for inputs of even moderate size, and the deeper the network, the more those small factors compound during backpropagation.

The first large-scale network that fixed this was by Krizhevsky et al., which used the Rectified Linear Unit (ReLU), given by f(x) = max(0, x), as its nonlinearity. The earliest reference I can find to using ReLUs is Jarrett et al., but since Krizhevsky's result pretty much everyone uses ReLUs (or some variant thereof). In fact, the first result I've seen showing that logistic/tanh nonlinearities can work in deep networks is the batch normalization paper Sean_o_h linked, which gets around the problem by normalizing the input to each nonlinearity, which presumably prevents the units from saturating too much (though this is still an open question).
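To make the compounding concrete, here is a minimal sketch (my own illustration, not anything from the papers above) of the per-layer derivative factor each nonlinearity contributes during backpropagation. The depth of 50 and the pre-activation value of 2.0 are arbitrary choices for illustration, and the weight matrices are ignored for simplicity:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)            # never exceeds 0.25

def d_tanh(x):
    return 1.0 - math.tanh(x) ** 2  # at most 1.0, and tiny once |x| is a few units

def d_relu(x):
    return 1.0 if x > 0 else 0.0    # exactly 1 on the active side, so no shrinkage

depth = 50   # illustrative network depth
x = 2.0      # illustrative pre-activation, well into the saturating region

for name, grad in [("logistic", d_sigmoid), ("tanh", d_tanh), ("ReLU", d_relu)]:
    per_layer = grad(x)
    # Backprop multiplies one such factor per layer, so the gradient reaching
    # the earliest layers scales roughly like per_layer ** depth.
    print(f"{name:8s}: per-layer factor {per_layer:.4f}, "
          f"compounded over {depth} layers: {per_layer ** depth:.3e}")
```

The logistic and tanh factors are around 0.1 and 0.07 at that input, so after 50 layers the gradient is effectively zero, while the ReLU factor stays at exactly 1 for active units.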