How can neural networks approximate functions well in practice, when the set of possible functions is exponentially larger than the set of practically possible networks?
This question answers itself. If neural networks could really approximate every possible function, they could never generalize: fitting the training data would tell you nothing about unseen data. That is the whole point of statistical learning theory: you get a Probably Approximately Correct (PAC) generalization bound when 1) your learning machine achieves good empirical accuracy and 2) the set of functions the machine can express is, in some sense, small relative to the amount of training data.
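As a rough sketch of point (2), here is the classical PAC bound for the simplest setting, a finite hypothesis class (the symbols below, R for true risk, R̂ for empirical risk, H for the hypothesis class, m for the number of i.i.d. training examples, and δ for the failure probability, are my notation, not anything from the question): with probability at least 1 − δ, every h in H satisfies

$$
R(h) \;\le\; \widehat{R}(h) \;+\; \sqrt{\frac{\ln|H| + \ln(1/\delta)}{2m}}
$$

The ln|H| term is exactly the "number of expressible functions" entering the bound. If the network class were genuinely able to represent every function, that term (or its analogue for infinite classes, such as VC dimension or Rademacher complexity) would be effectively unbounded and the guarantee would be vacuous; generalization is only possible because the practically reachable function class is far smaller than "all functions."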