(I somehow didn’t notice your comment until now.) I believe you are correct. The function-approximation theorem I know of also uses brute force (i.e., very large networks) in its proof, so it doesn’t seem like evidence for the existence of [weights that implement algorithms].
(And I am definitely not talking about algorithms in terms of input/output behavior.)
I’ve changed the paragraph to:
You might wonder what the space of all models looks like. The typical answer is that the possible models are sets of weights for a neural network. The problem exists insofar as some sets of weights implement specific search algorithms.
Anyone who knows of alternative evidence I can point to here is welcome to reply to this comment.