Ah, OK. Interesting, thanks. Would you agree with the following view:
“The NTK/GP stuff has neural nets implementing a “pseudosimplicity prior”, which is maybe also a simplicity prior but might not be; the evidence is unclear. A pseudosimplicity prior is like a simplicity prior, except that there are some important classes of Kolmogorov-simple functions that don’t get high prior / high measure.”
Which would you say is more likely: (a) the NTK/GP stuff is indeed not universally data-efficient, and thus modern neural nets aren’t either, or (b) the NTK/GP stuff is indeed not universally data-efficient, and thus modern neural nets aren’t well-characterized by the NTK/GP stuff?
Yeah, that summary sounds right.
I’d say (b) -- it seems quite unlikely to me that NTK/GP models are universally data-efficient, whereas neural nets might be (although that’s mostly speculation on my part). I think the lack of feature learning is a stronger argument that NTK/GP don’t characterize neural nets well.
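(A purely illustrative toy sketch of the feature-learning point, under my own assumptions rather than anything stated above: in the NTK/“lazy” limit a wide network can fit its training data while its first-layer weights, and hence its learned features, barely move. The sketch below trains a two-layer ReLU net in NTK parameterization at several widths and reports how far the first-layer weights move from initialization; as width grows, that relative movement should shrink, which is the “no feature learning” behavior that trained finite networks don’t seem to share.)

```python
import torch

torch.manual_seed(0)

def train_and_measure(width, d=10, n=30, steps=1000, lr=1.0):
    """Train a two-layer ReLU net in NTK parameterization and report how far
    the first-layer weights move from initialization (a crude proxy for
    feature learning)."""
    X = torch.randn(n, d)
    y = torch.randn(n, 1)

    # NTK parameterization: f(x) = (1/sqrt(width)) * V relu(W x),
    # with W scaled by 1/sqrt(d) so preactivations are O(1).
    W = (torch.randn(width, d) / d**0.5).requires_grad_()
    V = torch.randn(1, width).requires_grad_()
    W0 = W.detach().clone()

    def forward(x):
        return torch.relu(x @ W.t()) @ V.t() / width**0.5

    opt = torch.optim.SGD([W, V], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((forward(X) - y) ** 2).mean()
        loss.backward()
        opt.step()

    # Relative movement of the first-layer weights: small => features ~frozen.
    rel_change = ((W.detach() - W0).norm() / W0.norm()).item()
    return loss.item(), rel_change

for m in (100, 1000, 10000):
    final_loss, rel = train_and_measure(m)
    print(f"width={m:>6}  final_loss={final_loss:.4f}  "
          f"relative first-layer weight change={rel:.4f}")
```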