Right, and I’ve explained why I don’t think any of those analyses are relevant to neural networks. Deep learning simply does not search over Turing machines or circuits of varying lengths. It searches over parameters of an arithmetic circuit of fixed structure, size, and runtime. So Solomonoff induction, speed priors, and circuit priors are all inapplicable. There has been a lot of work in the mainstream science of deep learning literature on the generalization behavior of actual neural nets, and I’m pretty baffled at why you don’t pay more attention to that stuff.
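For concreteness on the "fixed structure, size, and runtime" claim, here is a minimal NumPy sketch, not taken from either speaker's own writing or code, of what gradient descent ranges over: a fixed arithmetic circuit whose only free quantities are the numerical values of its parameters.

```python
import numpy as np

# Minimal illustrative sketch (not from either speaker): a two-layer network whose
# structure, parameter count, and per-example runtime are fixed before training.
# Training only changes the values stored in W1, b1, W2, b2.

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 1

# Fixed-shape parameters: the "search space" is exactly these arrays' entries.
W1 = rng.normal(0, 0.5, (d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(0, 0.5, (d_hidden, d_out))
b2 = np.zeros(d_out)

def forward(x):
    """Same sequence of arithmetic ops for every input and every parameter setting."""
    h = np.maximum(x @ W1 + b1, 0.0)   # fixed-size matmul + ReLU
    return h @ W2 + b2                 # fixed-size matmul

# Toy regression data.
X = rng.normal(size=(64, d_in))
y = (X[:, :1] * 2.0 - X[:, 1:2]) + 0.1 * rng.normal(size=(64, 1))

lr = 0.05
for step in range(500):
    # Forward pass through the fixed circuit.
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)
    pred = h @ W2 + b2
    err = pred - y

    # Backpropagation by hand for this fixed circuit.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h_pre > 0)
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final mse:", float(np.mean((forward(X) - y) ** 2)))
```

No program text or circuit topology varies anywhere in this loop; the search is over the real-valued entries of W1, b1, W2, b2 and nothing else.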
It is trivially easy to modify the formalism to search only over fixed-size algorithms, and in fact that’s usually what I do when I run this sort of analysis. I feel like you still aren’t understanding the key criticism here—it’s really not about Solomonoff induction—and I’m not sure how to explain that in any way other than how I’ve already done so.
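One minimal way such a "fixed-size" restriction is sometimes written down, offered here only as an assumption about what the modification could look like rather than as this speaker's own construction: fix a program length n, put the uniform prior on n-bit programs, and push it forward to the functions they compute.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch of a fixed-length restriction of a program prior (an assumption for
% illustration, not a construction given by either speaker in this exchange).
\[
  P(p) = 2^{-n} \quad \text{for each } p \in \{0,1\}^{n},
  \qquad
  P(f) = \frac{\left|\{\, p \in \{0,1\}^{n} : p \text{ computes } f \,\}\right|}{2^{n}}.
\]
% Program length no longer distinguishes hypotheses here; any remaining bias
% comes from how many fixed-length programs implement the same function f.
\end{document}
```

Under this reading, "simplicity" enters through the counting term in the numerator rather than through variable program length; whether that is the analysis the speaker has in mind is not settled by this exchange.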
I’m going to assume you just aren’t very familiar with my writing, because working through empirical evidence about neural network inductive biases is something I love to do all the time.
What? Which formalism? I don’t see how this is true at all. Please elaborate or send an example of “modifying” Solomonoff so that all the programs have fixed length, or “modifying” the circuit prior so all circuits are the same size.
No, I’m pretty familiar with your writing. I still don’t think you’re focusing on mainstream ML literature enough because you’re still putting nonzero weight on these other irrelevant formalisms. Taking that literature seriously would mean ceasing to take the Solomonoff or circuit prior literature seriously.