some afaik-open problems relating to bridging parametrized bayes with sth like solomonoff induction
I think that for each NN architecture+prior+task/loss, conditioning the initialization prior on the train data (or doing some other bayesian thing) is typically a very different learning algorithm from (S)GD-learning, because local learning is a very different thing; this is one reason I doubt the story in the slides as an explanation of generalization in deep learning[1].[2] But setting this aside (though I will touch on it again briefly in the last point I make below), I agree it would be cool to have a story connecting the parametrized bayesian thing to something like Solomonoff induction. Here’s an outline of an attempt to give a more precise story extending the one in Lucius’s slides, with a few afaik-open problems:
Let’s focus on boolean functions (because that’s easy to think about, but feel free to make a different choice). Let’s say a learner is shown certain input-output pairs (that’s “training it”) and then has to predict outputs on new inputs (that’s “test time”). Let’s say we’re interested in understanding something about which learning setups “generalize well” to these new inputs.
What should we mean by “generalizing well” in this context? This isn’t so clear to me. We could e.g. ask that the learner does well on problems “like this” which come up in practice, but to solve such problems one would want to look at what situation gave us the problem and so on, which doesn’t seem like the kind of data we want to include in the problem setup here. We could instead imagine simply removing such data and asking for something that would work well in practice, but this still doesn’t seem like such a clean criterion.
But anyway, the following seems like a reasonable Solomonoff-like thing:
There’s some complexity prior (i.e., probably a size/[description length] prior) on boolean circuits. There can be multiple reasonable choices of [types of circuits admitted] and/or [description language], probably giving genuinely different priors here, but make some choice (it seems fine to make whatever reasonable choice fits best with the later parts of the story we’re attempting to build).
Think of all the outputs (i.e. train and test) as being generated by taking a circuit from this prior and running the inputs through it.
To predict outputs on new inputs, just do the bayesian thing (i.e. condition the induced prior on functions on all the outputs you’ve seen).
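For concreteness, here’s a brute-force toy version of this procedure in code. This is a minimal sketch only: it approximates the bayesian conditioning by rejection sampling, and the gate set, the geometric size prior, and the encoding are one arbitrary choice among the many reasonable choices mentioned above.

```python
import random

# Toy simplicity-prior-circuit-solomonoff via rejection sampling.
# Gate set, size prior, and encoding are arbitrary illustrative choices.

GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NAND": lambda a, b: 1 - (a & b),
    "XOR":  lambda a, b: a ^ b,
}

def sample_circuit(n_inputs, rng):
    """Sample a circuit from a rough size prior, P(size = s) = 2^(-s),
    with the gates uniformly random given the size."""
    size = 1
    while rng.random() < 0.5:
        size += 1
    gates = []
    for g in range(size):
        op = rng.choice(list(GATES))
        a = rng.randrange(n_inputs + g)   # read from an input or an earlier gate
        b = rng.randrange(n_inputs + g)
        gates.append((op, a, b))
    return gates  # the circuit's output is the output of the last gate

def run_circuit(gates, x):
    wires = list(x)
    for op, a, b in gates:
        wires.append(GATES[op](wires[a], wires[b]))
    return wires[-1]

def posterior_predictive(train, test_inputs, n_inputs, n_samples=200_000, seed=0):
    """Condition the induced prior on functions on the observed input-output pairs
    (by rejection), then predict each test output by posterior majority vote."""
    rng = random.Random(seed)
    votes = [0] * len(test_inputs)
    kept = 0
    for _ in range(n_samples):
        gates = sample_circuit(n_inputs, rng)
        if all(run_circuit(gates, x) == y for x, y in train):
            kept += 1
            for i, x in enumerate(test_inputs):
                votes[i] += run_circuit(gates, x)
    return [v / kept > 0.5 for v in votes] if kept else None

# e.g. show the learner three rows of a 2-input truth table and ask about the fourth:
train = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0)]
print(posterior_predictive(train, [(1, 1)], n_inputs=2))
```

Nothing here is meant to be efficient or canonical; it’s just to pin down what “condition the induced prior on functions on all the outputs you’ve seen” refers to.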
My suggestion is that to explain why another learning setup (for boolean functions) has good generalization properties, we could be sort of happy with building a bridge between it and the above simplicity-prior-circuit-solomonoff thing. (This could let us bypass having to further specify what it is to generalize well.)
One key step in the present attempt at building a bridge from NN-bayes to simplicity-prior-circuit-solomonoff is to get from simplicity-prior-circuit-solomonoff to a setup with a uniform prior over circuits — the story would like to say that instead of picking circuits from a simplicity prior, you can pick circuits uniformly at random from among all circuits of up to a certain size. The first main afaik-open problem I want to suggest is to actually work out this step: to provide a precise setup where the uniform prior on boolean circuits up to a certain size is like the simplicity prior on boolean circuits (and to work out the correspondence). (It could also be interesting and [sufficient for building a bridge] to argue that the uniform prior on boolean circuits has good generalization properties in some other way.) I haven’t thought about this that much, but my initial sense is that this could totally be false unless one is careful about getting the right setup (for example: given inputs-outputs from a particular boolean function with a small circuit, maybe it would work up to a certain upper bound on the size of the circuits on which we have a uniform prior, and then stop working; and/or maybe it depends more precisely on our [types of circuits admitted] and/or [description language]). (I know there is this story with programs, but idk how to get such a correspondence for circuits from that, and the correspondence for circuits seems like what we actually need/want.)
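Just to restate this first problem in symbols (this is only the text above rewritten, nothing new): writing C_s for the set of admitted circuits of size at most s and ℓ(C) for the description length of a circuit C, the two induced priors on boolean functions in play are

```latex
% prior on functions induced by the uniform prior over circuits of size at most s
P_s(f) \;=\; \frac{\#\{\, C \in \mathcal{C}_s : C \text{ computes } f \,\}}{\#\,\mathcal{C}_s}

% prior on functions induced by the simplicity / description-length prior on circuits
Q(f) \;\propto\; \sum_{C \,:\, C \text{ computes } f} 2^{-\ell(C)}
```

and the problem is to find a precise setup (a choice of admitted circuits, description language, and range of s, possibly depending on the target function) in which conditioning P_s on the train data generalizes like conditioning Q does, or else to argue directly that conditioning P_s generalizes well.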
The second afaik-open problem I’m suggesting is to figure out in much more detail how to get from e.g. the MLP with a certain prior to boolean circuits with a uniform prior.
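For concreteness again, here’s a minimal sketch of what the left-hand side of this bridge (the “MLP with a certain prior” doing the bayesian thing on boolean functions) could mean operationally. The architecture, the gaussian initialization prior, the ±1 input encoding, and the output thresholding are all illustrative choices, and the rejection-sampling approximation is of course hopeless at any realistic scale.

```python
import numpy as np

# Minimal sketch of "MLP with a certain prior, conditioned on the train data":
# exact bayes over the weights by rejection sampling from a gaussian
# initialization prior, with outputs thresholded to get boolean functions.
# Architecture, prior scale, encoding, and thresholding are illustrative choices.

def sample_mlp(n_in, width, rng):
    """Draw one weight setting from a 1/sqrt(fan_in)-scaled gaussian init prior."""
    W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(width, n_in))
    b1 = rng.normal(0.0, 1.0, size=width)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=width)
    b2 = rng.normal(0.0, 1.0)
    return W1, b1, W2, b2

def mlp_boolean(params, x):
    """Run the MLP on a +/-1 encoded boolean input; threshold the output to {0,1}."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, W1 @ x + b1)   # ReLU hidden layer
    return int(W2 @ h + b2 > 0)

def nn_bayes_predict(train, test_inputs, n_in, width=32, n_samples=50_000, seed=0):
    """Condition the prior over weights on fitting the train pairs exactly,
    then predict each test output by posterior majority vote."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(test_inputs))
    kept = 0
    for _ in range(n_samples):
        params = sample_mlp(n_in, width, rng)
        if all(mlp_boolean(params, x) == y for x, y in train):   # rejection step
            kept += 1
            votes += [mlp_boolean(params, x) for x in test_inputs]
    return (votes / kept > 0.5) if kept else None

# e.g. +/-1 encoded inputs, again showing the learner three rows of a truth table:
train = [(np.array([-1., -1.]), 0), (np.array([-1., 1.]), 0), (np.array([1., -1.]), 0)]
print(nn_bayes_predict(train, [np.array([1., 1.])], n_in=2))
```

The second open problem is then (roughly) to relate the function-space distribution this induces to the uniform-prior-over-circuits setup above.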
One reason I’m stressing these afaik-open problems (particularly the second one) is that I’m pretty sure many parametrized bayesian setups do not in fact give good generalization behavior — one probably needs some further things (about the architecture+prior, given the task) to go right to get good generalization (in fact, I’d guess that it’s “rare” to get good generalization without these further unclear hyperparams taking on the right values), and one’s attempt at building a bridge should probably make contact with these further things (so as to not be “explaining” a falsehood).
One interesting example is given by MLPs in the NN gaussian process limit (i.e. a certain kind of initialization + taking the width to infinity) learning boolean functions, which I think ends up being equivalent to kernel ridge regression with the fourier basis on boolean functions as the kernel features (with certain L2 weights depending on the size of the XOR), which I think doesn’t have great generalization properties — in particular, it’s quite unlike simplicity-prior-circuit-solomonoff, and it’s probably fair to think of it as doing sth more like a polyfit in some sense. I think this also happens for the NTK, btw. (But I should say I’m going off some only loosely figured out calculations (joint with Dmitry Vaintrob and o1-preview) here, so there’s a real chance I’m wrong about this example and you shouldn’t completely trust me on it currently.) But I’d guess that deep learning can do somewhat better than this. (speculation: Maybe a major role in getting bad generalization here is played by the NNGP and NTK not “learning intermediate variables”, preventing any analogy with boolean circuits with some depth going through, whereas deep learning can learn intermediate variables to some extent.) So if we want to have a correct solomonoff story which explains better generalization behavior than that of this probably fairly stupid kernel thing, then we would probably want the story to make some distinction which prevents it from also applying in this NNGP limit. (Anyway, even if I’m wrong about the NNGP case, I’d guess that most setups provide examples of fairly poor generalization, so one probably really needn’t appeal to NNGP calculations to make this point.)
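To make the kernel side of this example concrete, here’s a minimal sketch of kernel ridge regression with an NNGP kernel on ±1-encoded boolean inputs. I’m assuming a one-hidden-layer ReLU network, whose infinite-width NNGP kernel is (up to an overall scaling convention) the degree-1 arc-cosine kernel of Cho & Saul; the depth, the ridge parameter, and the encoding are illustrative choices, and this is not a reproduction of the loosely figured out calculations mentioned above.

```python
import numpy as np

# Minimal sketch of the NNGP-limit example: kernel ridge regression with the
# infinite-width kernel of a one-hidden-layer ReLU MLP (the degree-1 arc-cosine
# kernel, up to overall scale), applied to +/-1 encoded boolean inputs.

def arccos_kernel(X, Y):
    """Degree-1 arc-cosine kernel:
    k(x, y) = (1/pi) * ||x|| * ||y|| * (sin t + (pi - t) * cos t), t = angle(x, y)."""
    nx = np.linalg.norm(X, axis=1)
    ny = np.linalg.norm(Y, axis=1)
    cos = np.clip((X @ Y.T) / np.outer(nx, ny), -1.0, 1.0)
    t = np.arccos(cos)
    return np.outer(nx, ny) * (np.sin(t) + (np.pi - t) * np.cos(t)) / np.pi

def nngp_predict(X_train, y_train, X_test, ridge=1e-3):
    """Kernel ridge regression (equivalently, a GP posterior mean with noise = ridge)."""
    K = arccos_kernel(X_train, X_train)
    K_star = arccos_kernel(X_test, X_train)
    alpha = np.linalg.solve(K + ridge * np.eye(len(X_train)), y_train)
    return K_star @ alpha

# e.g. show it three rows of the 2-input XOR truth table (labels in {-1, +1}):
X_train = np.array([[-1., -1.], [-1., 1.], [1., -1.]])
y_train = np.array([-1., 1., 1.])
X_test = np.array([[1., 1.]])
pred = nngp_predict(X_train, y_train, X_test)
print(pred, np.sign(pred))
```

The claimed equivalence to kernel ridge regression in the fourier/parity basis (with weights depending on the size of the XOR) would be a statement about the eigendecomposition of this kernel on the boolean hypercube; the sketch doesn’t check that, it just shows the kind of predictor the NNGP limit gives.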
Separately from the above bridge attempt, it is not at all obvious to me that parametrized bayes in fact has such good generalization behavior (i.e., “at least as good as deep learning”, whatever that means, let’s say)[3]; here are some messages on this topic I sent to [the group chat in which the posted discussion happened] later:
“i’d be interested in hearing your reasons to think that NN-parametrized bayesian inference with a prior given by canonical initialization randomization (or some other reasonable prior) generalizes well (for eg canonical ML tasks or boolean functions), if you think it does — this isn’t so clear to me at all
practical SGD-NNs generalize decently, but that’s imo a sufficiently different learning process to give little evidence about the bayesian case (but i’m open to further discussion of this). i have some vague sense that the bayesian thing should be better than SGD, but idk if i actually have good reason to believe this?
i assume that there are some other practical ML things inspired by bayes which generalize decently but it seems plausible that those are still pretty local so pretty far from actual bayes and maybe even closer to SGD than to bayes, tho idk what i should precisely mean by that. but eg it seems plausible from 3 min of thinking that some MCMC (eg SGLD) setup with a non-galactic amount of time on a NN of practical size would basically walk from init to a local likelihood max and not escape it in time, which sounds a lot more like SGD than like bayes (but idk maybe some step size scheduling makes the mixing time non-galactic in some interesting case somehow, or if it doesn’t actually do that maybe it can give a fine approximation of the posterior in some other practical sense anyway? seems tough). i haven’t thought about variational inference much tho — maybe there’s something practical which is more like bayes here and we could get some relevant evidence from that
maybe there’s some obvious answer and i’m being stupid here, idk :)
one could also directly appeal to the uniformly random program analogy but the current version of that imo doesn’t remotely constitute sufficiently good reason to think that bayesian NNs generalize well on its own”
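To pin down what I mean by the SGLD bit in those messages, here’s the generic update rule I have in mind. This is a minimal sketch: the constant step size, the minibatching, and the toy usage at the end are illustrative choices, and whether anything like this mixes in non-galactic time on a practically sized NN is exactly the question being raised.

```python
import numpy as np

# Generic stochastic gradient Langevin dynamics: each step is a minibatch
# gradient step on the log-posterior plus gaussian noise scaled so that, in the
# idealized well-mixed regime, the iterates approximate posterior samples.

def sgld(grad_log_prior, grad_log_lik, data, theta0, n_steps=20_000,
         step_size=1e-4, batch_size=32, seed=0):
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    n = len(data)
    samples = []
    for _ in range(n_steps):
        idx = rng.choice(n, size=batch_size)
        # unbiased estimate of the gradient of the log-posterior
        grad = grad_log_prior(theta) + (n / batch_size) * sum(
            grad_log_lik(theta, *data[i]) for i in idx)
        theta = (theta + 0.5 * step_size * grad
                 + rng.normal(0.0, np.sqrt(step_size), size=theta.shape))
        samples.append(theta.copy())
    return samples

# toy usage where SGLD does mix: gaussian prior and likelihood for a scalar mean
y_obs = np.random.default_rng(1).normal(2.0, 1.0, size=200)
data = [(None, y) for y in y_obs]
samples = sgld(grad_log_prior=lambda th: -th,         # N(0, 1) prior
               grad_log_lik=lambda th, x, y: y - th,  # N(th, 1) likelihood
               data=data, theta0=np.zeros(1))
print(np.mean(samples[len(samples) // 2:]))           # should sit near the posterior mean
```

The worry above is that on a real NN loss landscape the same dynamics would, for any non-galactic number of steps, just settle into the neighborhood of one local likelihood maximum near the initialization, which is a lot more like SGD than like sampling the posterior.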
[1] to the extent that deep learning in fact exhibits good generalization, which is probably a very small extent compared to sth like Solomonoff induction, and this has to do with some stuff I talked about in my messages in the post above; but I digress
[2] I also think that different architecture+prior+task/loss choices probably give many substantially-differently-behaved learning setups, deserving somewhat separate explanations of generalization, for both bayes and SGD.
[3] to be clear: I’m not claiming it doesn’t have good generalization behavior; instead, I lack good evidence/reason to think it does or doesn’t and feel like I don’t know