OK, so, here is a question.
The abstract theory of InfraBayes (like the abstract theory of Bayes) elides computational concerns.
In reality, more or less all of ML can be thought of as a big search for good models, where "good" means something approximately like MAP, though we can also consider more sophisticated variational targets. This introduces two distinct types of approximation, both illustrated in the toy sketch after the list:
1. The optimization target is approximate.
2. The optimization itself gives only approximate maxima.
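Here is a minimal toy sketch of both approximations; everything in it (the linear model, Gaussian likelihood and prior, hyperparameters) is assumed for illustration, not taken from this exchange:

```python
# A toy illustration of the two approximations in ordinary ML training.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                 # toy inputs
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)   # toy targets

def neg_log_posterior(w, sigma2=0.01, tau2=1.0):
    """The 'exact' MAP target: -log p(y|X,w) - log p(w), up to a constant.
    The Gaussian likelihood and prior are assumptions of the toy."""
    nll = np.sum((y - X @ w) ** 2) / (2 * sigma2)
    nlp = np.sum(w ** 2) / (2 * tau2)
    return nll + nlp

# Approximation 1: the optimization target. In practice we optimize a
# surrogate (mean squared error plus weight decay) whose hyperparameters
# only roughly correspond to the likelihood and prior above.
def surrogate_loss(w, weight_decay=1e-3):
    return np.mean((y - X @ w) ** 2) + weight_decay * np.sum(w ** 2)

# Approximation 2: the optimization itself. A few hundred gradient steps
# return only an approximate minimum of the surrogate, not the MAP point.
w = np.zeros(5)
lr = 0.01
for _ in range(200):
    grad = -2 * X.T @ (y - X @ w) / len(y) + 2 * 1e-3 * w
    w -= lr * grad

print("surrogate loss at approximate optimum:", surrogate_loss(w))
print("exact MAP objective at the same point:", neg_log_posterior(w))
```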
What we want out of InfraBayes is a bounded regret guarantee (in settings where we previously didn’t know how to get one). What we have is a picture of how to get that if we can actually do the generalized Bayesian update. What we might want is a picture of how to do that more generally, when we can’t actually compute the full update.
Can we get such a thing with InfraBayes?
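To pin down what "bounded regret guarantee" is gesturing at, here is one way it might be spelled out; the notation ($\mathcal{H}$, $\Theta$, $r_t$, $T$) is my own, and the actual definitions in the infra-Bayes literature differ in details:

```latex
% Classical version: for every environment h in a hypothesis class H, the
% best achievable expected reward minus the learner's grows sublinearly.
\[
  \forall h \in \mathcal{H}:\quad
  \max_{\pi^{*}} \mathbb{E}^{h,\pi^{*}}\Bigl[\sum_{t=1}^{T} r_t\Bigr]
  \;-\; \mathbb{E}^{h,\pi}\Bigl[\sum_{t=1}^{T} r_t\Bigr]
  \;=\; o(T).
\]
% Infra-Bayesian version: each hypothesis is a convex set (credal set)
% Theta of environments, and expectations become worst-case values over it.
\[
  \forall \Theta \in \mathcal{H}:\quad
  \max_{\pi^{*}} \inf_{h \in \Theta}
    \mathbb{E}^{h,\pi^{*}}\Bigl[\sum_{t=1}^{T} r_t\Bigr]
  \;-\; \inf_{h \in \Theta}
    \mathbb{E}^{h,\pi}\Bigl[\sum_{t=1}^{T} r_t\Bigr]
  \;=\; o(T).
\]
```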
To put the question another way: search is a very basic source of logical uncertainty. Currently, we don't have much of a model of that, except "Bayesian search" (which, so far as I know, comes with no nice regret bounds, though I may simply be ignorant of them). We might need such a model in order to get nice guarantees for systems which employ search internally. Can we get it?
Obviously, we can do the Bayesian-search thing with InfraBayes substituted in, which probably already provides some kind of guarantee that couldn't be gotten otherwise. The challenge, however, is to get the guarantee to carry all the way through to the end result. Roughly, the substitution looks like the sketch below.
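A hedged sketch of that substitution, under my own reading of it: the Bayesian version of the search scores candidates by an (approximate) posterior, while the infra-Bayesian version scores them by their worst-case expected value over a credal set. The names and toy numbers below are illustrative stand-ins, not anything from the original exchange.

```python
def bayesian_search(candidates, log_posterior):
    """Pick the candidate with maximal (approximate) posterior score."""
    return max(candidates, key=log_posterior)

def infra_bayesian_search(candidates, credal_set, expected_value):
    """Pick the candidate with maximal worst-case value over the credal
    set. `credal_set` is a finite stand-in for a convex set of
    environments; `expected_value(c, env)` is c's expected score in env."""
    def worst_case(c):
        return min(expected_value(c, env) for env in credal_set)
    return max(candidates, key=worst_case)

# Toy usage: two environments the credal set refuses to choose between.
candidates = ["cautious", "aggressive"]
credal_set = ["env_a", "env_b"]
values = {("cautious", "env_a"): 1.0, ("cautious", "env_b"): 1.0,
          ("aggressive", "env_a"): 2.0, ("aggressive", "env_b"): -1.0}
best = infra_bayesian_search(candidates, credal_set,
                             lambda c, e: values[(c, e)])
print(best)  # "cautious": maximin prefers the robust candidate
```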
My hope is that we will eventually have computationally feasible algorithms that satisfy provable (or at least conjectured) infra-Bayesian regret bounds for some sufficiently rich hypothesis space. Currently, even in the Bayesian case, we only have such algorithms for poor hypothesis spaces, such as MDPs with a small number of states. We can also rule out such algorithms for some large hypothesis spaces, such as short programs with a fixed polynomial-time bound. In between, there should be some hypothesis space that is small enough to be feasible and rich enough to be useful. Indeed, it seems to me that the existence of such a space is the simplest explanation for the success of deep learning (that is, for the ability to solve a diverse array of problems with relatively simple, domain-agnostic algorithms). But at present I have only speculations about what this space looks like.
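For a sense of scale on "poor hypothesis spaces": in the classical tabular setting, algorithms such as UCRL2 (Jaksch, Ortner & Auer, 2010) achieve, for a communicating MDP with $S$ states, $A$ actions, and diameter $D$, a regret bound on the order of

```latex
\[
  \mathrm{Regret}(T) \;=\; \tilde{O}\!\bigl( D\, S \sqrt{A\, T} \bigr),
\]
```

which is only useful when $S$ is small; nothing comparable is known for hypothesis spaces rich enough to cover what deep learning routinely handles.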