On the computational feasibility of forecasting using gamblers
This is an informal note about an idea which I consider important, but which I’m not planning to start working on immediately (first I want to develop the theory of delegation some more in the realizable setting). Putting it up here as a reminder, a point to bear in mind, and in case someone else wants to pick it up.
Forecasting using gamblers (a.k.a. logical induction generalized outside of logic, a.k.a. dominant forecasting)* can be regarded as a generalization of Bayesian inference; however, it seems qualitatively less computationally tractable than Bayesian inference. Bayesian inference for a modest number of efficiently computable hypotheses (i.e. hypotheses that allow computing the conditional probability distribution of the next observation efficiently; this is a stronger requirement than efficient samplability) is efficiently computable. On the other hand, computing a dominant forecaster seems intractable even for a modest number of efficiently computable gamblers, since it amounts to computing a fixed point, a problem that is often PPAD-complete.
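To make the contrast concrete, here is a minimal sketch (my own illustration, not part of the original note) of sequential Bayesian inference over a small set of hypotheses, each of which efficiently computes the conditional probability of the next observation given the history; the per-step cost is linear in the number of hypotheses.

```python
# Illustrative sketch: sequential Bayesian inference over a modest set of
# hypotheses. Each hypothesis maps (history, observation) to the conditional
# probability of that observation given the history. The per-step cost is
# O(number of hypotheses), in contrast to the fixed-point computation
# required for dominant forecasting with general gamblers.
import numpy as np

def bayes_update(log_weights, hypotheses, history, observation):
    """One Bayesian update step on log-posterior weights."""
    log_liks = np.array([np.log(h(history, observation)) for h in hypotheses])
    log_weights = log_weights + log_liks
    return log_weights - np.logaddexp.reduce(log_weights)  # renormalize

# Toy example: two i.i.d. Bernoulli hypotheses over observations in {0, 1}.
hypotheses = [
    lambda hist, o: 0.3 if o == 1 else 0.7,
    lambda hist, o: 0.8 if o == 1 else 0.2,
]
log_w = np.log(np.ones(2) / 2)  # uniform prior
history = []
for obs in [1, 1, 0, 1, 1]:
    log_w = bayes_update(log_w, hypotheses, history, obs)
    history.append(obs)
print(np.exp(log_w))  # the posterior now favors the second hypothesis
```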
However, it occurred to me that there is a fairly large class of gamblers for which the problem is tractable; in particular, it probably includes gamblers for incomplete models. A gambler is a continuous function $G: \Delta O^\omega \to C(O^\omega)$ (we will ignore the dependence on past forecasts since it is immaterial for our purpose). For now, pretend that $\Delta O^\omega$ is a finite-dimensional space (indeed, when $O$ is finite, the simplest way to get a computable approximation is truncating the space to finite dimension by replacing $O^\omega$ with $O^n$; nevertheless, it would be nice to get the functional analysis right later). Then $C(O^\omega)$ is its cotangent space, and therefore $G$ is a covector field = a section of the cotangent bundle = a differential form of rank 1. It is thus natural to consider the special case when $G$ is exact, i.e. equal to the differential of some function $g: \Delta O^\omega \to \mathbb{R}$. For convenience, we will take a minus sign, i.e. $G = -dg$. Moreover, we can consider the further special case when $g$ is convex. The resulting class of gamblers is closed under linear combinations with nonnegative coefficients, so dominating a countable set of “convex gamblers” reduces to dominating a single convex gambler.
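As a concrete illustration (my own example, not from the original note): on a finite truncation with outcomes $x \in O^n$, fix a reference distribution $\nu$ and weights $w(x) > 0$, and take the convex, differentiable function

$$g(\mu) = \frac{1}{2} \sum_{x} \frac{(\mu(x) - \nu(x))^2}{w(x)}$$

The induced gambler is

$$G(\mu, x) = -\frac{\partial g}{\partial \mu(x)} = \frac{\nu(x) - \mu(x)}{w(x)}$$

i.e. it bets on exactly the outcomes whose forecast probability falls below the reference $\nu$. A combination $\sum_i \alpha_i g_i$ with $\alpha_i \ge 0$ of such functions is again convex, with induced gambler $\sum_i \alpha_i G_i$, which is the closure property used above.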
For convex differentiable $g$, $G = -dg$, and any $\mu \in \Delta O^\omega$ and $x \in O^\omega$, we have

$$G(\mu, x) - \mathrm{E}_\mu[G(\mu)] = \mathrm{E}_\mu[dg(\mu)] - \mathrm{E}_{\delta_x}[dg(\mu)] = -\langle dg(\mu), \delta_x - \mu \rangle$$

where $\langle f, \nu \rangle$ denotes the pairing $\int f \, d\nu$ of $C(O^\omega)$ with signed measures. Now let $\mu^* \in \arg\min g$. Since $\Delta O^\omega$ is convex and $\mu^*$ is a minimizer, first-order optimality along the segment from $\mu^*$ to $\delta_x$ gives $\langle dg(\mu^*), \delta_x - \mu^* \rangle \ge 0$, hence $G(\mu^*, x) \le \mathrm{E}_{\mu^*}[G(\mu^*)]$ for every $x \in O^\omega$. Because $\mathrm{E}_{\mu^*}[G(\mu^*)]$ is a $\mu^*$-average of the values $G(\mu^*, x)$, it follows that

$$\operatorname{supp} \mu^* \subseteq \arg\max G(\mu^*)$$
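A quick numerical sanity check of this support condition (my own illustration, not from the note), using a toy quadratic $g$ on a three-outcome simplex and an off-the-shelf solver:

```python
# Numerical check on a 3-outcome simplex: take the convex quadratic
# g(mu) = 0.5 * ||mu - nu||^2 (nu is an arbitrary reference vector, chosen
# here so that the minimizer lies on the boundary of the simplex), minimize
# it, and verify that supp(mu*) is contained in argmax G(mu*), where
# G = -grad g.
import numpy as np
from scipy.optimize import minimize

nu = np.array([0.9, 0.4, -0.3])

def g(mu):
    return 0.5 * np.sum((mu - nu) ** 2)   # convex and differentiable

def G(mu):
    return nu - mu                        # G = -grad g

res = minimize(
    g,
    x0=np.ones(3) / 3,                    # start from the uniform forecast
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq", "fun": lambda mu: mu.sum() - 1.0}],
)
mu_star = res.x                           # approximately [0.75, 0.25, 0.0]
payoff = G(mu_star)
support = mu_star > 1e-6
argmax = payoff > payoff.max() - 1e-6
print(np.round(mu_star, 3))
print(bool(np.all(argmax[support])))      # True: supp(mu*) inside argmax G(mu*)
```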
Thus, we have reduced the problem to convex optimization, which is well-studied and tractable! Specifically, for an incomplete model $\Phi$, the associated function $g$ is something like a convex function of the Kantorovich-Rubinstein distance from $\Phi$, except that the KR distance is not differentiable. This should be possible to remedy either by “smoothing” the KR distance or by considering subgradients and multivalued gamblers**.
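A remark of my own on the subgradient route (a sketch of how the nonsmooth case might go, not worked out in the note): the argument above only uses first-order optimality, which has a standard nonsmooth analogue. If $g$ is convex but not differentiable and $\mu^*$ minimizes $g$ over $\Delta O^\omega$, then under mild regularity there is a subgradient $\xi \in \partial g(\mu^*)$ with $\langle \xi, \nu - \mu^* \rangle \ge 0$ for every $\nu \in \Delta O^\omega$; taking $-\xi$ as the selected value of the multivalued gambler $G(\mu^*) = -\partial g(\mu^*)$ reproduces the same support condition without any smoothing.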
*We need a better name for this. “Logical induction” sounds specific to formal logic, and “dominant forecasting” is too generic. My ideas so far are “mercantile forecasting” and “Dutch forecasting”. Also, we might want a special name for the incomplete models version as opposed to the fully general gamblers version.
**More generally, it seems possible to incorporate multivalued gamblers into the formalism e.g. using Lassonde’s fixed point theorem.