I’m really happy to see this. I’ve had similar thoughts about the Good Regulator Theorem, but didn’t take the time to write them up or really pursue the fix.
Marginally related: my hope at some point was to fix the Good Regulator Theorem and then integrate it with other representation theorems, to come up with a representation theorem which derived several things simultaneously:
1. Probabilistic beliefs (or some appropriate generalization).
2. Expected utility theory (or some appropriate generalization).
3. A notion of “truth” based on map/territory correspondence (or some appropriate modification). This is the Good Regulator part.
The most ambitious version I can think of would address several more questions:
A. Some justification for the basic algebra. (I think sigma-algebras are probably not the right algebra to base beliefs on. Something resembling linear logic might be better, for reasons we’ve discussed privately; that’s very speculative, of course. Ideally, the right algebra should be derived from considerations arising in the construction of the representation theorem, rather than attempting to force any outcome top-down.) This is related to the question of what the right logic is, and should touch on questions of constructivism vs platonism. Due to point #3, it should also touch on formal theories of truth, particularly if we can manage a theorem about embedded agency rather than Cartesian (dualistic) agency.
B. It should improve on CCT by representing the full preference ordering, rather than only the optimal policy. This may or may not lead to InfraBayesian beliefs/expected values (the InfraBayesian representation theorem being a generalization of CCT which represents the whole preference ordering).
C. It should ideally deal with logical uncertainty, not just the logically omniscient case. This is hard. (But your representation theorem for logical induction is a start.) Or, failing that, it should at least deal with a logically omniscient version of Radical Probabilism, ie address the radical-probabilist critique of Bayesian updating. (See my post Radical Probabilism; currently typing on a phone, so getting links isn’t convenient.)
D. Obviously it would ideally also deal with questions of CDT vs EDT (ie present a solution to the problem of counterfactuals).
E. And deal with tiling problems, perhaps as part of the basic criteria.
I think sigma-algebras are probably not the right algebra to base beliefs on. Something resembling linear logic might be better for reasons we’ve discussed privately; that’s very speculative of course. Ideally the right algebra should be derived from considerations arising in construction of the representation theorem, rather than attempting to force any outcome top-down.
Have you elaborated on this somewhere, or can you link to some resource on why linear logic is a better algebra for beliefs than a sigma-algebra?
More or less quoting from autogenerated subtitles:
So, this remains very speculative despite years of thinking about it, and I’m not saying something like “linear logic is just the right thing”. I feel there is something deeply wrong about linear logic as well, but I’ll give my argument.
So my pitch is like this: we want to understand where the algebra comes from. If it is a sigma-algebra, can we justify that? And if it’s not, what is the appropriate thing we get instead, the thing that naturally falls out? My argument that what naturally falls out is probably more like linear logic than like a sigma-algebra goes like this.
As many of you may know, I like logical induction a whole lot. I think it’s right to think of beliefs as not completely coherent; instead, we want systems with approximately coherent beliefs. And how do you do that? A fairly generic way is with a market: any easily computed incoherence will be pumped out of the market by a trader who recognizes it, takes advantage of it for a money pump, and thereby gains wealth, until at some point the trader has enough wealth that it is just enforcing that notion of coherence. I think that’s a great picture. But if we imagine beliefs as having this type signature, of things on a market that can be traded, then the natural algebra for beliefs is basically an algebra of derivatives.
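The money-pump mechanism can be sketched in a toy example (my own illustration, not the actual logical induction algorithm; the function name and prices are made up):

```python
# Toy sketch: a market maker posts prices for a proposition A and its
# negation. If price(A) + price(not-A) != 1, a trader can lock in a
# riskless profit, which is the "pumping out" of incoherence.

def arbitrage_profit(price_a, price_not_a):
    """Guaranteed profit per unit from exploiting incoherent prices.

    A portfolio of one share of A plus one share of not-A costs
    price_a + price_not_a and pays exactly 1 at resolution (exactly
    one of the two pays out), so any deviation of the total cost
    from 1 is a sure profit for the trader.
    """
    total = price_a + price_not_a
    if total < 1:        # both sides too cheap: buy both, collect 1 later
        return 1 - total
    if total > 1:        # both sides too dear: sell both, owe 1 later
        return total - 1
    return 0.0           # coherent prices admit no money pump

print(arbitrage_profit(0.4, 0.4))  # sure profit of ~0.2: incoherent prices
print(arbitrage_profit(0.5, 0.5))  # 0.0: coherent prices, nothing to pump
```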
So we have a bunch of basic goods. Some of them represent probabilities, because we anticipate that they eventually take on the value zero or one. Some of them represent more general expectations, as I’ve been arguing. The goods have values, which are expectations, and then we have ways of composing these goods together to make more goods. I don’t have a completely satisfying picture here that says “Ah! It should be linear logic!”, but if I imagine goods as these kinds of contracts, then I naturally have something like a tensor: if I have good A and good B, then I have a contract which is like “I owe you one A and one B”, and that is like a tensor. I also naturally have shorting, which gives an object like negation: if I have good A, then somebody can short that good. And I can make this kind of argument for the other linear operators, as natural contract types.
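A minimal sketch of the “goods as contracts” picture. This is my own toy encoding, not anything from linear logic proper, and it assumes the simplification that a linear pricing functional values a tensor of goods as a sum and a short position as a sign flip:

```python
# Hypothetical contract algebra: Tensor(A, B) = "I owe you one A and
# one B"; Neg(A) = a short position in A. Under a linear pricing
# functional (an expectation), values compose linearly:
#   value(Tensor(A, B)) = value(A) + value(B)
#   value(Neg(A))       = -value(A)

def value(contract, prices):
    """Price a contract tree given prices (expectations) of basic goods."""
    kind = contract[0]
    if kind == "good":    # a basic traded good, priced directly
        return prices[contract[1]]
    if kind == "tensor":  # owe one of each component: values add
        return value(contract[1], prices) + value(contract[2], prices)
    if kind == "neg":     # short position: value flips sign
        return -value(contract[1], prices)
    raise ValueError(kind)

prices = {"A": 0.25, "B": 0.5}
portfolio = ("tensor", ("good", "A"), ("neg", ("good", "B")))
print(value(portfolio, prices))  # 0.25 - 0.5 = -0.25
```

The interesting question the comment raises is precisely which composition rules (which connectives) arise naturally from such contracts, rather than being imposed.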
If we put together a prediction market, we can force it to use classical logic if we want. That’s what logical induction does: it gives us something that approximates classical probability distributions, but only by virtue of forcing. The market maker is saying “I will allow any trades that enforce the coherence properties associated with classical logic”: things are either true or false, and propositions are assumed to eventually converge to one or zero. We assume there are no unresolved propositions, even though logical induction is designed to deal with logical uncertainty, which means there are some undecidable propositions, since we’re dealing with general propositions in mathematics. So it’s enforcing this classicality with no justification. My hope is: we should think about what falls out naturally from setting up a prediction market when the market maker isn’t obsessed with classical logic. As I said, whatever it is, it seems closer to linear logic.
Not sure this is exactly what you meant by the full preference ordering, but might be of interest: I give the preorder of universally-shared-preferences between “models” here (in section 4).
Basically, it is the Blackwell order, if you extend the Blackwell setting to include a system.
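For the standard (system-free) Blackwell setting, the order can be made concrete: channel K1 sits above K2 exactly when K2 is a garbling of K1, i.e. K2 = K1·M for some stochastic matrix M, so that anyone can simulate K2 by post-processing K1’s output. A minimal sketch for 2x2 channels, with made-up example matrices (this illustrates the classical Blackwell order, not the extended setting with a system):

```python
# Check the Blackwell order for 2x2 channels (rows = conditional
# distributions). K1 is Blackwell-above K2 iff K2 = K1 M for some
# stochastic garbling matrix M. For invertible 2x2 K1 we can solve
# for M directly and test whether it is a valid stochastic matrix.

def garbling_2x2(k1, k2, tol=1e-9):
    """Return the garbling matrix M with k2 = k1 M, or None if none exists."""
    (a, b), (c, d) = k1
    det = a * d - b * c
    if abs(det) < tol:  # k1 not invertible: this simple test doesn't apply
        return None
    inv = [[d / det, -b / det], [-c / det, a / det]]  # k1^{-1}
    m = [[sum(inv[i][k] * k2[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    stochastic = (all(x >= -tol for row in m for x in row)
                  and all(abs(sum(row) - 1) < tol for row in m))
    return m if stochastic else None

perfect = [[1.0, 0.0], [0.0, 1.0]]  # noiseless channel
noisy = [[0.9, 0.1], [0.2, 0.8]]    # a garbled version of it

print(garbling_2x2(perfect, noisy))  # a valid garbling: noise can be added
print(garbling_2x2(noisy, perfect))  # None: noise cannot be post-processed away
```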
For reference, I asked Abram about this live here: https://youtu.be/zIC_YfLuzJ4?si=l5xbTCyXK9UhofIH&t=5546 (the transcript above is from his answer there).
Abram also shared the paper: From Classical to Intuitionistic Probability.