Nice work! I’m not too attached to the particular prior described in the “UDT 1.5” post, and I’m very happy that you’re trying to design something better to solve the same problem. That said, it’s a bit suspicious that you’re conditioning on provability rather than truth, which leads you to use GLS etc. Maybe I just don’t understand the “non-naturalistic” objection?
Also I don’t understand why you have to use both propositional coherence and partitions of truth. Can’t you replace partitions of truth with arbitrary propositional combinations of sentences?
The main reason to think it’s important to be “naturalistic” about the updates is reflective consistency. When there is negligible interaction between possible worlds, an updateless agent should behave like an updateful one. The updateless version of the agent would not endorse updating on ϕ rather than □ϕ! Since it does not trust what its future self proves, it would self-modify to remove such an update if it were hard-wired. So I think updating on provability rather than truth is really the right thing.
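To make that concrete, here's a toy prior (numbers entirely made up, and nothing to do with the actual construction) for an agent that puts a little mass on "provable but false", which is exactly the residual distrust at issue:

```python
# Toy illustration only: a made-up prior over (phi true?, phi provable?) pairs,
# for an agent that doesn't take the soundness of its future proofs for granted.
worlds = [
    # (phi_true, phi_provable, prior probability)
    (True,  True,  0.40),
    (True,  False, 0.35),
    (False, False, 0.20),
    (False, True,  0.05),  # residual distrust: "my prover said yes, but phi is false"
]

def prob_phi_given(event):
    """P(phi | event), conditioning the toy prior on a predicate over worlds."""
    total = sum(pr for (t, p, pr) in worlds if event(t, p))
    return sum(pr for (t, p, pr) in worlds if event(t, p) and t) / total

print(prob_phi_given(lambda t, p: t))  # update on phi itself: 1.0
print(prob_phi_given(lambda t, p: p))  # update on box-phi (a proof was found): ~0.889
```

Hard-wiring the update on ϕ amounts to assuming the "provable but false" world away, which is exactly what the updateless version of the agent refuses to do.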
(Sorry I didn’t notice this comment earlier.)
My intention wasn’t really to use both propositional coherence and partitions of truth—I switched from one to the other because they’re equivalent (at least, the way I did it). Probably would have been better to stick with one.
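To spell the equivalence out the way I'm picturing it (just my gloss, for a finite set of sentences and propositional consistency only): a propositionally coherent P is exactly a mixture over the propositionally consistent truth assignments v, i.e.

$$P(\phi) \;=\; \sum_{v:\ v(\phi)=\top} \mu(v)$$

for some distribution μ over those assignments, and conversely any such mixture is coherent. The probability of an arbitrary propositional combination of sentences is likewise determined by which assignments satisfy it, so constraints phrased either way carve out the same class of distributions, which is why switching between them was harmless.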
I do think this notion of ‘naturalistic’ is important. The idea is that if another computer implements the same logic as your internal theorem prover, and you know this, you should treat information coming from it just the same as you would your own. This seems like a desirable property, if you can get it.
I can understand being suspicious. I’m not claiming that using GLS gives some magical self-reference properties due to knowing what is true about provability. It’s more like using PA + Con(PA) to reason about PA. It’s much more restricted than that, though; it’s a probabilistic reasoning system that believes GLS, reasoning about PA and provability in PA. In any case, you won’t be automatically trusting external theorem provers in this “higher” system, only in PA. However, GLS is decidable, so trusting what external theorem provers claim about it is a non-issue.
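For reference, the version of GLS I mean is the standard one (following, e.g., Boolos's The Logic of Provability, as I remember the presentation): take all theorems of GL, add the reflection axioms □A → A, and close under modus ponens only, with no necessitation rule. By Solovay's second completeness theorem this proves exactly the modal principles that are actually true of PA-provability, and, if I remember the reduction right, deciding it only takes one GL query:

$$\mathrm{GLS} \vdash A \quad\iff\quad \mathrm{GL} \vdash (\Box B_1 \to B_1) \wedge \dots \wedge (\Box B_n \to B_n) \to A,$$

where □B₁, …, □Bₙ are the subformulas of A of the form □B. Since GL is decidable, so is GLS.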
What’s the use? Aside from giving a quite pleasing (to me at least) solution to the paradox of ignorance, this seems to me to be precisely what’s needed for impossible possible worlds. In order to be uncertain about logic itself, there needs to be some “structure” that we are uncertain about: something that is really chosen deterministically, according to real logic, but which we pretend is chosen randomly, in order to get a tractable distribution to reason about (which then includes the impossible possible worlds).
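To gesture at the kind of "pretend it's random" process I have in mind, here's a minimal toy sketch (my own, with hypothetical sentence labels, not the actual construction): flip a coin for each sentence's truth value in turn, enforcing only propositional consistency among the sentences explicitly listed, so that assignments real logic would rule out still get positive probability:

```python
import itertools
import random

# Toy sketch only (hypothetical sentence labels): truth values are "chosen at
# random" subject to nothing but propositional consistency among the listed
# sentences, so worlds that real logic would rule out still get positive
# probability -- the impossible possible worlds.

ATOMS = ["G", "ProvG"]  # hypothetical: G = "Goldbach", ProvG = "PA proves Goldbach"

# Sentences as (label, propositional evaluator over an atom assignment).
SENTENCES = [
    ("G",          lambda v: v["G"]),
    ("ProvG",      lambda v: v["ProvG"]),
    ("ProvG -> G", lambda v: (not v["ProvG"]) or v["G"]),
]

def consistent(partial):
    """Is some atom assignment propositionally compatible with the truth
    values chosen so far? (Brute force; fine for a toy.)"""
    for bits in itertools.product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(f(v) == val for (_, f), val in partial):
            return True
    return False

def random_world(rng):
    """Assign truth values one sentence at a time, flipping a coin and keeping
    only propositionally consistent choices. Nothing here respects arithmetic."""
    partial = []
    for sentence in SENTENCES:
        val = rng.random() < 0.5
        if not consistent(partial + [(sentence, val)]):
            val = not val  # the other value is always propositionally fine
        partial.append((sentence, val))
    return {label: val for (label, _), val in partial}

rng = random.Random(0)
worlds = [random_world(rng) for _ in range(10000)]

# Roughly a quarter of the sampled worlds make "ProvG" true and "G" false:
# propositionally consistent, but logically impossible if ProvG really means
# PA-provability and PA is sound.
print(sum(w["ProvG"] and not w["G"] for w in worlds) / len(worlds))
```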