What properties do you want your logical probability distribution to have? I’d prefer building something with these properties, rather than jumping to answers first.
You don’t seem to worry about computational resources, but I still recommend my posts. You focus on using other agents’ beliefs as evidence, but you also want to avoid Löb’s theorem, and I think you might violate it if you prove R-+(s,1).
What properties do you want your logical probability distribution to have?
In a consistent theory, my distribution assigns probability 1 to provably true statements and probability 0 to provably false statements. In general, I think it satisfies the interval version of the property P(x) = P(x & y) + P(x & not-y) (I think it should follow from my treatment of 0-order equivalent sentences as equivalent, but I have to think about it). These two properties would mean that for consistent theories it is similar to “coherent” assignments in the sense of Christiano et al., though it only assigns probability intervals rather than point probabilities. In inconsistent theories, it assigns high probability to sentences with short evidence in favor and long evidence against, which is what we want to happen for UDT to work. There is more to be said about the choice of encoding for evidence, and I hope to write about it later.
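For concreteness, here is one way the interval version could be cashed out. The exact definition isn’t quoted in this thread, so the overlap formulation below is my own guess at the intended condition, not the post’s definition:

```latex
% Writing [P^-(x), P^+(x)] for the interval assigned to sentence x, one
% plausible interval analogue of P(x) = P(x & y) + P(x & not-y) is that the
% interval for x overlaps the summed interval for the two conjunctions:
\[
  P^-(x \wedge y) + P^-(x \wedge \neg y) \;\le\; P^+(x),
  \qquad
  P^-(x) \;\le\; P^+(x \wedge y) + P^+(x \wedge \neg y).
\]
% When every interval collapses to a point, the two inequalities together
% recover the exact identity.
```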
You don’t seem to worry about computational resources, but I still recommend my posts.
I’ve read some of them! You can get a computable approximation of my probability intervals if you limit evidence length to some finite D. Of course the time complexity would be exponential in D.
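As a toy illustration of where the exponential blow-up comes from (this is a sketch I made up, not the post’s actual construction: is_evidence_for, is_evidence_against, and the 2**-length weighting are all placeholder assumptions):

```python
from itertools import product

ALPHABET = "01"  # assumed binary encoding of evidence strings

def interval_estimate(s, D, is_evidence_for, is_evidence_against):
    """Crude (p_min, p_max) estimate for sentence s from evidence of length <= D.

    The checker callbacks and the combination rule (weight 2**-length, then an
    odds ratio) are illustrative placeholders only. Enumerating every string
    up to length D makes the runtime Theta(|ALPHABET|**D): exponential in D.
    """
    w_for = w_against = 0.0
    for length in range(1, D + 1):
        for chars in product(ALPHABET, repeat=length):
            e = "".join(chars)
            if is_evidence_for(s, e):
                w_for += 2.0 ** -length      # shorter evidence weighs more
            if is_evidence_against(s, e):
                w_against += 2.0 ** -length
    if w_for == 0.0 and w_against == 0.0:
        return 0.0, 1.0  # no evidence either way: the vacuous interval
    p = w_for / (w_for + w_against)
    slack = 2.0 ** -D    # unexplored longer evidence could still move things
    return max(0.0, p - slack), min(1.0, p + slack)
```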
...you also want to avoid Löb’s theorem, and I think you might violate it if you prove R-+(s,1).
What do you mean “violate it”? How can you violate a theorem?
it assigns high probability to sentences with short evidence in favor and long evidence against, which is what we want to happen for UDT to work.
Interesting. I’m not convinced that that’s required (e.g. for the 5 and 10 problem). If you read Ingredients of TDT with an eye towards that, I think there’s a strong case that using causal surgery rather than logical implication solves the problem.
What do you mean “violate it”? How can you violate a theorem?
Fair enough. I meant that if you prove R-+(s,1) for a consistent set of axioms, then I think you violate the consistency condition imposed by Löb’s theorem once you assert “If Pmin(s)=1, then s.” Hmm. I guess this is not really a problem with the content of your post; it’s more about the form of the additional axiom “GL”.
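For reference, the theorem and the axiom being discussed. The statements of Löb’s theorem and the GL axiom are standard; the closing comment, which casts Pmin in the role of the provability predicate, is my gloss on the worry above:

```latex
% Löb's theorem: for a consistent theory T containing enough arithmetic,
% if T proves Prov_T(⌜s⌝) -> s, then T proves s:
\[
  T \vdash \mathrm{Prov}_T(\ulcorner s \urcorner) \rightarrow s
  \;\Longrightarrow\;
  T \vdash s
\]
% Its modal form is the characteristic axiom of the provability logic GL:
\[
  \Box(\Box s \rightarrow s) \rightarrow \Box s
\]
% The worry: if "Pmin(s) = 1" behaves like Prov_T(⌜s⌝), then adopting
% "If Pmin(s) = 1, then s" as an axiom schema would, via Löb, force T to
% prove every such s.
```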
Interesting. I’m not convinced that that’s required (e.g. for the 5 and 10 problem). If you read Ingredients of TDT with an eye towards that, I think there’s a strong case that using causal surgery rather than logical implication solves the problem.
As far as I know, TDT was mostly abandoned in favor of UDT. In particular, I don’t think there is a well-defined recipe for describing a given process as a causal diagram with pure-computation nodes. But I might be missing something.
Fair enough. I meant that if you prove R-+(s,1) for a consistent set of axioms, then I think you violate the consistency condition imposed by Löb’s theorem once you assert “If Pmin(s)=1, then s.”
I’m not sure what you’re saying here. The usual notion of consistency doesn’t apply to my system, since it works fine even for inconsistent theories. I believe that for consistent theories the energy minimum is always 0, which provides a sort of analogue of consistency in ordinary logics.