The idea is similar in concept to fuzzy theories of truth (which try to assign fractional values to paradoxes), but it succeeds, to a surprising degree, where those fail (i.e., in allowing quantifiers). Like fuzzy theories and many other extra-value theories, probabilistic self-reference is a “vague theory” of sorts; but it seems to be minimally vague (the vagueness comes down to infinitesimal differences!).
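To make the “minimally vague” point concrete, here is the standard probabilistic-liar calculation (a sketch in my own notation, assuming a reflection principle of the form a < P(φ) < b ⟹ P(a < P(⌜φ⌝) < b) = 1 and a rational p in [0,1]):

```latex
\begin{align*}
&\text{Diagonalization gives a sentence } G \text{ with } G \leftrightarrow \big(\mathbb{P}(\ulcorner G\urcorner) < p\big).\\
&\text{If } \mathbb{P}(G) < p: \text{ choosing rationals } a < \mathbb{P}(G) < b < p, \text{ reflection yields }
  \mathbb{P}\big(\mathbb{P}(\ulcorner G\urcorner) < p\big) = 1,\\
&\qquad\text{i.e.\ } \mathbb{P}(G) = 1, \text{ contradicting } \mathbb{P}(G) < p \le 1.\\
&\text{If } \mathbb{P}(G) > p: \text{ choosing rationals } p < a < \mathbb{P}(G) < b, \text{ reflection yields }
  \mathbb{P}\big(\mathbb{P}(\ulcorner G\urcorner) > p\big) = 1,\\
&\qquad\text{i.e.\ } \mathbb{P}(\neg G) = 1 \text{ and } \mathbb{P}(G) = 0, \text{ contradicting } \mathbb{P}(G) > p \ge 0.\\
&\text{So } \mathbb{P}(G) = p \text{ exactly, while the system itself can only locate } \mathbb{P}(\ulcorner G\urcorner) \text{ within}\\
&\qquad\text{arbitrarily small intervals around } p; \text{ the residual vagueness is infinitesimal.}
\end{align*}
```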
Comparing it to my version of probabilities over logic, we see that this system will trust its axioms to an arbitrarily high probability, whereas if my system were given some axioms such as PA or ZFC, it would assign probability 1 to those axioms, but assign a nonzero probability to the axioms being inconsistent. (This is not a direct comparison: I’m comparing a self-referential probability with a probability-of-inconsistency. To make a direct comparison, we would have to establish whether the arbitrarily high self-referential probability entailed that the system placed an arbitrarily low probability on its axioms being inconsistent. An additional question would be whether it believed that the axioms plus the reflection principle are probably consistent.)
Comparing to my current theory of truth, the idea of probabilistic self-reference seems to achieve almost everything that is classically desired of a theory of truth (to within an epsilon!). My current understanding, on the other hand, says that some of those things were wrong-headed (we should not have desired them in the first place); in particular, allowing a truth predicate in a context where the diagonal lemma holds doesn’t make sense, because it leads very directly to the creation of nonsense expressions which intuitively seem to have no meaning. So we could ask: what is the benefit of successfully providing truth-values for these nonsense expressions? Is it not better to rule out the nonsense?
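For concreteness, the kind of nonsense expression in question is the classical liar, which the diagonal lemma delivers as soon as a truth predicate is available in the same language (a standard sketch, in my notation):

```latex
\begin{align*}
&\text{Diagonal lemma: there is a sentence } L \text{ with } \vdash L \leftrightarrow \neg\,\mathrm{True}(\ulcorner L\urcorner).\\
&\text{T-schema for the truth predicate: } \vdash \mathrm{True}(\ulcorner L\urcorner) \leftrightarrow L.\\
&\text{Chaining the two: } \vdash L \leftrightarrow \neg L, \text{ a contradiction.}
\end{align*}
```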
The reply seems to be: probabilistic self-reference may include a bunch of nonsense, but it does so at little cost, and we seem to get more (desirable) structure with fewer assumptions in this way. (The test would be to try to build up foundational mathematics and see how natural each approach seemed.)
I liked your paper at AGI (we were discussing the same ideas at MIRI at that time; I guess it’s in the air at the moment). Our system is a definition of truth rather than a definition of beliefs. In fact P( “axioms inconsistent” ) > 0 in our formalism. There is definitely much more to say on this topic (I’m currently optimistic about resolving incompleteness up to some arbitrarily small epsilon, but it involves other ideas.)
I agree that some self-referential sentences have no meaning, but I think accepting them is philosophically unproblematic, and there are lots of meaningful self-referential sentences (e.g. believing “everything I believe is true”) which I’d prefer not to throw out with the bathwater.
Indeed, it seems relatively easy to make progress in this direction given the current state of things! (I would have been surprised if you had not been thinking along similar lines to my paper.)
In fact P( “axioms inconsistent” ) > 0 in our formalism.
Ah! Interesting. Why is that?
I did construct a distribution in which this is not the case, but it was not particularly satisfying. If (during the random theory generation) you block the creation of existential statements until an example has already been introduced, then you seem to get acceptable results. However, the probability distribution does not seem obviously correct.
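A minimal sketch of that modification, with a toy sentence language and a deliberately naive consistency check (both are illustrative placeholders of mine, not the actual construction):

```python
import random

# Toy sketch: sample candidate sentences one at a time, keep those that do not
# directly contradict the sentences accepted so far, and refuse to add an
# existential claim until a witnessing instance is already in the theory.

PREDICATES = ["P", "Q"]
CONSTANTS = ["a", "b", "c"]

def random_sentence(rng):
    """Sample a candidate sentence from a tiny toy language."""
    kind = rng.choice(["instance", "neg_instance", "exists"])
    pred = rng.choice(PREDICATES)
    if kind == "exists":
        return ("exists", pred)            # "there is an x with pred(x)"
    const = rng.choice(CONSTANTS)
    return (kind, pred, const)             # pred(const) or its negation

def consistent_with(theory, sentence):
    """Naive check: reject only direct contradictions with accepted sentences."""
    if sentence[0] == "instance":
        return ("neg_instance", sentence[1], sentence[2]) not in theory
    if sentence[0] == "neg_instance":
        return ("instance", sentence[1], sentence[2]) not in theory
    return True                            # existentials never clash in this toy

def has_witness(theory, pred):
    """True if some instance pred(const) has already been accepted."""
    return any(s[0] == "instance" and s[1] == pred for s in theory)

def generate_theory(n_attempts=100, seed=0):
    rng = random.Random(seed)
    theory = []
    for _ in range(n_attempts):
        s = random_sentence(rng)
        # The modification discussed above: block existential statements
        # until an example (witness) has already been introduced.
        if s[0] == "exists" and not has_witness(theory, s[1]):
            continue
        if s not in theory and consistent_with(theory, s):
            theory.append(s)
    return theory

if __name__ == "__main__":
    for sentence in generate_theory():
        print(sentence)
```

In the actual construction, of course, consistency would be judged against full first-order consequence rather than this surface check; that is where the questions about whether the resulting distribution is “obviously correct” arise.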