I liked your paper at AGI (we were discussing the same ideas at MIRI at that time; I guess it’s in the air at the moment). Our system is a definition of truth rather than a definition of beliefs. In fact P( “axioms inconsistent” ) > 0 in our formalism. There is definitely much more to say on this topic (I’m currently optimistic about resolving incompleteness up to some arbitrarily small epsilon, but it involves other ideas.)
I agree that some self-referential sentences have no meaning, but I think accepting them is philosophically unproblematic, and there are lots of meaningful self-referential sentences (e.g. believing “everything I believe is true”) which I’d prefer not to throw out with the bathwater.
Indeed, it seems relatively easy to make progress in this direction given the current state of things! (I would have been surprised if you had not been thinking along similar lines to my paper.)
In fact P( “axioms inconsistent” ) > 0 in our formalism.
Ah! Interesting. Why is that?
I did construct a distribution in which this is not the case, but it was not particularly satisfying. If (during the random theory generation) you block the creation of existential statements until an example has already been introduced, then you seem to get acceptable results. However, the probability distribution does not seem obviously correct.
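To make the guarded-generation idea concrete, here is a minimal toy sketch in Python. It is not the actual construction from either paper; the Sentence class, the predicate/constant vocabulary, and the sampling probabilities are all invented stand-ins. The only point it illustrates is the rule above: existential sentences are blocked during random theory generation until a witnessing instance has already been added.

```python
import random
from dataclasses import dataclass
from typing import Optional

# Toy sketch of "guarded" random theory generation (hypothetical, for illustration only):
# an existential sentence is skipped until some already-accepted sentence witnesses it.

@dataclass(frozen=True)
class Sentence:
    predicate: str            # e.g. "P"
    term: Optional[str]       # a constant like "c3", or None for "exists x. P(x)"

    @property
    def is_existential(self) -> bool:
        return self.term is None

PREDICATES = ["P", "Q", "R"]
CONSTANTS = [f"c{i}" for i in range(5)]

def sample_sentence(rng: random.Random) -> Sentence:
    pred = rng.choice(PREDICATES)
    # Half the time propose an existential claim, otherwise a concrete instance.
    if rng.random() < 0.5:
        return Sentence(pred, None)
    return Sentence(pred, rng.choice(CONSTANTS))

def has_witness(theory: list[Sentence], existential: Sentence) -> bool:
    # A witness is any accepted non-existential sentence with the same predicate.
    return any(s.predicate == existential.predicate and not s.is_existential
               for s in theory)

def generate_theory(n: int = 20, seed: int = 0) -> list[Sentence]:
    rng = random.Random(seed)
    theory: list[Sentence] = []
    while len(theory) < n:
        s = sample_sentence(rng)
        if s.is_existential and not has_witness(theory, s):
            continue  # block existentials until an example has been introduced
        if s not in theory:
            theory.append(s)
    return theory

if __name__ == "__main__":
    for s in generate_theory():
        print(f"exists x. {s.predicate}(x)" if s.is_existential else f"{s.predicate}({s.term})")
```

As the comment notes, a guard like this can keep the generated theories well behaved, but it changes the sampling process in a way that is not obviously the "right" prior over theories.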