Can the AI recognize such situations and say “no way, this formal system doesn’t seem to describe my regular integers”?
It need not -- asking whether a formal system “describes my regular integers” is a disguised query for whether it satisfies some set of properties that happen to be useful. All the AI needs to be able to do is evaluate how effectively different models describe whatever it’s trying to use them to describe.
Unfortunately, if we have an arithmetical statement that we can neither prove nor disprove so far, your idea would have us believe that it’s true and that its negation is also true. That doesn’t look like correct Bayesian reasoning to me!
I don’t see why not. It’s not that we would believe the statement and its negation are both true; rather, we would believe that the statement is true with probability x and false with probability 1-x, as usual.
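To make this concrete, here is a toy Bayesian update with made-up likelihoods (the 0.1 figure is purely illustrative, not a claim about the actual evidential strength), showing how the evidence “no contradiction has been found” can raise x without ever making both a statement and its negation probable:

```python
# Toy Bayesian update for Con(ZFC), with made-up numbers.
# Hypotheses: C = "ZFC is consistent", not-C = "ZFC is inconsistent".
# Evidence E = "decades of search have produced no contradiction".
prior_C = 0.5

# If ZFC is consistent, no contradiction exists to find; if it is
# inconsistent, we assume (for illustration) a 90% chance one would
# have been found by now.
p_E_given_C = 1.0
p_E_given_notC = 0.1  # assumed value, for illustration only

p_E = prior_C * p_E_given_C + (1 - prior_C) * p_E_given_notC
post_C = prior_C * p_E_given_C / p_E

print(post_C)      # ≈ 0.909: belief in Con(ZFC) goes up
print(1 - post_C)  # ≈ 0.091: belief in its negation goes down
```

The two updates are one and the same move: whatever raises the probability of the statement lowers the probability of its negation by exactly the same amount, so coherence is never threatened.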
asking whether a formal system “describes my regular integers” is a disguised query for whether it satisfies some set of properties that happen to be useful
komponisto, did you leave my question unanswered because you don’t know the answer, or because you thought the question stupid and decided to bail out? If you can dissolve my confusion, please do.
Sorry! I didn’t have an answer immediately, but thought I might come up with one after a day or two. Unfortunately, by that time, I had forgotten about the question!
Anyway, the way I’d approach it is to ask what is wrong, from our point of view, with a given nonstandard theory.
Actually, I just thought of something while writing this comment. Take your example of adding a “PA is inconsistent” axiom to PA. Yes, we could add such an axiom, but why bother? What use do we get from this new system that we didn’t already get from PA? If the answer is “nothing”, then we can invoke a simplicity criterion. On the other hand, if there is some situation where this system is actually convenient, then there is indeed nothing “wrong” with it, and we wouldn’t want an AI to think that there was.
(Edit: I’ll try to make sure I reply more quickly next time.)
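As background on why the “PA is inconsistent” axiom can even be consistently added in the first place (assuming PA itself is consistent), the standard argument runs through Gödel’s second incompleteness theorem:

```latex
% If PA is consistent, it cannot prove its own consistency (Goedel II),
% so adding the negation of Con(PA) cannot produce a contradiction:
\[
\mathrm{Con}(\mathrm{PA})
\;\Longrightarrow\;
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
\;\Longrightarrow\;
\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA}) \text{ is consistent.}
\]
```

The middle step is exactly Gödel II; the last step holds because an inconsistency in PA + ¬Con(PA) would amount to a PA-proof of Con(PA). The resulting theory is consistent but has only nonstandard models, which is why the “what use is it?” question above is the right one to ask.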
It’s not that we would believe the statement and its negation are both true; rather, we would believe that the statement is true with probability x and false with probability 1-x, as usual.
Then I don’t understand why you said this earlier:
we expect that if ZFC were inconsistent, we would have found a contradiction by now
The consistency of ZFC is an arithmetical statement. You say we haven’t yet found a disproof for it, so we should believe it more; but we haven’t found a disproof of its negation either, so we should believe it less! Isn’t this incoherent by Bayesian lights? Or am I misunderstanding something about your idea?
asking whether a formal system “describes my regular integers” is a disguised query for whether it satisfies some set of properties that happen to be useful

What are these properties?