Sure, that’s always true. But sometimes it’s also true that A&¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I’m not very confident what’s right; ordinary imagination is probably just misleading here.
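(For concreteness, a minimal sketch of the step being leaned on here, with “Stops” and “Continues” as placeholder names for the two outcomes; it is just explosion, nothing specific to the bridge setup:

\[
\mathrm{PA} \vdash \bot \;\Rightarrow\; \mathrm{PA} \vdash \mathrm{Stops} \wedge \mathrm{Continues} \quad\text{and}\quad \mathrm{PA} \vdash \neg(\mathrm{Stops} \wedge \mathrm{Continues}),
\]

since from a contradiction every sentence is provable.)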
I think you’re still just confusing levels here. If you’re reasoning using PA, you’ll hold open the possibility that PA is inconsistent, but you won’t hold open the possibility that A&¬A. You believe the world is consistent. You’re just not so sure about PA.
I’m wondering what you mean by “hold open the possibility”.
If you mean “keep some probability mass on this possibility”, then I think most reasonable definitions of “keep your probabilities consistent with your logical beliefs” will forbid this.
If you mean “hold off on fully believing things which contradict the possibility”, then obviously the agent would hold off on fully believing PA itself.
Etc for other reasonable definitions of holding open the possibility (I claim).
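To make the first reading concrete, here is the constraint I have in mind, as a sketch: suppose the agent’s P is a probability function (so P(¬φ) = 1 − P(φ)) and respects PA-provability (PA ⊢ φ implies P(φ) = 1). Then

\[
\mathrm{PA} \vdash \neg(A \wedge \neg A) \;\Rightarrow\; P(A \wedge \neg A) \;=\; 1 - P(\neg(A \wedge \neg A)) \;=\; 0,
\]

so no mass stays on that possibility, regardless of what the agent thinks about Con(PA).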
If you’re reasoning using PA, you’ll hold open the possibility that PA is inconsistent, but you won’t hold open the possibility that A&¬A. You believe the world is consistent. You’re just not so sure about PA.
Do you? This sounds like PA is not actually the logic you’re using. Which is realistic for a human. But if PA is indeed inconsistent, and you don’t have some further-out system to think in, then what is the difference to you between “PA is inconsistent” and “the world is inconsistent”? In both cases you just believe everything and its negation. This also goes with what I said about thought and perception (which stand in for “logic” and “the world”) not being separated in this model. So I suppose that is where you would look when trying to fix this.
If you mean “hold off on fully believing things which contradict the possibility”, then obviously the agent would hold off on fully believing PA itself.
You do fully believe in PA. But it might be that you also believe its negation. Obviously this doesn’t go well with probabilistic approaches.
This sounds like PA is not actually the logic you’re using.
Maybe this is the confusion. I’m not using PA. I’m assuming (well, provisionally assuming) PA is consistent.
If PA is consistent, then an agent using PA believes the world is consistent—in the sense of assigning probability 1 to tautologies, and also assigning probability 0 to contradictions.
(At least, 1 to tautologies it can recognize, and 0 to contradictions it can recognize.)
Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don’t know whether PA is consistent, but believe the world is consistent.
If PA were inconsistent, then we would need more assumptions to tell us how probabilities are assigned. E.g., maybe the agent “respects logic” in the sense of assigning 0 to refutable things. Then it assigns 0 to everything. Maybe it “respects logic” in the sense of assigning 1 to provable things. Then it assigns 1 to everything. (But we can’t have both. The two notions of “respecting logic” are equivalent if the underlying logic is consistent, but not otherwise.)
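Written out as a sketch, with R0 and R1 as labels for the two readings:

\[
(\mathrm{R}_0)\ \ \mathrm{PA} \vdash \neg\varphi \;\Rightarrow\; P(\varphi) = 0
\qquad\qquad
(\mathrm{R}_1)\ \ \mathrm{PA} \vdash \varphi \;\Rightarrow\; P(\varphi) = 1.
\]

If PA is inconsistent, every sentence is both provable and refutable, so (R0) forces P ≡ 0 while (R1) forces P ≡ 1, and nothing satisfies both. For a P that also obeys P(¬φ) = 1 − P(φ), substituting ¬φ turns each condition into the other, which (when PA is consistent, so such a P can exist) is the sense in which the two notions coincide.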
But such an agent doesn’t have much to say for itself anyway, so it’s more interesting to focus on what the consistent agent has to say for itself.
And I think the consistent agent very much does not “hold open the possibility” that the world is inconsistent. It actively denies this.
Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don’t know whether PA is consistent, but believe the world is consistent.
There are two ways to express “PA is consistent”. The first is ∀A¬(A∧¬A). The other is a complicated construct about Gödel-encodings. Each has a corresponding version of “the world is consistent” (indeed, this “world” is inside PA, so they are basically equivalent). The agent using PA will believe only the former. The Troll expresses the consistency of PA using provability logic, which, if I understand correctly, has the Gödelization built in.
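For reference, the two renderings as I understand them, in a rough sketch (“0 = 1” is just a convenient canonical contradiction):

\[
\text{(internal schema)}\quad \neg(A \wedge \neg A)\ \text{for each sentence } A,
\qquad
\text{(arithmetized)}\quad \mathrm{Con}(\mathrm{PA}) := \neg\,\mathrm{Prov}_{\mathrm{PA}}(\ulcorner 0 = 1 \urcorner).
\]

PA proves every instance of the first, but by the second incompleteness theorem it does not prove the second (assuming it is in fact consistent), which is why the agent can believe the former without believing the latter.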