Now you might think this is dumb, because it's impossible to see that. But why do you think it's impossible? Only because it's inconsistent. But if you're using PA, you must believe PA really might be inconsistent, so you can't believe it's impossible.
This part, at least, I disagree with. If I’m using PA, I can prove that ¬(A&¬A). So I don’t need to believe PA is consistent to believe that the ball won’t stop rolling and also continue rolling.
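To make that point concrete, here is a minimal sketch (Lean 4, using a generic proposition rather than any formalization of PA): the law of non-contradiction is a theorem of the logic itself for any particular sentence, with no appeal to Con(PA).

```lean
-- Minimal sketch: ¬(A ∧ ¬A) is provable for an arbitrary proposition A,
-- without assuming Con(PA) or any other consistency statement.
theorem no_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1  -- apply the proof of ¬A to the proof of A
```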
On the other hand, I have no direct objection to believing you can control the consistency of PA by doing something other than what PA says you will do. It's not a priori absurd to me. I have two objections to the line of thinking, but both are indirect.
It seems absurd to think that if you cross the bridge, it will definitely collapse. It seems particularly absurd that, in some sense, the reason you think that is just because you think that.
From a pragmatic/consequentialist perspective, thinking in this way seems to result in poor outcomes.
Sure, that's always true. But sometimes it's also true that A&¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident what's right; ordinary imagination is probably just misleading here.
It seems particularly absurd that, in some sense, the reason you think that is just because you think that.
The facts about what you think are theorems of PA. Judging from the outside: clearly if an agent with this source code crosses the bridge, then PA is inconsistent. So, I think the agent is reasoning correctly about the kind of agent it is. I agree that the outcome looks bad, but it's not clear that the agent is “doing something wrong”. For comparison, if we built an agent that would only act if it could be sure its logic is consistent, it wouldn't do anything, but it's not doing anything wrong. It's looking for logical certainty, and there isn't any, but that's not its fault.
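For readers who want the step “if this agent crosses, PA is inconsistent” spelled out, here is a compressed sketch of the usual Löbian argument. The payoffs (+10 for crossing safely, -10 for a blown-up bridge) and the troll's trigger (the bridge blows up exactly when PA is inconsistent) are assumptions standing in for the original setup, not details quoted from this exchange.

```latex
% Compressed sketch of the L\"obian argument; payoffs and the troll's trigger are assumed.
\begin{enumerate}
  \item Reason inside PA. Suppose $\Box(\mathrm{cross} \to U = -10)$, and suppose $\mathrm{cross}$.
  \item By the agent's source code, it crosses only on a PA-proof that crossing is better,
        so PA proves both $\mathrm{cross} \to U = +10$ and $\mathrm{cross} \to U = -10$;
        since the agent does cross (a fact about its own computation that PA can verify),
        PA is inconsistent.
  \item The troll then blows up the bridge, so $U = -10$.
  \item Discharging the suppositions:
        $\vdash \Box(\mathrm{cross} \to U = -10) \to (\mathrm{cross} \to U = -10)$.
  \item By L\"ob's theorem, $\vdash \mathrm{cross} \to U = -10$, so the agent refuses to cross.
\end{enumerate}
```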
Sure, that's always true. But sometimes it's also true that A&¬A. So unless you believe PA is consistent, you need to hold open the possibility that the ball will both (stop and continue) and (do at most one of those). But of course you can also prove that it will do at most one of those. And so on. I'm not very confident what's right; ordinary imagination is probably just misleading here.
I think you’re still just confusing levels here. If you’re reasoning using PA, you’ll hold open the possibility that PA is inconsistent, but you won’t hold open the possibility that A&¬A. You believe the world is consistent. You’re just not so sure about PA.
I’m wondering what you mean by “hold open the possibility”.
If you mean “keep some probability mass on this possibility”, then I think most reasonable definitions of “keep your probabilities consistent with your logical beliefs” will forbid this.
If you mean “hold off on fully believing things which contradict the possibility”, then obviously the agent would hold off on fully believing PA itself.
Etc. for other reasonable definitions of “holding open the possibility” (I claim).
If you’re reasoning using PA, you’ll hold open the possibility that PA is inconsistent, but you won’t hold open the possibility that A&¬A. You believe the world is consistent. You’re just not so sure about PA.
Do you? This sounds like PA is not actually the logic you’re using. Which is realistic for a human. But if PA is indeed inconsistent, and you don’t have some further-out system to think in, then what is the difference to you between “PA is inconsistent” and “the world is inconsistent”? In both cases you just believe everything and its negation. This also goes with what I said about thought and perception not being separated in this model, which stand in for “logic” and “the world”. So I suppose that is where you would look when trying to fix this.
If you mean “hold off on fully believing things which contradict the possibility”, then obviously the agent would hold off on fully believing PA itself.
You do fully believe in PA. But it might be that you also believe its negation. Obviously this doesn’t go well with probabilistic approaches.
This sounds like PA is not actually the logic you’re using.
Maybe this is the confusion. I’m not using PA. I’m assuming (well, provisionally assuming) PA is consistent.
If PA is consistent, then an agent using PA believes the world is consistent—in the sense of assigning probability 1 to tautologies, and also assigning probability 0 to contradictions.
(At least, 1 to tautologies it can recognize, and 0 to contradictions it can recognize.)
Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whether PA is consistent, but believe the world is consistent.
If PA were inconsistent, then we would need more assumptions to tell us how probabilities are assigned. E.g., maybe the agent “respects logic” in the sense of assigning 0 to refutable things. Then it assigns 0 to everything. Maybe it “respects logic” in the sense of assigning 1 to provable things. Then it assigns 1 to everything. (But we can't have both. The two notions of “respects logic” are equivalent if the underlying logic is consistent, but not otherwise.)
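To make the divergence concrete (my gloss, with P any candidate probability assignment over sentences):

```latex
% If the logic is inconsistent, every sentence is both provable and refutable:
\[
\vdash \bot
\;\Longrightarrow\;
\forall\varphi:\ \vdash\varphi \ \text{and}\ \vdash\neg\varphi
\;\Longrightarrow\;
\underbrace{P(\varphi)=1}_{\text{``1 to provable''}}
\quad\text{and}\quad
\underbrace{P(\varphi)=0}_{\text{``0 to refutable''}},
\]
% which no single P can satisfy. If the logic is consistent, the two constraints
% agree for any probability function: $\vdash\neg\varphi$ gives $P(\neg\varphi)=1$,
% hence $P(\varphi) = 1 - P(\neg\varphi) = 0$.
```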
But such an agent doesn’t have much to say for itself anyway, so it’s more interesting to focus on what the consistent agent has to say for itself.
And I think the consistent agent very much does not “hold open the possibility” that the world is inconsistent. It actively denies this.
Hence, I (standing outside of PA) assert that (since I think PA is probably consistent) agents who use PA don't know whether PA is consistent, but believe the world is consistent.
There are two ways to express “PA is consistent”. The first is ∀A¬(A∧¬A). The other is a complicated construct about Gödel-encodings. Each has a corresponding version of “the world is consistent” (indeed, this “world” is inside PA, so they are basically equivalent). The agent using PA will believe only the former. The Troll expresses the consistency of PA using provability logic, which, if I understand correctly, has the Gödelization in it.
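To spell out the contrast (my rendering of the standard distinction, not a quote from the thread):

```latex
% Reading 1: a schema. For each particular sentence A, PA proves the instance
\[
\mathrm{PA} \vdash \neg(A \land \neg A).
\]
% Reading 2: a single arithmetized sentence about G\"odel codes,
\[
\mathrm{Con}(\mathrm{PA}) \;:\equiv\; \neg\,\mathrm{Prov}_{\mathrm{PA}}\bigl(\ulcorner 0 = 1 \urcorner\bigr),
\]
% which PA does not prove if PA is in fact consistent (G\"odel's second
% incompleteness theorem). The agent gets every instance of Reading 1 for free,
% but never Reading 2.
```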