I (just yesterday) found a counterexample to this. The universe is a 5-and-10 variant that uses the unprovability of consistency:
def U():
    if A() == 2:
        if PA is consistent:  # pseudocode: this condition stands for the arithmetic sentence Con(PA)
            return 10
        else:
            return 0
    else:
        return 5
The agent can be taken to be modal UDT, using PA as its theory. (The example will still work for other theories extending PA; we just need the universe’s theory to include the agent’s. Also, to simplify some later arguments, we suppose that the agent uses the chicken rule, and that it checks action 1 first, then action 2.) Since the agent cannot prove the consistency of its theory, it will not be able to prove A()=2→U()=10, so the first implication it can prove is A()=1→U()=5. Thus, it will end up taking action 1.
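For concreteness, here is a rough sketch of the order of checks I have in mind. This is only schematic, not the actual definition of modal UDT: provable stands in for an oracle for provability in the agent’s theory (with sentences represented as strings), which is an idealization rather than something computable.

def agent(provable):
    actions = [1, 2]          # action 1 is checked before action 2
    utilities = [10, 5, 0]    # utilities are checked from best to worst

    # Chicken rule: if the theory proves that some action is not taken,
    # take that action immediately.
    for a in actions:
        if provable(f"A() != {a}"):
            return a

    # Main step: look for implications "A() = a -> U() = u", best utility
    # first and action 1 before action 2, and take the first action for
    # which such an implication is found.
    for u in utilities:
        for a in actions:
            if provable(f"A() = {a} -> U() = {u}"):
                return a

    # If no implication is provable, fall back to a default action.
    return actions[-1]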
Now, we work in PA and try to show A()=2→U()=0. If PA is inconsistent (we have to account for this case since we are working in PA), then A()=2→U()=0 follows straightforwardly. Next, we consider the case that PA is consistent and work through the agent’s decision. PA can’t prove A()≠1, since we used the chicken rule. The sentence A()=1→U()=5 is easily provable, so the sentence A()=1→U()=10 (i.e. the first sentence that the agent checks for proofs of) must be unprovable; otherwise, combining its proof with the proof of A()=1→U()=5 would give a proof of A()≠1.
The next sentence the agent checks is A()=2→U()=10. If the agent finds a proof of this, then it takes action 2. Otherwise, it moves on to the sentence A()=1→U()=5, which is easily provable as mentioned above, and it takes action 1. Hence, the agent takes action 2 iff it can prove A()=2→U()=10, so A()=2↔□(A()=2→U()=10). Löb’s theorem tells us that □(U()=10)↔□(□(U()=10)→U()=10), so, by the uniqueness of fixed points, it follows that A()=2↔□(U()=10). Then, we get A()=2→□(U()=10), so A()=2→□(¬□⊥) by the definition of the universe, and so A()=2→□⊥ by Gödel’s second incompleteness theorem. Thus, if the agent takes action 2, then PA is inconsistent, so U()=0 as desired.
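To make the shape of that argument easier to follow, here is the same chain of steps written out in one place; everything is as in the paragraph above, with □ denoting provability in PA.

\begin{align*}
A() = 2 \;&\leftrightarrow\; \Box\bigl(A() = 2 \to U() = 10\bigr) && \text{(the agent's proof search, as above)}\\
\Box\bigl(U() = 10\bigr) \;&\leftrightarrow\; \Box\bigl(\Box(U() = 10) \to U() = 10\bigr) && \text{(Löb's theorem)}\\
A() = 2 \;&\leftrightarrow\; \Box\bigl(U() = 10\bigr) && \text{(uniqueness of fixed points)}\\
A() = 2 \;&\to\; \Box\bigl(\lnot\Box\bot\bigr) && \text{(definition of the universe)}\\
A() = 2 \;&\to\; \Box\bot && \text{(Gödel's second incompleteness theorem, formalized)}
\end{align*}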
This tells us that PA⊢A()=2→U()=0. Also, PA⊬A()≠2 by the chicken rule, so PA⊬A()=2→U()≠0; otherwise, combining this with the previous paragraph would give a proof of A()≠2. Since PA does not prove A()=2→U()≠0 at all, the shortest proof of A()=2→U()=0 is much shorter than the shortest proof of A()=2→U()≠0 for any definition of “much shorter”. (One can object here that there is no shortest proof of A()=2→U()≠0, but (a) it seems natural to define the “length of the shortest proof” to be infinite if there is no proof, and (b) it is probably straightforward but tedious to modify the agent and universe so that there is a proof of A()=2→U()≠0, but a very long one.)
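To spell out the convention in (a): writing ℓ(φ) for the length of the shortest PA-proof of φ (this notation is just for this comment), I mean

\[
\ell(\varphi) \;=\; \min\,\{\, \mathrm{length}(p) : p \text{ is a PA-proof of } \varphi \,\}, \qquad \text{with } \min\emptyset = \infty,
\]

so the claim above is just that ℓ(A()=2→U()=0) is finite while ℓ(A()=2→U()≠0) = ∞.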
However, it is clear that U()=0 is not a legitimate counterfactual consequence of A()=2. Informally, if the agent had chosen action 2, it would have received utility 10, since PA is in fact consistent. Thus, we have a counterexample.
One issue we discussed during the workshop is whether counterfactuals should be defined with respect to a state of knowledge. We may want to say here that we, who know a lot, are in a state of knowledge with respect to which A()=2 would counterfactually result in U()=10, but that someone who reasons in PA is in a state of knowledge with respect to which it would result in U()=0. One way to think about this is that to us it is obvious that PA is consistent, irrespective of how the agent acts, whereas PA does not know that it is consistent, which allows an agent using PA to think of itself as counterfactually controlling PA’s consistency. Indeed, this is roughly how the argument above proceeds.
I’m not sure that this is a good way of thinking about it, though. The agent goes through some weird steps, most notably a rather opaque application of the fixed-point theorem, so I don’t have a good feel for why it is reasoning this way. I want to unwrap that argument before I can say whether it’s doing something that, on an intuitive level, constitutes legitimate counterfactual reasoning.
More worryingly, the perspective of counterfactuals as being defined w.r.t. states of knowledge seems to be at odds with PA believing a wrong counterfactual here. It would make sense for PA not to have enough information to make any statement about the counterfactual consequences of A()=2, but that’s not what’s happening if we think of PA’s counterfactuals as obeying this conjecture; instead, PA postulates a causal mechanism by which the agent controls the consistency of PA, which we didn’t expect to be there at all. Maybe it would all make sense if I had a deeper understanding of the proof I gave, but right now it is very odd.
(This is rather long; perhaps it should be a post? Would anyone prefer that I clean up a few things and make this a post? I’ll also expand on the issue I mention at the end when I have more time to think about it.)
Next, we consider the case that PA is consistent and work through the agent’s decision. PA can’t prove A()≠1, since we used the chicken rule. The sentence A()=1→U()=5 is easily provable, so the sentence A()=1→U()=10 (i.e. the first sentence that the agent checks for proofs of) must be unprovable.
It seems like this argument needs soundness of PA, not just consistency of PA. Do you see a way to prove in PA that if PA⊢A()≠1, then PA is inconsistent?
[edited to add:] However, your idea reminds me of my post on the odd counterfactuals of playing chicken, and I think the example I gave there makes your idea go through:
The scenario is that you get 10 if you take action 1 and it’s not provable that you don’t take action 1; you get 5 if you take action 2; and you get 0 if you take action 1 and it’s provable that you don’t. Clearly you should take action 1, but I prove that modal UDT actually takes action 2. To do so, I show that PA proves A()=1→¬□┌A()=1┐. (Then, from the outside, A()=2 follows by soundness of PA.)
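For concreteness, that universe can be sketched in a style similar to the one at the top of the thread; as there, this is only schematic, with provable standing in for a PA-provability oracle and A for the agent.

def U(A, provable):
    # 10 for action 1 when it is not provable that action 1 isn't taken,
    # 0 for action 1 when that is provable, and 5 for action 2.
    if A() == 1:
        if provable("A() != 1"):
            return 0
        return 10
    return 5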
This seems to make your argument go through if we can also show that PA doesn’t prove A()≠1. But if it did, then modal UDT would take action 1, because this comes first in its proof search, a contradiction.
Thus, PA proves A()=1→U()=0 (because this follows from A()=1→¬□┌A()=1┐), and also PA doesn’t prove A()=1→U()=10. As in your argument, then, the trolljecture implies that we should think “if the agent takes action 1, it gets utility 0” is a good counterfactual, and we don’t think that’s true.
Still interested in whether you can make your argument go through in your case as well, especially if you can use the chicken step in a way I’m not seeing yet. Like Patrick, I’d encourage you to develop this into a post.
The argument that I had in mind was that if PA⊢A()≠1, then PA⊢□┌A()≠1┐, so PA⊢A()=1 since PA knows how the chicken rule works. This gives us PA⊢⊥, so PA can prove that if PA⊢A()≠1, then PA is inconsistent. I’ll include this argument in my post, since you’re right that this was too big a jump.
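Schematically (⊢ denoting provability in PA, each step being the corresponding step above):

\begin{align*}
\mathrm{PA} \vdash A() \neq 1 \;&\Longrightarrow\; \mathrm{PA} \vdash \Box\ulcorner A() \neq 1\urcorner && \text{(PA can verify its own proofs)}\\
&\Longrightarrow\; \mathrm{PA} \vdash A() = 1 && \text{(PA knows how the chicken rule works)}\\
&\Longrightarrow\; \mathrm{PA} \vdash \bot .
\end{align*}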
Edit: We also need to use this argument to show that the modal UDT agent gets to the part where it iterates over utilities, rather than taking an action at the chicken rule step. I didn’t mention this explicitly, since I felt like I had seen it before often enough, but now I realize it is nontrivial enough to point out.
It occurs to me that maybe we could regard the agent as consistently reasoning, “If I choose of my own free will to output 2, that thereby causes Peano Arithmetic to be inconsistent, causing me to get 0 points.”
I mostly don’t buy this, but it slightly defends the legitness of the counterfactual.
Nice! Yes, I encourage you to develop this into a post.