I only looked at this for a bit, so I could be totally mistaken, but I’ll look at it closely soon. It’s a nice write-up!
My thoughts:
Does a change of variables/values in your proof of Proposition 3 really not yield Conjecture 4? At first glance it looks like you could just change the variables and flip the indices for the projections (use pi_1 instead of pi_0) and in the functions A[U,i]. If you look at the U() defined for Conjecture 4, it’s exactly the one in Proposition 3 with the indices i flipped and C and D flipped, so it’s surprising to me if this doesn’t work, or if there isn’t some other minor transformation of the first proof that yields a proof of Conjecture 4.
The agent function is not defined symmetrically with respect to C and D, so flipping the roles of C and D in the universe without also flipping their roles in the agent function does not guarantee that the outcome would change as desired.
Yes, you’re right. Looking at the agent function, the relevant rule seems to be defined for the sole purpose of allowing the agent to cooperate in the event that cooperation is provably better than defecting. Taking this out of context, it allows the agent to choose one of the actions it can take if it is provably better than the other. It seems like the simple fix is just to add this:
$$\exists a, b.\ \Box\Bigl(\bigl(\psi(\ulcorner U\urcorner,\underline{i}) = D \to \pi_{\underline{i}}\chi() = \underline{a}\bigr) \wedge \bigl(\psi(\ulcorner U\urcorner,\underline{i}) = C \to \pi_{\underline{i}}\chi() = \underline{b}\bigr)\Bigr) \wedge a > b$$

If you alter the agent definition by replacing the third line with this and the fourth with C, you get an agent that is like the original one but can be used to prove Conjecture 4 and not Proposition 3, right? So if you instead add this line to the current agent definition and simply leave D as the last line, then if neither the old third line nor this new one holds, we simply pick D, which is neither provably better nor provably worse than C, so that’s fine. It seems that we can also prove Conjecture 4 by using the new line in lieu of the old one, which should allow us to use a proof that is essentially symmetric to the proof of Proposition 3.
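To picture the resulting agent, here is a schematic of the modified definition (just a sketch: I’m not reproducing lines 1 and 2, and the two conditions abbreviate the old third line and the new line above):

$$A(\ulcorner U\urcorner, i) = \begin{cases} \vdots & \text{(lines 1 and 2 unchanged)} \\ C & \text{if cooperating is provably better than defecting,} \\ D & \text{if defecting is provably better than cooperating,} \\ D & \text{otherwise.} \end{cases}$$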
Does that seem like a reasonable fix?
So you modify the agent so that line 3 says “cooperating is provably better than defecting” and line 4 says “defecting is provably better than cooperating”. But line 3 comes before line 4, so in proving Conjecture 4 you’d still have to show that the condition in line 3 does not obtain. Or you could prove that line 3 and line 4 can’t both obtain; I haven’t figured out exactly how to do this yet.
Well, the agent definition contains a series of conditionals. You have as the last three lines: if “cooperating is provably better than defecting”, then cooperate; else, if “defecting is provably better than cooperating”, then defect; else defect. Intuitively, assuming the agent’s utility function is consistent, only one antecedent clause will evaluate to true. In the case that the first one does, the agent will output C. Otherwise, it will move through to the next part of the conditional, and if that evaluates to true the agent will output D. If not, then it will output D anyway. Because of this, I would go for proving that lines 3 and 4 can’t both obtain.
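To illustrate the order of evaluation, here is a minimal sketch of those last three lines (not the write-up’s formalism; the two booleans stand in for “PA proves C is strictly better” and “PA proves D is strictly better”):

```python
def tail(c_provably_better: bool, d_provably_better: bool) -> str:
    if c_provably_better:       # line 3: cooperate if C is provably better
        return "C"
    if d_provably_better:       # new line 4: defect if D is provably better
        return "D"
    return "D"                  # last line: defect by default

# Line 3 is checked first, so to conclude the agent defects it is not enough
# that line 4's condition holds; one must also rule out line 3's condition:
assert tail(True, True) == "C"   # if both somehow held, line 3 would win
assert tail(False, True) == "D"
assert tail(False, False) == "D"
```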
How about this? Suppose conditions 3 and 4 both obtain. Then there exist a and b such that a > b, U(C) = #a and U(D) = #b (switching out the underline for ‘#’). Also, there exist p and q such that p > q, U(D) = #p and U(C) = #q. Now U(C) = #a > #b = U(D) = #p > #q = U(C), so U(C) > U(C). I may actually be confused on some details, since you indicate that a and b are numbers rather than variables (for instance, in Proposition 2 you select a and b as rational numbers), yet you further denote numerals for them, but I’m assuming that a > b iff #a > #b. Is this a valid assumption? I feel as though I’m missing some details here, and I’m not 100% sure how to fill them in. If my assumption isn’t valid I have a couple of other ideas. I’m not sure if I should be thinking of #a as ‘a’ under some interpretation.
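In the write-up’s underline notation, the chain is

$$U(C) = \underline{a} > \underline{b} = U(D) = \underline{p} > \underline{q} = U(C),$$

so $U(C) > U(C)$, a contradiction, granted the assumption that $a > b$ iff $\underline{a} > \underline{b}$.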
Going a little bit further, it looks like your Lemma 1 only invokes the first two lines and can easily be extended for use in Conjecture 4: look at parts 1 and 3. Part 1 is one step away from being identical to part 3 with C instead of D.
~Con(PA) → ~Con(PA + anything); just plug Con(PA) in there. Adding axioms can’t make an inconsistent theory consistent. Getting the analogous lemma to use in Conjecture 4 looks pretty straightforward: switching out C with D and vice versa in each of the parts yields another true four-part lemma, identical in form but for the switched D’s and C’s, so you can use it analogously to how you used the original Lemma 1 in the proof of Proposition 3.
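In symbols, the fact being used is that adding axioms preserves inconsistency:

$$\neg\mathrm{Con}(\mathrm{PA}) \;\to\; \neg\mathrm{Con}(\mathrm{PA} + \varphi) \qquad \text{for any sentence } \varphi,$$

since a PA-proof of a contradiction is in particular a proof in the extended theory; plugging in $\varphi = \mathrm{Con}(\mathrm{PA})$ gives the instance needed above.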
Yep, this sounds about right. I think one does have to prove “#a > #b if a > b”, or rather

$$a > b \;\implies\; \Box(\underline{a} > \underline{b})$$

(which should hold, since for concrete a > b the inequality $\underline{a} > \underline{b}$ is verifiable by a direct computation that PA can carry out).

One can try to take the proof of Proposition 3 and switch C and D around, but one quickly runs into difficulty: the second line of that proof contains

$$\exists a, b.\ \Box\Bigl(\bigl(A(\ulcorner U\urcorner, 0) = C \to \pi_0 U() = \underline{a}\bigr) \wedge \bigl(A(\ulcorner U\urcorner, 0) = D \to \pi_0 U() = \underline{b}\bigr)\Bigr) \wedge a > b,$$

that is, “cooperating provably results in a better payoff than defecting”. This is enough to convince the agent to cooperate. But if you switch C and D, you end up with a proposition that means “cooperating provably results in a worse payoff than defecting”. This is not enough to convince the agent to defect. The agent will defect if there is no proof that cooperating results in a better payoff than defecting. So at the least, one would have to say a bit more here in the proof.
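Concretely, switching C and D in the displayed formula gives

$$\exists a, b.\ \Box\Bigl(\bigl(A(\ulcorner U\urcorner, 0) = D \to \pi_0 U() = \underline{a}\bigr) \wedge \bigl(A(\ulcorner U\urcorner, 0) = C \to \pi_0 U() = \underline{b}\bigr)\Bigr) \wedge a > b,$$

which says that defecting provably yields the larger payoff. The agent’s third line only reacts to the C-version, so this formula by itself never causes the agent to play D; that is the gap the extra line (or extra argument) has to fill.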