Does the identical twin one shot prisoners dilemma only work if you are functionally identical or can you be a little different and is there anything meaningful that can be said about this?
I guess it depends on how much the parts that make you “a little different” are involved in your decision making.
If you can put it in numbers, for example—I believe that if I choose to cooperate, my twin will choose to cooperate with probability p; and if I choose to defect, my twin will defect with probability q; also I care about the well-being of my twin with a coefficient e, and my twin cares about my well-being with a coefficient f—then you could take the payout matrix and these numbers, and calculate the correct strategy.
Option one: what if you cooperate? Your expected payout is the C-C payoff with probability p and the C-D payoff with probability 1-p; your twin’s expected payout is the C-C payoff with probability p and the D-C payoff with probability 1-p. Multiply your twin’s expected payout by your empathy coefficient e and add it to your own. That’s option one; now do the same for option two (defecting, using q), and compare the two numbers.
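The calculation above can be sketched in a few lines of Python. All the concrete numbers here (the payoff matrix and the values of p, q, and e) are illustrative assumptions, not anything from the discussion itself:

```python
# Expected-utility sketch for the "imperfect twin" prisoner's dilemma.
# Payoffs are (mine, twin's) for each (my_move, twin_move) pair;
# these are standard illustrative PD values, chosen as an assumption.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def expected_utility(my_move, match_prob, empathy):
    """Expected utility of my_move, where match_prob is the probability
    that the twin mirrors my choice, and empathy (e) weights how much
    I count the twin's payoff alongside my own."""
    differ = "D" if my_move == "C" else "C"
    total = 0.0
    for twin_move, prob in ((my_move, match_prob), (differ, 1 - match_prob)):
        mine, theirs = PAYOFF[(my_move, twin_move)]
        total += prob * (mine + empathy * theirs)
    return total

# Assumed numbers: twin mirrors me 90% of the time either way,
# and I weight the twin's well-being at half of my own.
p, q, e = 0.9, 0.9, 0.5
u_coop = expected_utility("C", p, e)    # option one
u_defect = expected_utility("D", q, e)  # option two
print("cooperate:", u_coop, "defect:", u_defect)
```

With these particular numbers cooperation comes out well ahead; as p and q fall toward 0.5 the logical correlation with the twin washes out and the comparison tilts back toward ordinary defection.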
It gets much more complicated when you cannot make a straightforward estimate of the probabilities because the algorithms involved are too complicated. It could even be impossible to find a fully general solution (because of the halting problem).
I believe that if I choose to cooperate, my twin will choose to cooperate with probability p; and if I choose to defect, my twin will defect with probability q;
And here the main difficulty pops up again. There is no causal connection between your choice and their choice; any correlation is a logical one. So imagine I make a copy of you, but the copying machine isn’t perfect: a random 0.001% of neurons are deleted. Also, you know you aren’t the copy. How would you calculate those probabilities p and q, even in principle?