Got me to register, this one. I was curious about my own reaction, here.
See, I took in the problem, thought for a moment about game theory and such, but I am not proficient in game theory. I haven’t read much of it. I barely know the very basics. And many other people can do that sort of thinking much better than I can.
I took a different angle, because it should all add up to normality. I want to save human lives here. My first instinct on what to do would be to cooperate on the first iteration, cooperate on the second regardless of whether the other side defected or not, and then, if they cooperate, keep cooperating until the end, and if they defect, keep defecting until the end. So why does it feel so obvious to me? After some thought, I came to the conclusion that it’s because the potential cost of two million lives lost by cooperating in the first two rounds against a player who will always defect weighs less in my decision-making than the potential gain of a hundred million lives if I can convince it to cooperate with me to the end.
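To make that instinct concrete, here is a minimal sketch of the strategy as I’ve described it, plus the arithmetic behind it. The payoff numbers (lives saved per round, in millions: 2 each for mutual cooperation, 1 each for mutual defection, 3 for a lone defector, 0 for a lone cooperator) are my reading of the setup rather than anything official, and the handling of an early defection is just one possible interpretation of what I wrote above.

```python
# A minimal sketch of the strategy described above. Payoff numbers (lives
# saved per round, in millions) are my assumption of the setup: mutual
# cooperation saves 2 each, mutual defection 1 each, a lone defector 3,
# a lone cooperator 0.
PAYOFF = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def my_move(round_number, opponent_history):
    """Cooperate on rounds 1 and 2 no matter what; after that, keep
    cooperating as long as the opponent has never defected, otherwise
    defect for the rest of the game (one reading of the strategy above)."""
    if round_number <= 2:
        return "C"
    return "D" if "D" in opponent_history else "C"

def lives_saved(my_strategy, their_strategy, rounds=100):
    """Total lives (millions) I save over the whole iterated game."""
    mine, theirs, total = [], [], 0
    for r in range(1, rounds + 1):
        me, them = my_strategy(r, theirs), their_strategy(r, mine)
        total += PAYOFF[(me, them)]
        mine.append(me)
        theirs.append(them)
    return total

always_defect = lambda r, history: "D"
always_cooperate = lambda r, history: "C"

print(lives_saved(my_move, always_defect))     # 98 -- only 2 less than defecting from the start
print(lives_saved(my_move, always_cooperate))  # 200 -- 100 more than mutual defection
```

Against a pure defector my two forced cooperations cost me 2 million relative to defecting from the start; against a cooperator I end up with 200 million instead of 100 million, which is where the hundred-million-lives intuition comes from.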
So, the last round. Or, similarly, the only round in a non-iterated model. At first, when reading the post on the one-shot game, I felt like I should defect. Why? Because, well, defecting saves one billion lives or three billion, compared to cooperating, which saves two billion or none. I can’t see why the other player would cooperate in this situation, given that they only care about paperclips. I’m sure there are convincing reasons, and possibly they even would cooperate. But if they would, then that means I save three billion lives by defecting, right? Plus, I feel that /not saving any lives/ would be emotionally worse for me than saving a billion lives while potentially letting another billion die. I’m not proud of it, but it does affect my reasoning: the desire to at least get something out of it, to avoid the judgment of people who shout at me for being naive and stupid and losing out on the chance to save lives. After all, if I defect and he defects, I can just point at his choice and say he’d have done it anyway, so I saved the maximum possible lives. If I defect and he cooperates, I’ve saved even more. I recognize that it would be better for me, on a higher level of reasoning, to figure out why cooperating is better, in order to facilitate cooperation if I come across such a dilemma later. But my reasoning does not influence the reasoning of the other player in this case, so even if I convince myself with a great philosophical argument that cooperating is better, the fact of the matter is that Player 2 either defects or cooperates completely regardless of what I do, according to his own philosophical arguments to himself about what he should do, and in either case I should defect.
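Spelled out, that pull toward defection in the one-shot case is just a dominance check over the payoffs as I understand them (billions of lives: 2 each for mutual cooperation, 1 each for mutual defection, 3 for a lone defector, 0 for a lone cooperator; these numbers are my assumption):

```python
# Dominance check for the one-shot game. The payoffs (billions of lives
# saved) are my assumption: mutual cooperation 2 each, mutual defection 1
# each, a lone defector 3, a lone cooperator 0.
LIVES = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

for their_move in ("C", "D"):
    coop = LIVES[("C", their_move)]
    defect = LIVES[("D", their_move)]
    print(f"Player 2 plays {their_move}: I save {coop}B by cooperating, {defect}B by defecting")
# Either way, defecting saves one billion more -- which is the pull I describe above.
```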
And a rationalist should win, right? I note that the difference here from Newcomb’s Problem, in which I would one-box, is that Player 2 has no magical way of knowing what I will do. In Newcomb’s Problem, if I one-box I gain a million, if I two-box I gain a thousand, so I one-box to gain a million. In this case, Player 2 either defects or cooperates, and that does not depend on me and my reasoning and arguments and game-theoretic musings in any way. My choice is to defect, because that way I save the most lives possible in that situation. If I were to convince myself to cooperate, that would not change the world into one where Player 2 would also convince itself to cooperate; it would affect Player 2’s decision in no way at all.
But somehow the case seems different for the last round of an iterated game (and, even more so, for all the preceding rounds). This, in turn, makes me worried, because it is a sign that some bias or another may be affecting me adversely here. One obvious thing is me being blinded to what the numbers ‘billion’ and ‘million’ actually mean, but I try to compensate for that as best I can. Anyway, by the 100th round, after 99 rounds of cooperation, I get the choice to cooperate or to defect. At this point, the other player and I have a cooperative relationship. We’ve gained a lot. But our mutual interaction is about to end, which means there are no repercussions for defecting here, which means I should maximize my winnings by defecting. However, it feels to me that, since I already know Player 2 is enough of a winner-type to have cooperated with me for all the previous rounds, he realizes the same thing. And in that case, I should cooperate here, to maximize my gains. At which point defecting makes more sense again. Repeating forever.
What tilts the instinctual decision towards cooperating in this particular case seems to me to be that, regardless of what happens, I have already saved 198 million people. Whether I now save 0, 1, 2 or 3 million more is not such a big thing in comparison (even though it obviously is, but big numbers make me blind). Because I cannot reason myself into either defecting or cooperating, and thus am unable to assign meaningful probabilities to what Player 2 will do, I cooperate by default because I feel that, other things being equal, it’s the ‘right’ thing to do. If I am fooled and P2 defects, one million people die who would not have died otherwise, but I can bear that burden in the knowledge that I’ve saved 198 million. And meanwhile, it’s P2 that has to bear the label of traitor, which means that I will be better able to justify myself both to myself and to society at large. Obviously this reasoning doesn’t seem very good. It feels like I am convincing myself here that my reasoning about what should be done somehow influences the reasoning of Player 2, after condemning exactly that in the one-shot case just above. But then again, I have interacted with P2 for 99 rounds now, influencing him by my reasoning on what’s the best way to act.
And, of course, there’s the looming problem that if either of us had reasoned that the other was likely to defect in the last round no matter what, then it would have been better for us to defect in the second-to-last round, which did not happen. By defecting on round 99 against a cooperator, you gain +3, and then on round 100 you’re pretty much guaranteed to get +1, which is exactly the same total as cooperating twice. By defecting on round 98 or earlier, you lose more than you gain, assuming all the remaining rounds end up as mutual defection, which seems to me like a reasonable assumption. But by being betrayed on round 99 you get 0, and gain only 1 afterwards on round 100, which leaves you with 3 less than you could have had. Still, I don’t care about how many paperclips P2 gets, only about how many lives I save. I, as a human, have an innate sense of ‘fair play’ that makes the 2+2 of cooperating twice feel better than the 3+1 of defecting over the last two rounds, even in a void where the totals are equal. However, does that ‘fair play’ weigh more in decision-making than the risk that P2 defects and I gain 1, as opposed to 4? After all, over rounds 99 and 100, if I defect while P2 keeps cooperating as he has all game, I get +4; if I cooperate, the worst case leaves me with only +1. And even if we both cooperate on round 99, there is still the risk that I gain nothing in the last round. Fair play does not seem worth even the possibility of losing several million lives. Still, the whole basis of this is that I don’t care about Player 2, only about lives saved, and thus giving him the opportunity to cooperate gives me the chance to save more lives (at this point, even if he defects and I cooperate for the remaining turns, I’ve still saved more than I would have by defecting from the beginning). So I feel, weakly, that I should cooperate until the end here after all, simply because only the kind of reasoning that would make me cooperate until the end would let me cooperate at all, and thus save the most possible lives. But I have not convinced myself of this yet, because it still feels to me that I am unsure of what I would do on that very last round, when P2’s choice is already locked in and millions of lives are at stake.
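Since those sums are easy to get tangled in, here is a quick check of the endgame arithmetic, using the same assumed per-round payoffs as before (millions of lives: 2 for mutual cooperation, 1 for mutual defection, 3 for a lone defector, 0 for a lone cooperator):

```python
# Quick check of the endgame sums above, with the same assumed per-round
# payoffs (millions of lives): CC = 2, DD = 1, lone defector 3, lone cooperator 0.
PAYOFF = {("C", "C"): 2, ("C", "D"): 0, ("D", "C"): 3, ("D", "D"): 1}

def my_total(my_moves, their_moves):
    """My lives saved over the listed rounds, given both move sequences."""
    return sum(PAYOFF[pair] for pair in zip(my_moves, their_moves))

# Rounds 99-100:
print(my_total("CC", "CC"))    # 4 -- cooperate to the end
print(my_total("DD", "CD"))    # 4 -- I defect on 99 while P2 still cooperates, then mutual defection
print(my_total("CD", "DD"))    # 1 -- P2 betrays me on 99: three less than I could have had
# Rounds 98-100: defecting one round earlier already loses ground.
print(my_total("DDD", "CDD"))  # 5 -- versus 6 for three rounds of mutual cooperation
```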
Now, the above is simply an analysis of my instinctual choices, and me trying to read into why those were my instinctual choices. I am not confident in stating that they are the correct choices; I am just trying to write my way into a better understanding of how I decide things.