Assuming the 99% likelihood I assign to the disease being malaria doesn’t change, if I can’t communicate with my colleague I obviously take the 5,000 units of malaria meds. If I can communicate, I’ll do my best to convince my colleague to cooperate so he takes the 10,000 units of malaria meds, and then I take the other 5,000 units of malaria meds.
Either I save 5,000 people or I save 15,000 people with 99% likelihood (instead of saving 0 or 10,000 people), which is similar to avoiding 5 years or 10 years in prison (instead of avoiding 0 years or 9.5 years). So yeah, it is similar to the prisoner’s dilemma.
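To make the analogy concrete, here’s a rough sketch in Python that just restates the figures above (the cooperate/defect labels are the standard ones, nothing from the original post):

```python
# My payoff under each combination of moves, as (my move, his move) -> payoff.
# Figures are the ones quoted above; higher is better in both games.
lives_saved = {
    ("defect", "defect"): 5_000,  ("defect", "cooperate"): 15_000,
    ("cooperate", "defect"): 0,   ("cooperate", "cooperate"): 10_000,
}
years_avoided = {
    ("defect", "defect"): 5,      ("defect", "cooperate"): 10,
    ("cooperate", "defect"): 0,   ("cooperate", "cooperate"): 9.5,
}

# Both games share the prisoner's-dilemma ordering:
# temptation > reward > punishment > sucker's payoff.
for payoff in (lives_saved, years_avoided):
    assert (payoff[("defect", "cooperate")] > payoff[("cooperate", "cooperate")]
            > payoff[("defect", "defect")] > payoff[("cooperate", "defect")])
```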
It seems like you assume implicitly that there’s an equal probability of the other doctor defecting: (0 + 10,000)/2 < (5,000 + 15,000)/2. That makes sense in the original prisoner’s dilemma, but given that you can communicate, why assume this?
It doesn’t make a difference. I’m better off defecting no matter what the other doctor does. Like I said, I’ll try to convince him to cooperate and then I’ll break our agreement. If I succeed, good for me; if I fail, at least I’ll have saved 5,000 people.
That’s only if there’s a single iteration of this dilemma, of course. If I have reason to believe there will be three iterations, and if I’m pretty sure I managed to convince the other doctor, I should cooperate (3 × 10,000 > 15,000 + 5,000 + 5,000).
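Spelled out as a quick sketch (same figures as above; treating the other doctor as cooperating until I break the agreement once, and defecting in every round after that, is my own simplifying assumption):

```python
# (my move, his move) -> lives I save, same figures as before.
lives = {
    ("C", "C"): 10_000, ("C", "D"): 0,
    ("D", "C"): 15_000, ("D", "D"): 5_000,
}

# Single iteration: defecting saves more lives whatever the other doctor does.
for his_move in ("C", "D"):
    assert lives[("D", his_move)] > lives[("C", his_move)]

# Three iterations: cooperating throughout beats defecting in round one,
# if breaking the agreement means he defects in every later round.
cooperate_throughout = 3 * lives[("C", "C")]                     # 30,000
defect_in_round_one = lives[("D", "C")] + 2 * lives[("D", "D")]  # 25,000
assert cooperate_throughout > defect_in_round_one
```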
What if you’re wrong?
What if I’m wrong? Well, what if my house gets hit by a meteor today, and I get seriously wounded? Should I then regret not having left my house today?
I could wish I had left, but regretting my decision would be silly. We can only ever make decisions with the information that’s available to us at the moment. Right now I have every reason to believe my house will not get hit by a meteor, and I feel like staying at home, so that’s the best decision. Likewise, in the OP’s scenario I have every reason to believe the disease is malaria, so getting my hands on as much malaria medication as I can is the best decision. That’s all there is to it.
But in this case, someone with a degree of astronomical knowledge comparable to yours, acting in good faith, has come up to you and said, “I’m 99% confident that a meteor will hit your house today. You should leave.” Why not investigate his claim before dismissing it?
The original post specifies that even after taking the other doctor’s opinion into account, we’re still 99% sure. That seems pretty unlikely, unless we know the other doctor is seriously deficient in rationality, but it’s the scenario we’re discussing.
Out of curiosity, do you cooperate or defect against an unfriendly superintelligence in the regular prisoner’s dilemma?
I’m one of the human beings that Eliezer has so much trouble imagining: While I’m not (entirely) selfish myself, I have no trouble acting as if I were completely selfish for the purposes of playing the vanilla prisoner’s dilemma. Consequently, it’s of no relevance to me that the other agent is an unfriendly superintelligence rather than a friendly human being. I defect in both cases.
Well, thanks for the heads-up =)