(I have launch codes and am happy to prove it to you if you want.)
Hmmm, I feel like the argument “There’s some harm in releasing the codes entrusted to me, but not so much that it’s better for someone to die” might prove too much? Like, death is really bad, I definitely grant that. But despite the dollar amount you gave, I feel like we’re sort of running up against a sacred value thing. I mean, you could just as easily say, “There’s some harm in releasing the codes entrusted to me, but not so much that it’s better for someone to have a 10% chance of dying”—which would naïvely bring your price down to $167.20.
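(To make that naive scaling explicit, here's a toy calculation; it's just a sketch of the linear-scaling move, not an endorsement of it, and the function name is purely illustrative:)

```python
# Minimal sketch of the naive linear-scaling argument: if averting a certain
# death is priced at $1,672, then averting a p-chance of death is naively
# priced at p * $1,672. The function name is illustrative, not from the thread.

def naive_price(p_death: float, price_certain_death: float = 1672.0) -> float:
    """Price implied by scaling the 'one certain death' figure linearly with probability."""
    return p_death * price_certain_death

print(naive_price(1.0))  # 1672.0  -- the figure quoted above
print(naive_price(0.1))  # ~167.20 -- the '10% chance of dying' variant
```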
If you accept that that argument is equally ‘morally convincing’, then you end up in a position where the only reasonable thing to do is to calculate exactly how much harm you actually expect from pressing the button. I’m not going to do this because I’m at work and it seems complicated (what is the disvalue of harm to the social fabric of an online community that’s trying to save the world, and operates largely on trust? perhaps it’s actually a harmless game, but perhaps it’s not; hard to know, since most of the effects would happen down the line).
Additionally, I could just counter-offer a $1,672 counterfactual donation to GiveWell for you to not press the button. I’m not committing to do this, but I might do so if it came down to it.
Are you telling me you don’t think this is a good trade?
Wasn’t totally sure when I wrote it, but now firmly yes.
This whole thread is awesome. This is maybe the best thing that’s happened on LessWrong since Eliezer more-or-less went on hiatus.
Huge respect to everyone. This is really great. Hard but great. Actually it’s great because it’s hard.