The poster failed to mention that the sacrificed children were being sentenced to an eternal fate worse than death.
I really wonder what other LWers will say about this. Would you prefer to give one person huge disutility, or destroy humankind? For extra fun, consider a 1/2^^^3 chance of 3^^^3 disutility to that one person.
Eliezer in particular considers his utility function to be provably unbounded, at least in the positive direction; thinks we have much more potential for pain than pleasure; thinks destroying humankind has finite disutility on the order of magnitude of “billions of lives lost” (otherwise he’d oppose the LHC no matter what); and is an altruist and expected utility consequentialist. Together these seem to imply that he will have to get pretty inventive to avoid destroying humankind.
1/2^^^3 = 2^^(2^^2) = 2^^(2^2) = 2^^4 = 2^2^2^2 = 65536.
1/65536, surely?
Er, yes.
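For anyone who wants to check the arithmetic above, here is a minimal Python sketch of Knuth’s up-arrow notation (the function name `hyper` is an illustrative choice, not anything from the thread):

```python
# Minimal sketch of Knuth's up-arrow notation.
# hyper(a, n, b) computes a with n up-arrows applied to b: one arrow (n = 1)
# is exponentiation, and each extra arrow iterates the level below it.
def hyper(a: int, n: int, b: int) -> int:
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):
        result = hyper(a, n - 1, result)
    return result

assert hyper(2, 2, 4) == 65536   # 2^^4 = 2^2^2^2 = 65536
assert hyper(2, 3, 3) == 65536   # 2^^^3 = 2^^(2^^2) = 2^^4
# So 2^^^3 = 65536 and the probability in question is 1/65536;
# 3^^^3, by contrast, is far too large to ever compute explicitly.
```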
Oh, shit. Well… uhhh… in the least convenient impossible possible world it isn’t! :-)
Come to think of it, I don’t even see how your observation makes the question any easier.
?
1/65536 probability of someone suffering 3^^^3 disutilons? If humanity’s lifespan is finite, that’s far worse than wiping out humanity. (If humanity’s lifespan is infinite, or could be infinite with probability greater than 1/65536, the reverse is true.)
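A rough sketch of this comparison in expected-utility terms, with $U_{\text{ext}}$ standing for the finite disutility of human extinction (a symbol introduced here purely for illustration):

$$\mathbb{E}[\text{disutility of gamble}] = \frac{3\uparrow\uparrow\uparrow 3}{65536} \gg U_{\text{ext}}$$

Dividing $3\uparrow\uparrow\uparrow 3$ by $65536$ barely dents it, so the gamble’s expected disutility dwarfs anything a finite humanity could lose; only an infinite future, or one infinite with probability above $1/65536$, reverses the inequality.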
I’ll take that for an answer. Now let’s go over the question again: if humanity’s lifespan is potentially huge… counting “expected deaths from the LHC” is the wrong way to calculate disutility… the right way is to take the huge future into account… then everyone should oppose the LHC no matter what? Why aren’t you doing it, then? I recall you hoped to live for infinity years.
The very small probability of a disaster caused directly by the LHC is swamped by the possible effects (positive or negative) of increased knowledge of physics. Intervening too stridently would be very costly in terms of existential risk: prominent physicists would be annoyed at the interference (asking why those efforts were not instead dedicated to nuclear disarmament, biodefence, etc.) and could discredit concern with exotic existential risks (e.g. AI) in their retaliation.
Agree with all except the first sentence.
...Okay. You do sound like an expected utility consequentialist; I didn’t quite believe that before. Here’s an upvote. One more question and we’re done.
Your loved one is going to be copied a large number of times. Would you prefer all copies to get a dust speck in the eye, or one copy to be tortured for 50 years?
Hmm? In light of Bostrom and Tegmark’s Nature article?
We don’t know enough physics to last until the end of time, but we know enough to build computers; if I made policy for Earth, I would put off high-energy physics experiments until after the Singularity. It’s a question of timing. But I don’t make such policy, of course, and I agree with the rest of the logic for why I shouldn’t bother trying.
I would postpone high-energy physics as well, but your argument seems mostly orthogonal to the claim you said you disagreed with.
New physics knowledge from the LHC could (with low probability, but much higher probability than a direct disaster) bring about powerful new technology: e.g. vastly more powerful computers that speed up AI development, or cheap energy sources that facilitate the creation of a global singleton. Given the past history of serendipitous scientific discovery and the Bostrom-Tegmark evidence against direct disaster, I think much more of the expected importance of the LHC comes from such indirect technological effects than from the possibility of direct disaster.
Same here. Unless I am missing something (and please do tell me if I am), the knowledge gained by the LHC is very unlikely to do much to increase the rationality of civilization or to reduce existential risk, so the experiment can wait a few decades, centuries, or millennia until civilization has become vastly more rational (and consequently vastly better able to assess the existential risk of doing the experiment).