I’ll take that for an answer. Now let’s go over the question again: if humanity’s lifespan is potentially huge… counting “expected deaths from the LHC” is the wrong way to calculate disutility… the right way is to take the huge future into account… then everyone should oppose the LHC no matter what? Why aren’t you doing it then—I recall you hoped to live for infinity years?
The very small probability of a disaster caused directly by the LHC is swamped by the possible effects (positive or negative) of increased knowledge of physics. Intervening too stridently would be very costly in terms of existential risk: prominent physicists would be annoyed at the interference (asking why such efforts were not instead being dedicated to nuclear disarmament or biodefence, etc.) and could discredit concern with exotic existential risks (e.g. AI) in their retaliation.
Agree with all except the first sentence.
...Okay. You do sound like an expected utility consequentialist; I didn’t quite believe that before. Here’s an upvote. One more question and we’re done.
Your loved one is going to be copied a large number of times. Would you prefer all copies to get a dust speck in the eye, or one copy to be tortured for 50 years?
Hmm? In light of Bostrom and Tegmark’s Nature article?
We don’t know enough physics to last until the end of time, but we know enough to build computers; if I made policy for Earth, I would put off high-energy physics experiments until after the Singularity. It’s a question of timing. But I don’t make such policy, of course, and I agree with the rest of the logic for why I shouldn’t bother trying.
I would postpone high-energy physics as well, but your argument seems mostly orthogonal to the claim you said you disagreed with.
New physics knowledge from the LHC could (with low probability, but much higher probability than a direct disaster) bring about powerful new technology: e.g. vastly more powerful computers that speed up AI development, or cheap energy sources that facilitate the creation of a global singleton. Given the past history of serendipitous scientific discovery and the Bostrom-Tegmark evidence against direct disaster, I think much more of the expected importance of the LHC comes from such indirect effects than from the risk of direct disaster.
Same here. Unless I am missing something (and please do tell me if I am), the knowledge gained by the LHC is very unlikely to do much to increase the rationality of civilization or to reduce existential risk, so the experiment can wait a few decades, centuries, or millennia until civilization has become vastly more rational (and consequently vastly better able to assess the existential risk of doing the experiment).
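To make the expected-value comparison running through this exchange concrete, here is a minimal sketch in Python. Every number in it is a placeholder assumption chosen only to exhibit the structure of the argument (a tiny direct-disaster probability being swamped by larger-probability indirect effects of new physics knowledge); none of the figures are estimates endorsed by anyone in the thread.

```python
# Toy expected-value comparison for the LHC discussion above.
# All probabilities below are illustrative placeholders, not real estimates.

P_DIRECT_DISASTER = 1e-12    # assumed chance the LHC directly causes an existential catastrophe
P_KNOWLEDGE_MATTERS = 1e-4   # assumed chance its physics results end up shifting existential risk
RISK_SHIFT = 1e-3            # assumed size of that shift (could be a reduction or an increase)

FUTURE_VALUE = 1.0           # value of humanity's entire future, normalized to 1

expected_direct_loss = P_DIRECT_DISASTER * FUTURE_VALUE
expected_indirect_effect = P_KNOWLEDGE_MATTERS * RISK_SHIFT * FUTURE_VALUE

print(f"expected loss from a direct LHC disaster:  {expected_direct_loss:.1e}")
print(f"expected effect via new physics knowledge: {expected_indirect_effect:.1e}")
print("indirect term dominates:", expected_indirect_effect > expected_direct_loss)
```

Under these made-up numbers the indirect term is five orders of magnitude larger, which is the shape of the claim that the direct-disaster term is "swamped"; the conclusion is, of course, only as good as the assumed probabilities.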