Lesswrongers will be encouraged to learn that the Torchwood characters were rationalists to a man and woman—there was little hesitation in agreeing to the 456′s demands.
Are you joking? They weren’t rationalists, they were selfish. There is a distinction. They were looking after their own asses and those of their families (note that the complicit politicians specifically excluded their own families’ children from selection, regardless of ‘worth’).
children—or units as they were plausibly referred to
What do you mean by ‘plausibly’? They were referred to as units in order to dehumanize them. Because the people referring to the children as such recognized that what they were doing was abhorrently wrong, and so had to mask the fact, even to themselves, by obscuring the reality of what they were discussing: the wholesale slaughter of their fellows.
… governments paying attention to round up the orphans, the refugees and the unloved—for the unexpectedly rational reason of minimising the suffering of the survivors
That’s laughable. It had nothing to do with minimizing suffering; that was a rationalization. They were doing it for the same reason any government targets the vulnerable: because there are few willing to protect them and argue for them. It was pretty clear, if you watched the show, that the children being targeted were hardly ‘unloved’.
You can’t consider the scenario without considering the precedent that it would set. The notion that there are wide swaths of the population—children, who’ve never even had the opportunity to truly prove themselves or do much of anything—who are completely without worth and sacrificeable at the whim of the government is untenable in a society that values things like individuality, personal autonomy, the pursuit of happiness and, well, human life! They would not be saving humanity, they would be mutilating it.
The poster failed to mention that the sacrificed children were being sentenced to an eternal fate worse than death.
And there is a difference between the actions of the government and the actions of the main character. One of them was fighting the monsters. The others were the monsters’ business partners.
I really wonder what other LWers will say about this. Would you prefer to give one person huge disutility, or destroy humankind? For extra fun consider a 1/2^^^3 chance of 3^^^3 disutility to that one person.
Eliezer in particular considers his utility function to be provably unbounded in the positive direction at least, thinks we have much more potential for pain than pleasure, thinks destroying humankind has finite disutility on the order of magnitude of “billions of lives lost” (otherwise he’d oppose the LHC no matter what), and he’s an altruist and expected utility consequentialist. Together this seems to imply that he will have to get pretty inventive to avoid destroying humankind.
1/2^^^3 = 2^^(2^^2) = 2^^(2^2) = 2^^4 = 2^2^2^2 = 65536.
1⁄65536, surely?
Er, yes.
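(For readers who don’t have the up-arrow arithmetic at their fingertips, here is a minimal sketch, not part of the original exchange, that checks the small case above; the helper name up_arrow is purely illustrative.)

```python
def up_arrow(a, n, b):
    """Compute a (up-arrow^n) b in Knuth's notation, for small arguments only."""
    if n == 1:
        return a ** b      # one arrow is ordinary exponentiation
    if b == 0:
        return 1           # a (up-arrow^n) 0 = 1 by convention
    # a (up-arrow^n) b = a (up-arrow^(n-1)) (a (up-arrow^n) (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 3, 3))   # 2^^^3 = 2^^4 = 2^2^2^2 = 65536
```

Don’t try up_arrow(3, 3, 3): 3^^^3 is a power tower of 3s about 7.6 trillion levels tall (3^^3 = 7,625,597,484,987), so the recursion would never finish.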
Oh, shit. Well… uhhhh.. in the least convenient impossible possible world it isn’t! :-)
Come to think, I don’t even see how your observation makes the question any easier.
?
1⁄65536 probability of someone suffering 3^^^3 disutilons? If humanity’s lifespan is finite, that’s far worse than wiping out humanity. (If humanity’s lifespan is infinite, or could be infinite with probability greater than 1⁄65536, the reverse is true.)
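(A rough sketch of the finite-lifespan half of that comparison, in my own notation rather than the commenter’s, writing D_ext for the finite disutility of extinction:)

```latex
\[
\mathbb{E}[\text{gamble}] \;=\; \frac{3\uparrow\uparrow\uparrow 3}{65536}
\;\gg\; D_{\text{ext}}
\quad\text{for any finite } D_{\text{ext}}.
\]
```

Dividing a number like 3^^^3 by 65536 leaves it essentially unchanged in scale, which is why the gamble dominates any finite loss from extinction.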
I’ll take that for an answer. Now let’s go over the question again: if humanity’s lifespan is potentially huge… counting “expected deaths from the LHC” is the wrong way to calculate disutility… the right way is to take the huge future into account… then everyone should oppose the LHC no matter what? Why aren’t you doing it then—I recall you hoped to live for infinity years?
The very small probability of a disaster caused directly by the LHC is swamped by the possible effects (positive or negative) of increased knowledge of physics. Intervening too stridently would be very costly in terms of existential risk: prominent physicists would be annoyed at the interference (asking why those efforts were not being dedicated to nuclear disarmament or biodefence efforts, etc) and could discredit concern with exotic existential risks (e.g. AI) in their retaliation.
Agree with all except the first sentence.
...Okay. You do sound like an expected utility consequentialist, I didn’t quite believe that before. Here’s an upvote. One more question and we’re done.
Your loved one is going to be copied a large number of times. Would you prefer all copies to get a dust speck in the eye, or one copy to be tortured for 50 years?
Hmm? In light of Bostrom and Tegmark’s Nature article?
We don’t know enough physics to last until the end of time, but we know enough to build computers; if I made policy for Earth, I would put off high-energy physics experiments until after the Singularity. It’s a question of timing. But I don’t make such policy, of course, and I agree with the rest of the logic for why I shouldn’t bother trying.
I would postpone high-energy physics as well, but your argument seems mostly orthogonal to the claim you said you disagreed with.
New physics knowledge from the LHC could (with low probability, but much higher probability than a direct disaster) bring about powerful new technology: e.g. vastly more powerful computers that speed up AI development, or cheap energy sources that facilitate the creation of a global singleton. Given the past history of serendipitous scientific discovery and the Bostrom-Tegmark evidence against direct disaster, I think much more of the expected importance of the LHC comes from the former than from the latter.
Same here. Unless I am missing something (and please do tell me if I am), the knowledge gained by the LHC is very unlikely to do much to increase the rationality of civilization or to reduce existential risk, so the experiment can wait a few decades, centuries or millennia till civilization has become vastly more rational (and consequently vastly better able to assess the existential risk of doing the experiment).