But one may disagree on “Omelas and Space Colonization”, i.e. on how many lives worth living are needed to “outweigh” or “compensate for” miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion).
A superintelligent Singleton (e.g., FAI) can guarantee a minimum standard of living for everyone who will ever be born or created, so I don’t understand why you think astronomical population expansion inevitably produces miserable lives.
Also, I note that space colonization can produce an astronomical number of QALYs even assuming no population growth, by letting currently existing people continue to live after all the negentropy in our solar system has been exhausted.
Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces, in expectation, an astronomical number of extremely miserable, tortured lives.
Lots of dystopian future scenarios are possible. Here are some of them.
How many happy people for one miserable existence? I take the zero option very seriously because I don’t think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, while (anticipated) miserable existence clearly does. I don’t think it would have been any intrinsic problem whatsoever had I never been born; but it clearly would have been a problem had I been born into miserable circumstances.
But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it’s not clear yet that the value of the future is net positive. If the number of happy people you require for one miserable existence is sufficiently great and/or if dystopian scenarios are sufficiently likely, the future will be negative in expectation. Beware optimism bias, illusion of control, etc.
Even Brian Tomasik, the author of that page, says that if one trades off pain and pleasure at ordinary rates the expected happiness of the future exceeds the expected suffering, by a factor of between 2 and 50.
The 2-50 bounds seem reasonable for EV(happiness)/EV(suffering) using normal people’s pleasure-pain exchange ratios, which I think are insane. :) Something like accepting 1 minute of medieval torture in exchange for a few days or weeks of good life.
Using my own pleasure-pain exchange ratio, the future is almost guaranteed to be negative in expectation, although I maintain ~30% chance that it would still be better for Earth-based life to colonize space to prevent counterfactual suffering elsewhere.
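The disagreement above is, at bottom, arithmetic: whether the future is net positive in expectation depends on the exchange ratio you apply before comparing expected happiness with expected suffering. A minimal sketch (all numbers are hypothetical placeholders, not estimates from this discussion):

```python
# Illustrative sketch: how the pleasure-pain exchange ratio flips the
# sign of the future's net expected value. All figures are hypothetical.

def net_value(ev_happiness, ev_suffering, exchange_ratio):
    """Net expected value when one unit of suffering is weighted as
    `exchange_ratio` units of happiness."""
    return ev_happiness - exchange_ratio * ev_suffering

# Suppose expected happiness exceeds expected suffering 10-fold in raw,
# unweighted units (within the 2-50 range mentioned above).
ev_h, ev_s = 10.0, 1.0

print(net_value(ev_h, ev_s, 1))    # ordinary 1:1 weighting -> 9.0 (positive)
print(net_value(ev_h, ev_s, 100))  # suffering-focused weighting -> -90.0 (negative)
```

With any raw ratio inside the 2-50 range, a sufficiently suffering-focused exchange ratio (here, anything above 10:1) makes the expected value negative, which is why the two commenters can accept the same empirical bounds and still disagree about the sign.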
It is worth pointing out that by ‘insane’, Brian just means ‘an exchange rate that is very different from the one I happen to endorse.’ :-) He admits that there is no reason to favor his own exchange rate over other people’s. (By contrast, some of these other people would argue that there are reasons to favor their exchange rates over Brian’s.)
Also, still others (such as David Pearce) would argue that there are reasons to favor Brian’s exchange rate. :)
That’s incorrect. David Pearce claims that pains below a certain intensity can’t be outweighed by any amount of pleasure. Both Brian and Dave agree that Dave is a (threshold) negative utilitarian whereas Brian is a negative-leaning utilitarian.
Not so sure. Dave believes that pains have an “ought-not-to-be-in-the-world-ness” property that pleasures lack. And in the discussions I have seen, he indeed was not prepared to accept that small pains can be outweighed by huge quantities of pleasure.

Brian was oscillating between NLU and NU. He recently told me he found the claim convincing that such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness were all equally good.
Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone’s lights, but good enough? So long as we don’t create more experience below “hedonic zero” in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards—perhaps even radically upwards—if recalibration is done safely, intelligently and conservatively—a big “if”, for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.
Is this too rosy a scenario?
Regarding “people’s ordinary exchange rates”: I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian’s than to yours. In cases they (IMO confusedly) think of as “egoistic”, the rates may be closer to yours. This provides an argument that people would converge on Brian’s rates once that confusion is cleared up.
Which cases did you have in mind?
People generally don’t altruistically favor euthanasia for pets with temporarily painful but easily treatable injuries (where recovery could be followed with extended healthy life). People are not eager to campaign to stop the pain of childbirth at the expense of birth. They don’t consider a single instance of torture worse than many deaths depriving people of happy lives. They favor bringing into being the lives of children that will contain some pain.
Thanks for the explanation. I was thrown by your use of the word “inevitable” earlier, but I think I understand your position now. (EDIT: Deleted the rest of this comment, which makes a point that you were already discussing with Nick Beckstead.)