Stuart, it sounds like you think that the life of the typical animal, and of the typical human in history, were not worth living—you’d prefer that they had never existed. Since you seem to think your own life worth living, you must see people like you as a rare exception, and may be unsure if your existence justifies all the suffering your ancestors went through to produce you. And you’d naturally be wary of a future of descendants with lives more like your ancestors’ than like your own. What you’d most want from the future is to stop change enough to ensure that people very much like you continue to dominate.
If we conceive of “death” broadly, then pretty much any competitive scenario will have lots of “death”, if we look at it on a large enough scale. But this hardly implies that individuals will often feel the emotional terror of an impending death—that depends far more on framing and psychology.
When I read this, a part of my brain figuratively started jumping up and down and screaming “False Dichotomy! False Dichotomy!”
I’d prefer that their lives were better, rather than that there were more of them.
What I’d most want from the future is change in many directions (more excitement! more freedom! more fun!), but not in the direction of low-individual-choice, death-filled worlds (with possibly a lot of pain). I’d eagerly embrace a world without mathematicians, without males, without academics, without white people (and so on down the list of practically any of my characteristics), without me or any copy of me, in order to avoid the Malthusian scenario.
Even if you and I might disagree on trading number/length of lives for some measure of quality, I hope you see that my analysis can help you identify policies that might push the future in your favored direction. I’m first and foremost trying to predict the outcomes of a low regulation scenario. That is the standard basis for analyzing the consequences of possible regulations.
Hang on, yesterday you were telling me that there’s very little anyone could do to make a real difference to the outcome. So why tell Stuart that your analysis could be helpful in bringing about a different outcome?
The issue is the size of the influence you can have. Even if you only have a small influence, you still want to think about how to use it.
Certainly. I don’t dispute the utility of the research (though I do sometimes feel that it is presented in ways that make the default outcome seem more attractive).
Something I’ve not been clear about (I think you might have changed your thinking about this):
Do you see your Malthusian upload future as something that we should work to avoid, or work to bring about?
I notice that Robin avoided answering your question.
For what it’s worth, if OvercomingBias.com posts are taken literally, then Hanson has declared the scenario to be desirable, and so, assuming rudimentary consequentialist behavior, some degree of “prefer to work to bring about” is implied. I’m not sure exactly how much of what he writes on OB is for provocative effect rather than sincere testimony. I’m also not sure how much such advocacy is based on “sour grapes”: concluding that the scenario is inevitable and then trying to convince yourself that it is what you wanted all along.
Yeah, the reason I asked is that he’s been evasive about it before and I wanted to try to pin down an actual answer.
If you want more precise answers you have to ask more precise questions. Whether or not I favor any particular thing depends on what alternatives it is being compared with.
Okay, compare it to life now.
That isn’t a realistic choice. If you mean to imagine that either 1) humanity continues on as it has without ems arriving, or 2) ems arrive as I envision, then we’d be adding trillions of ems with lives worth living onto the billions of humans who would exist anyway with a similar quality of life. That sounds good to me.
Thanks, that tells me what I wanted to know.
(FWIW, I didn’t mean that last one as a choice, just a comparison of two situations happening at different times, but I don’t think that really matters.)
Hanson sees moral language as something he should work to avoid. :D
s/should/would prefer/ or whatever.
People tend to assume they have more personal influence on these big far topics than they do. We might be able to make minor adjustments to what happens when, but we just don’t get to choose between very different outcomes like uploads vs. AGI vs. nothing. We might work to make an upload world happen a bit sooner, or via a bit more stable a transition.
What’s your take on the first mover advantage that EY is apparently hoping for?
Why does it seem like Stuart considers his life worth living?
Because he’s still alive. I’m not sure that’s enough evidence though; it started me thinking about whether I found my life worth living, and I just don’t know. (It’s a bad subject to get me thinking about; I’ll try to stop it ASAP after this post.)
Right now, I’m a fat middle-aged translator who has to translate boring technical stuff. OTOH, I do have a sweet husband and a lovely cat, and the people who pay me for those translations must think they add some value to the world.
In the 56 years leading up to that, do the positives outweigh the negatives, taking into account my tendency to remember the negatives (especially the embarrassing ones) much better? I can’t tell, but if someone were proposing to create another one of me, to live the life I have lived so far, I’d say: “Why on earth would you want to do that?”
That isn’t to say I’m ready to commit suicide though. I do have my husband and cat, I have a holiday to look forward to, and death is probably a very nasty experience, especially a DIY one.
Revealed preference.
But some people stay alive to make the world a better place, despite considering their life not worth living. I know several people with that view.
As a slight aside: I’ve been arguing recently that we should use moral theories that are not universally applicable, but have better results than existing universal theories when they are applicable.
In this case, you correctly point out that many moral theories have conflicts between their evaluation of the value of past lives (possibly negative) and their valuation of present existence (positive). Personally, I answer this by saying my moral theory doesn’t need to make counterfactual choices about things in the past. It’s enough that it be consistent about the future. I think that’s a plausible answer, here, to the question of whether “my existence justifies the past suffering of my ancestors”.