Reading about this tragic and horrifyingly wasteful dystopia really solidifies my hope that the future goes more like what Robin Hanson has envisioned.
Accordingly I must raise my estimation of the threat Robin Hanson poses to humanity. He has persuaded at least one person to advocate his Malthusian hell!
Meh, I wasn’t advocating it, just saying it would be way better than this scenario. Either n humans burning the cosmic commons for tacky IRL video games and sex with strangers, or a billion times n humans living worthwhile, productive lives.
It just seems obvious when you do the math.
I don’t share your premises, including what seems to be the premise that the agents surviving in Hansonian Hell are humans in any meaningful sense.
Your expression of preference here cannot be credibly described as ‘doing math’.
I guess one person’s tacky IRL video game is another’s worthwhile productive life.
I’m not saying Ishtar’s life isn’t worth living, just that it’s tragic that so many also worthwhile lives are being prevented from existing so that she can play her silly games.
Also, it isn’t productive, it is consumptive. She is simply consuming resources provided by others.
Right. My point was that you view it as “her silly games”, but they might be precisely what make her life worth living. One might just as well say “It’s tragic that so many silly lives exist so that Ishtar cannot live her worthwhile life”.
Not so much a “Who are you to say” style rejection, as much as noting that it’s not obvious, math or no math.
Ok, so would you prefer to pop all but 7 humans out of existence (assume that the process is painless) in exchange for the remaining humans experiencing Ishtar-level happiness?
If you just mean to say that terminal values are arbitrary, so some possible minds might prefer the Ishtar scenario, then that’s fine, so far as it goes. But if you take multiplication seriously, then it’s insanely hard to make the case for this being a genuine eutopia.
You seem to be forgetting the old adage “Shut up and don’t multiply by a count unless it is a count of something that you value linearly with said count”. Very few people value the existence of Hansonian Hell-bots in direct proportion to the number of Hell-bots that are ‘alive’. Of those that do, it isn’t clear that they value the existence of these creatures in that condition positively. So taking things ‘seriously’ here has more to do with what preferences people have than with ‘multiplication’.
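To make the disagreement about “doing the math” concrete, here is a minimal toy sketch. The populations, per-life welfare figures, and both aggregation rules below are invented for illustration (neither is claimed to be anyone’s actual utility function); the only point is that the arithmetic verdict depends on which aggregation rule you have already chosen.

```python
# Toy illustration only: all figures and both aggregation rules are made up.
import math

ISHTAR_WORLD = {"population": 1e9, "welfare_per_life": 10.0}   # few people, very happy
HANSON_WORLD = {"population": 1e18, "welfare_per_life": 0.1}   # vastly more people, near subsistence

def linear_total(world):
    """'Shut up and multiply': value scales linearly with the count of lives."""
    return world["population"] * world["welfare_per_life"]

def saturating_total(world, scale=1e9):
    """A non-linear alternative: additional lives count, but with diminishing returns."""
    return math.log1p(world["population"] / scale) * world["welfare_per_life"]

for name, world in [("Ishtar-style world", ISHTAR_WORLD), ("Hansonian upload world", HANSON_WORLD)]:
    print(f"{name}: linear={linear_total(world):.2e}, saturating={saturating_total(world):.2f}")

# Linear aggregation favors the crowded world by about seven orders of magnitude;
# the saturating rule reverses the verdict. The multiplication settles nothing
# until the aggregation rule itself is agreed on.
```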
A lot of people live at subsistence levels. They aren’t much less happy than you or me on average. Their lives are very well worth living by their own standards. And they would likely be better adapted to their environment than we are, so there’s good reason to believe they would be better off than 1st-worlders are now. And the denizens of this alleged eutopia don’t seem much happier than some people I know now.
And as long as we’re focusing on the preferences of people in the world now, how many of them do you think would approve of the implicit AI autocracy, reproductive central-planning, and hedonism of this world?
This is a dream-world for the nerdy/polyamorous/transhumanist folks who are common on LW but rare everywhere else (except maybe reddit).
I do not think the parent is so obviously wrong as to deserve being downvoted to −4 without anyone even mentioning what’s wrong with it. (I actually agree with much of it. There was even a Robin Hanson post about the fact that “the poor also smile”; I can’t link to it because Overcoming Bias is blacked out today.)
I downvoted the grandparent for making unintuitive claims about the money:happiness relation without presenting evidence (my understanding is that subsistence-level income does have a significant and negative effect on happiness, although the plot of happiness over income levels off quickly after a basic level of financial security is achieved), for making sketchy claims about adaptation without evidence, for conflating approval with preference (particularly glaring because the point about happiness/income above only works without conflating the two), and for the entirely unnecessary swipe at perceived LW norms in the last sentence.
Oh, and by way of disclaimer, I didn’t find the original story especially compelling as a vision of utopia.
Isn’t it equally tragic that in the real world the resources that are currently maintaining my life aren’t instead being used to support other, more worthwhile, lives? (Or, well, more tragic, since it’s actually real?)
Isn’t it equally tragic that in the real world the resources that are currently maintaining my life aren’t instead being used to support other, more worthwhile, lives?
My argument isn’t that each life which might have been is more valuable, but that they are when added up.
First of all, uploads aren’t yet possible, so far fewer lives worth living could be supported with your resources in the first place. More importantly, since most resources aren’t being centrally distributed by all-powerful machine gods, we would have to tax your earnings. This involves the infamous “leaky buckets” problem acknowledged by all utilitarians. People engage in tax avoidance behaviors, work fewer hours, hide income, and some money which is captured is spent on overhead. These problems don’t exist when all resources are being created and distributed by a central depot.
Furthermore, the ability to actually get those resources to the people in need is in doubt, due to grabby governments, warlords, logistical problems, etc.
But yes, overall I would say it is tragic that some of our 1st-World resources aren’t going to save marginal lives.
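As a purely illustrative sketch of the “leaky buckets” point above, here is a toy calculation; both leakage rates are assumptions chosen for the example, not estimates from any study.

```python
# Toy "leaky bucket" arithmetic; both loss rates are illustrative assumptions.
gross_tax_raised = 100.0   # resources taken from the donor, in arbitrary units
avoidance_loss   = 0.20    # assumed: hidden income, fewer hours worked, etc.
overhead_loss    = 0.15    # assumed: administration and delivery costs

delivered = gross_tax_raised * (1 - avoidance_loss) * (1 - overhead_loss)
print(delivered)  # 68.0 units actually reach the recipients

# A central allocator that creates and hands out resources directly, rather than
# taxing anyone, sidesteps both loss terms; that asymmetry is the point being made.
```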
I don’t think this is the alternative he was proposing. I think the more relevant analogy would be our 1st-World resources going to produce extra marginal barely-worth-living lives in the third world.
What do you think people should be doing? In a post scarcity economy, it seems to me that a lot of what remains to be done is keeping each other entertained.
My problem isn’t particularly with Ishtar’s pastimes, but with the overall system. I’m arguing that Hanson’s upload society would be better than this because it could support many more lives worth living in total, and thus more total utility, than this alleged eutopia.
So you’d be happy with this world if it all existed inside a small piece of the galactic computronium-pile, and there was lots more of it? I actually hadn’t considered that, because I just assume all post-Singularity futures are set inside the galactic computronium-pile unless explicitly stated otherwise.
What math is that? Are you talking about the number of lives in any given century—effectively judging the situation as if time-periods were sentient observers to be happy or unhappy about their current situation?
Do you have any reason to believe that maximum diversity in human minds (i.e. allowing lots of different humans to exist) would be best satisfied by cramming them all in the same century, as densely as possible?
A trillion lives all crammed in the same century aren’t inherently more worthwhile than a trillion lives spread over a hundred centuries—any more than 10 people forced to live in the same flat are inherently more precious than 10 people having separate apartments. Do you have any reason to prefer the former over the latter? Surely there’s some ideal range where utility is satisfied in terms of people density spread over time and location.
You are misunderstanding my argument.
When you use up negentropy, it is used up for good, and there is a finite amount in each section of the universe. The amount being used on Ishtar could theoretically support good lives of billions of upload minds (or a smaller but still huge number of other possible lives). This isn’t a matter of a long and narrow future or a short and wide future, but of how many total, worthwhile lives will exist.
As for quality, there seems to be no reason why simulated minds can’t be as happy as, or even happier than, Ishtar.
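For what it’s worth, here is a back-of-envelope version of the resource claim above. Every figure is a placeholder assumption (nobody knows what an upload would actually cost to run), and the argument only needs the ratio to be very large.

```python
# Back-of-envelope sketch; both power figures are placeholder assumptions.
BIO_HUMAN_POWER_W = 1e4    # assumed: one embodied person plus habitat, vehicles, games, etc.
UPLOAD_POWER_W    = 1e-2   # assumed: one simulated mind on mature far-future hardware

uploads_per_bio_human = BIO_HUMAN_POWER_W / UPLOAD_POWER_W
print(f"Uploads supportable on one embodied person's budget: {uploads_per_bio_human:.0e}")

# With these invented figures the ratio is about a million to one; the comment
# above assumes it runs into the billions. The argument survives as long as the
# ratio is very large, whatever the exact number turns out to be.
```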
You know you’re looking at a dystopia when even Hanson’s malthusian hell world looks good in comparison.
(Agree with the sentiment, though.)
It’s one world, or one solar system, and for all we know they’ve found a way around entropy—or this could all be a highly realistic simulation.
But even if it isn’t, I consider this option far better than Hanson’s dystopia. Its main flaw is inefficiency, which can be fixed.
Its main characteristic is inefficiency.
There’s little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there’s no way to be certain, from this small snapshot, whether it is inefficient or not.
I would instead say that its main flaw is that the machines allow too much of the “fun” decision to be customized by the humans. We already know, from cognitive psychology, that humans (whom I assume from their behavior to have intelligence comparable to ours) aren’t very good at assessing what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, and then make even worse decisions in their misery.
OTOH, nothing in that story requires that the humans are making unaided assessments. The protagonist’s environment may well have been suggested by the system in the first place as its best estimate of what will maximize her enjoyment/fulfilment/fun/Fun/utility/whatever, and she may have said “OK, sounds good.”
I’m afraid I’d prefer it that way. Having the machines decide what’s fun for us would likely lead to wireheading. Or am I missing something?
[off to read the Fun Theory sequence in case this helps me find the answer myself]
Depends on the criteria the machines are using to evaluate fun, of course—it needn’t be limited to immediate pleasure, and in fact a major point of the Fun Theory sequence is that immediate pleasure is a poor metric for capital-F Fun. Human values are complex and there are a lot of possible ways to get them wrong, but people are pretty bad at maximizing them too.
Also known as fun.
Efficiency in fun-creation.
Efficiency in doing something that doesn’t match my utility function seems fairly pointless, really. An abuse of the word, even.
Yet the horror is that it’s what you might catch yourself worshiping down the line, forgetting to enjoy any of it. Just take a look at the miserable and aimless workaholics out there: as long as they can still handle whatever it is they’re doing, their boss will happily exploit them. Do you think your brain would care more about you if you set “efficiency” as its watchword?
Yup, if we set out to build a system that maximized our ability to enjoy life, and we ended up with a system in which we didn’t enjoy life, that would be a failure.
If we set out to build a system with some other goal, or with no particular goal in mind at all, and we ended up with a system in which we didn’t enjoy life, that’s more complicated… but at the very least, it’s not an ideal win condition. (It also describes the real world pretty accurately.)
I’m curious: do you have a vision of a win condition that you would endorse?
See more in my latest post; I’ll be adding to it.
http://lesswrong.com/r/discussion/lw/9g0/placeholder_against_dystopia_rally_before_kant/
You best be sarcastic. Waste is good! It’s signaling, it’s ease, it’s a lack of tension, it’s life’s little luxuries that you’d wish back if they were all taken from you simultaneously, without caring much about the “efficiency” of it.
I wasn’t being sarcastic.
No, waste is by definition not good. Resource usage can be good, but the world of this story makes me pessimistic about how it is being done. It seems like the AI gods of this world have engineered a “post-scarcity” society with population control to keep resources per person extremely high—which enables this video-game-like lifestyle for people who want it. Millions of lives could be supported with the resources centrally allocated to Ishtar alone. That is a horribly anti-egalitarian form of communism.
Admittedly, it is possible that this takes place within a simulation, but that is never stated, and we have reason to believe that it isn’t true. For example, the author mentions that Ishtar knows the AI-gods won’t let her die even if she crashes, implying that this is her physical body.
Are you sure you want them to pop into existence? Why? I just can’t understand! Why must there be more people? So that you can have more smiley faces? That’s the road to paper-clipping!
Well, yes. Several popular versions of utilitarianism lead by a fairly short path to what’s probably the first paperclipping scenario I ever read about, although it’s not usually described in those terms.
Coming up with a version of utilitarianism that doesn’t have those problems or an equally unintuitive complement to them is harder than it looks, though.
Why does anyone value anything? If we could painlessly pop all but 70 human beings out of existence but make the ones who remain much happier (say, 10x as happy), would you do it? Why not? Why must there be more people?
That’s easy; we have to look at both cases in some detail.
- Forking over a part of our genes, mind, society, and culture to create new beings with new complexity, knowing that less-than-optimal conditions await them;
- versus refraining from erasing all of the extant and potential value and complexity of current beings, here and now, for a very mixed blessing (increasing the smileyness of the faces while decreasing the number of tiles).

The second action has much greater utility, and is not very much like the first at all. So we could easily do the second while avoiding the first, and be consistent in our values and judgment.
Sorry, I’m a bit high.