You know you’re looking at a dystopia when even Hanson’s Malthusian hell world looks good in comparison.
(Agree with the sentiment, though.)
It’s one world, or one solar system, and for all we know they’ve found a way around entropy—or this could all be a highly realistic simulation.
But even if it isn’t, I consider this option far better than Hanson’s dystopia. Its main flaw is inefficiency, which can be fixed.
Its main characteristic is inefficiency.
There’s little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there’s no way to be certain, from this small snapshot, whether it is inefficient or not.
I would instead say that its main flaw is that the machines allow too much of the “fun” decision to be customized by the humans. We already know, with the help of cognitive psychology, that humans (whom I assume by their behavior to have intelligence comparable to ours) aren’t very good at making assessments about what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, and then make even worse decisions in their misery.
OTOH, nothing in that story requires that the humans are making unaided assessments. The protagonist’s environment may well have been suggested by the system in the first place as its best estimate of what will maximize her enjoyment/fulfilment/fun/Fun/utility/whatever, and she may have said “OK, sounds good.”
I’m afraid I’d prefer it that way. Having the machines decide what’s fun for us would likely lead to wireheading. Or am I missing something?
[off to read the Fun Theory sequence in case this helps me find the answer myself]
Depends on the criteria the machines are using to evaluate fun, of course—it needn’t be limited to immediate pleasure, and in fact a major point of the Fun Theory sequence is that immediate pleasure is a poor metric for capital-F Fun. Human values are complex and there are a lot of possible ways to get them wrong, but people are pretty bad at maximizing them too.
Also known as fun.
Efficiency in fun-creation.
Efficiency in doing something that doesn’t match my utility function seems… fairly pointless, really. An abuse of the word, even.
Yet the horror is that it’s what you might catch yourself worshiping down the line, forgetting to enjoy any of it. Just take a look at the miserable and aimless workaholics out there: as long as they can still handle whatever it is they’re doing, their boss will happily exploit them. Do you think your brain would care more about you if you set “efficiency” as its watchword?
Yup, if we set out to build a system that maximized our ability to enjoy life, and we ended up with a system in which we didn’t enjoy life, that would be a failure.
If we set out to build a system with some other goal, or with no particular goal in mind at all, and we ended up with a system in which we didn’t enjoy life, that’s more complicated… but at the very least, it’s not an ideal win condition. (It also describes the real world pretty accurately.)
I’m curious: do you have a vision of a win condition that you would endorse?
See more in my latest post; I’ll be adding to it.
http://lesswrong.com/r/discussion/lw/9g0/placeholder_against_dystopia_rally_before_kant/