There’s little indication of how the utopia actually operates at a higher level, only how the artificially and consensually non-uplifted humans experience it. So there’s no way to be certain, from this small snapshot, whether it is inefficient or not.
I would instead say that its main flaw is that the machines allow too much of the “fun” decision to be customized by the humans. We already know, from cognitive psychology, that humans (whom I assume, judging by their behavior, to have intelligence comparable to ours) aren’t very good at assessing what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, and then make even worse decisions in their misery.
OTOH, nothing in that story requires that the humans are making unaided assessments. The protagonist’s environment may well have been suggested by the system in the first place as its best estimate of what will maximize her enjoyment/fulfilment/fun/Fun/utility/whatever, and she may have said “OK, sounds good.”
I’m afraid I’d prefer it that way, with the “fun” decision left to the humans. Having the machines decide what’s fun for us would likely lead to wireheading. Or am I missing something?
[off to read the Fun Theory sequence in case this helps me find the answer myself]
Depends on the criteria the machines are using to evaluate fun, of course; it needn’t be limited to immediate pleasure, and in fact a major point of the Fun Theory sequence is that immediate pleasure is a poor metric for capital-F Fun. Human values are complex and there are a lot of possible ways to get them wrong, but people are pretty bad at maximizing them too.