I would instead say that its main flaw is that the machines allow the humans to customize too much of the “fun” decision themselves. We already know, with the help of cognitive psychology, that humans (whom I assume, from their behavior, to have intelligence comparable to ours) aren’t very good at assessing what they really want. This could lead to a false dystopia if a significant proportion of humans choose their wants poorly, become miserable, and then make even worse decisions in their misery.
I’m afraid I’d prefer it that way. Having the machines decide what’s fun for us would likely lead to wireheading. Or am I missing something?
[off to read the Fun Theory sequence in case this helps me find the answer myself]
Depends on the criteria the machines are using to evaluate fun, of course; it needn’t be limited to immediate pleasure, and in fact a major point of the Fun Theory sequence is that immediate pleasure is a poor metric for capital-F Fun. Human values are complex, and there are a lot of possible ways to get them wrong, but people are pretty bad at maximizing them on their own, too.