I can’t believe it took me five years to think to comment on this, but judging from the thread, nobody else has thought of it either.
If Stephen’s utility function actually includes a sufficiently high-weighted term for Helen’s happiness—and vice versa—then both Stephen and Helen will accept the situation and be happy, as their partner would want them to be. They might still be angry that the situation occurred, and still want to get back together, not out of some noble sacrifice to honor the symbolic or signaling value of love, but because they actually care about each other.
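To put a toy model behind that claim (the weights and numbers here are entirely invented, not from the story): if Stephen’s utility adds a term for Helen’s happiness, a sufficiently large weight makes the post-separation world score higher for him even after counting his grief against it.

```python
# Toy model: utility = own happiness + w * partner's happiness,
# where w is the weight Stephen places on Helen's wellbeing.
# All numbers are invented for illustration.

def utility(own, partner, w):
    return own + w * partner

# Before: both moderately happy together.
# After: Stephen grieves (own drops), but Helen is much happier.
for w in (0.8, 0.2):
    before = utility(own=7, partner=7, w=w)
    after = utility(own=6, partner=10, w=w)
    print(w, after > before)  # w=0.8 -> True; w=0.2 -> False
```

Which is the whole caveat: below some weight on Helen’s happiness, the grief dominates and the acceptance conclusion no longer follows.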
Ironically, the only comment so far that even comes close to considering Stephen’s utility in relation to what’s happening to Helen is one that proposes her increased happiness would cause him pain, which is not the shape I would expect from a utility function that can be labeled “love” in the circumstances described here.
None of that makes this a successful utopia, of course, nor do I suggest that Stephen is overreacting in the moment of revelation—you can want somebody else to be happy, after all, and still grieve their loss. But, dang it, the AI is right: the human race will be happier, and there’s nothing horrific about that fact, at least to people whose utility function weights their own or others’ happiness sufficiently heavily relative to their preference to be happy in a different way.
(Which of course means that this comment is actually irrelevant to the main point of the article, but it seemed worth raising anyway: the relevance of others’ happiness as part of one’s utility function gets overlooked often enough in discussions here as it is.)
IIRC, there was an earlier discussion which conceded the point that the human race will be happier in this scenario than in the scenario with no AI; the story depends on pumping the intuition that there’s some unrealized and undescribed third possibility which is so much better than either of those scenarios that choosing either of them constitutes a tragic ending.
IIRC, the author’s response to people endorsing this scenario simply because in it people are happier without their opposite-sex partners (and because their opposite-sex partners are happier without them, as you say) was to mutter something deprecating about satisficers vs. optimizers.
Full disclosure: I find that this particular intuition pump leaves me cold, perhaps because I’m in a same-sex relationship. I’ve no doubt we could construct an analogous pump that would intuitively horrify me, and I might react to it differently.
Let’s modify the scenario a bit.
The dubiously friendly AI, instead of creating artificial significant others, merely uses its computational ability to figure out that if it broke up all existing relationships and paired people with new partners, everyone would be happier. Again, it then separates everyone in such a way that the former partners could not get back together in a reasonable amount of time. (Complete sex segregation wouldn’t work here, but you could put several new pairs on the same planet as long as each person’s former partner is on a different planet.) Utopia or not?
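For what it’s worth, the re-pairing the AI is imagined to compute here is a small assignment problem. A brute-force toy sketch, with invented happiness scores:

```python
from itertools import permutations

# Invented scores: happiness[(x, y)] = how happy x and y would be together.
# A is currently with C, and B with D.
happiness = {
    ("A", "C"): 6, ("A", "D"): 9,
    ("B", "C"): 9, ("B", "D"): 6,
}

# Brute force over all re-pairings of {A, B} with {C, D}.
best = max(
    permutations(["C", "D"]),
    key=lambda ys: sum(happiness[(x, y)] for x, y in zip(["A", "B"], ys)),
)
print(list(zip(["A", "B"], best)))  # [('A', 'D'), ('B', 'C')]: swapping wins
```

At scale this would need a real assignment algorithm (e.g. Hungarian) rather than brute force, but the objective is the same: maximize summed happiness over pairings.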
OK, I acknowledge receipt of this modified scenario.
And… what?
And is it good or bad? There’s more than one objection to the original scenario, and the point of this is 1) to separate them out, and 2) to make them more obvious.
Or a second scenario: The AI doesn’t try to create new relationships at all, whether with artificial or natural partners. Instead it just breaks up all relationships, and then wireheads everyone. It calculates that the utility gained from the wireheading is greater than the utility lost in breaking up the relationships. Is this good or bad?
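The AI’s decision rule in this second scenario is just a net-utility comparison; schematically (all quantities invented):

```python
# Toy version of the AI's ledger, purely illustrative numbers.
utility_lost = 40    # aggregate value of the relationships it breaks up
utility_gained = 65  # aggregate value added by wireheading everyone

# Decision rule: act iff the change is net-positive.
if utility_gained - utility_lost > 0:
    print("wirehead everyone (net +%d)" % (utility_gained - utility_lost))
```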
(I would suggest that putting people with artificial partners is sort of like wireheading in that people might not want to be put under such circumstances, even though once they are already in such circumstances they might be happier.)
This seems to stretch the notion of wireheading beyond usefulness. Many situations exist where we might endorse options retrospectively that we wouldn’t prospectively, whether through bias, limited information, random changes in perspective, or normal lack of maturity (“eew, girls have cooties!”). Relatively few of them rely on superstimuli or break our goal structure in a strong way.