The reason I saw Friendship Is Optimal as a utopia was that it seemed like much of the value in the world was preserved, and many people seemed satisfied with the result. Like, if I could choose between that world and this world as it currently is, I would choose that world. Similarly with the world you describe.
This is different from saying it’s the best possible world. It’s just, like, a world that requires me to compromise on comparatively few of the values I hold dear, relative to the expected outcome of this world.
This may come down to differing definitions of utopia/dystopia. So I’d recommend against using those words in future replies.
Thanks, I believe you are right. I really regret how much time and how many resources are wasted arguing over the extension / reference of a word.
I’d like to remark, though, that I was just trying to explain what I see as problematic in FiO. I wouldn’t only say that its conclusion is suboptimal (I believe it is bad, and many people would agree); I also think that, given what CelestAI can do, they got lucky (though “lucky” is not quite the right word for a narrative) that it didn’t end up in worse ways.
As I point out in a reply to shminux, I think it’s hard to see how an AI can maximize B’s preferences in an aligned way if B’s preferences and beliefs are inconsistent (temporally or modally). If B actually regards sim-B as another self, then B’s sacrifice is required; I believe that people who bite this bullet will tend to agree that FiO ends well, even if they dislike “the process”.