My impression is that the optimistic idea is that people have broadly similar, or at least compatible, fundamental values, and that if people disagree strongly in the present, this is due to misunderstandings which would be extrapolated away. We all hold values like love, beauty and freedom, so the future would hold these values.
I can think of various pessimistic outcomes: for instance, one of our most fundamental values turns out to be the desire not to be ruled over by an AI, so the AI immediately turns itself off; or status games make fulfilling everyone’s values impossible.
Anyway, since I’ve heard a lot about CEV (on LW) and empathic AI (when FAI is discussed outside LW), and little about any other idea for FAI, it seems that CEV is a Schelling point, regardless of whether or not it should be.
Personally, I’m surprised I haven’t heard more about a ‘Libertarian FAI’ that implements each person’s volition separately, as long as it doesn’t non-consensually affect anyone else. Admittedly, there are problems (for instance, what limits should be placed on people creating sentient beings, to prevent contrived infinite torture scenarios), but given the libertarian bent of transhumanists I would have thought someone would be advocating this sort of idea.
Anyway, since I’ve heard a lot about CEV … it seems that CEV is a Schelling point
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.
But even ignoring this, CEV is just too vague to be a Schelling point. It’s essentially defined as “all of what’s good and none of what’s bad” which is suspiciously close to the definition of God in some theologies. Human values are simply not that consistent—which is why there is an “E” that allows unlimited handwaving.
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.
I realise that it’s not a function of what I know; what I meant is that, given how much I have heard about CEV, a lot of people seem to support it.
Still, I think I am using ‘Schelling point’ wrongly here—what I mean is that maybe CEV is something people could agree on with communication, like a point of compromise.
Human values are simply not that consistent—which is why there is an “E” that allows unlimited handwaving.
Do you think that it is impossible for an FAI to implement CEV?
A Schelling point, as I understand it, is a choice that has value only because of the network effect. It is not “the best” by some criterion, and it’s not a compromise; in some sense it’s an arbitrary choice among equal candidates. It’s just that people’s minds are drawn to it.
In particular, a Schelling point is not something you agree on—in fact, it’s something you do NOT agree on (beforehand) :-)
Do you think that it is impossible for an FAI to implement CEV?
I don’t know what CEV is. I suspect it’s an impossible construct. It came into being as a solution to a problem EY ran his face into, but I don’t consider it satisfactory.
It’s, you know, human values.