Anyway, since I’ve heard a lot about CEV … it seems that CEV is a Schelling point.
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.
But even ignoring this, CEV is just too vague to be a Schelling point. It’s essentially defined as “all of what’s good and none of what’s bad”, which is suspiciously close to the definition of God in some theologies. Human values are simply not that consistent—which is why there is an “E” (for “Extrapolated”) that allows unlimited handwaving.
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.
I realise that it’s not a function of what I know; what I meant is that, given how much I have heard about CEV, a lot of people seem to support it.
Still, I think I am using ‘Schelling point’ incorrectly here. What I mean is that CEV might be something people could agree on with communication, like a point of compromise.
Human values are simply not that consistent—which is why there is an “E” that allows unlimited handwaving.
Do you think that it is impossible for an FAI to implement CEV?
A Schelling point, as I understand it, is a choice that has value only because of the network effect. It is not “the best” by some criterion, it’s not a compromise, in some sense it’s an irrational choice from equal candidates—it’s just that people’s minds are drawn to it.
In particular, a Schelling point is not something you agree on—in fact, it’s something you do NOT agree on (beforehand) :-)
Do you think that it is impossible for an FAI to implement CEV?
I don’t know what CEV is. I suspect it’s an impossible construct. It came into being as a solution to a problem EY ran his face into, but I don’t consider it satisfactory.