I know the punchline—CEV. To me, it seemed to belabour points that felt obvious, while skipping over, or treating as obvious, points that are really confusing.
Regardless of whether CEV is the correct ethical system, it seems to me that CEV or CV is a reasonably good Schelling point, so that could be a good argument to accept it on pragmatic grounds.
How could it be a Schelling point when no one has any idea what it is?
I meant ‘program the FAI to calculate CEV’ might be a reasonably good Schelling point for FAI design. I wasn’t suggesting that you or I could calculate it to inform everyday ethics.
Um, doesn’t the same objection apply?
How could programming the FAI to calculate CEV be a Schelling point when no one has any idea what CEV is? It's not just that we don't know how to calculate it—we have no good idea what it is.
It's, you know, human values.
My impression is that the optimistic idea is that people have broadly similar, or at least compatible, fundamental values, and that if people disagree strongly in the present, this is due to misunderstandings which would be extrapolated away. We all hold values like love, beauty and freedom, so the future would hold these values.
I can think of various pessimistic outcomes: perhaps one of our most fundamental values is the desire not to be ruled over by an AI, so the AI immediately turns itself off; or perhaps status games make fulfilling everyone's values impossible.
Anyway, since I’ve heard a lot about CEV (on LW) and empathic AI (when FAI is discussed outside LW), and little about any other idea for FAI, it seems that CEV is a Schelling point, regardless of whether or not it should be.
Personally, I’m surprised I haven’t heard more about a ‘Libertarian FAI’ that implements each person’s volition separately, as long as it doesn’t non-consensually affect anyone else. Admittedly, there are problems involving, for instance, what limits should be placed on people creating sentient beings to prevent contrived infinite torture scenarios, but given the libertarian bent of transhumanists I would have thought someone would be advocating this sort of idea.
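To make the shape of that proposal concrete, here is a minimal sketch of the consent-filter idea. Everything in it (the `Action` and `Person` types, the `permissible` check) is invented for illustration; it is a toy model of the constraint, not a real FAI design.

```python
from dataclasses import dataclass, field

# Toy model of the 'Libertarian FAI' constraint described above.
# All names and types here are hypothetical, invented for illustration.

@dataclass
class Action:
    description: str
    affected: set[str] = field(default_factory=set)  # IDs of people the action touches

@dataclass
class Person:
    person_id: str
    consents_to: set[str] = field(default_factory=set)  # action descriptions this person accepts

def permissible(action: Action, actor_id: str, people: dict[str, Person]) -> bool:
    """Allow an action iff every *other* affected person has consented to it."""
    return all(
        action.description in people[pid].consents_to
        for pid in action.affected
        if pid != actor_id
    )

alice = Person("alice")
bob = Person("bob", consents_to={"simulate shared world"})
people = {"alice": alice, "bob": bob}

print(permissible(Action("simulate shared world", {"alice", "bob"}), "alice", people))  # True
print(permissible(Action("rewrite bob's memories", {"bob"}), "alice", people))          # False
```

The open problems mentioned above are exactly where this naive filter fails: it cannot say who counts as ‘affected’ when someone creates a new sentient being, since that being does not yet exist to give or withhold consent.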
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer.
But even ignoring this, CEV is just too vague to be a Schelling point. It’s essentially defined as “all of what’s good and none of what’s bad” which is suspiciously close to the definition of God in some theologies. Human values are simply not that consistent—which is why there is an “E” that allows unlimited handwaving.
I realise that it’s not a function of what I know; what I meant is that, since I have heard a lot about CEV, it seems that a lot of people support it.
Still, I think I am using ‘Schelling point’ wrongly here—what I mean is that maybe CEV is something people could agree on with communication, like a point of compromise.
Do you think that it is impossible for an FAI to implement CEV?
A Schelling point, as I understand it, is a choice that has value only because of the network effect. It is not “the best” by some criterion, it’s not a compromise, in some sense it’s an irrational choice from equal candidates—it’s just that people’s minds are drawn to it.
In particular, a Schelling point is not something you agree on—in fact, it’s something you do NOT agree on (beforehand) :-)
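To illustrate that being-drawn-to-it-without-prior-agreement property, here is a toy simulation of a pure coordination game (the options, payoffs, and `focal_bias` parameter are illustrative assumptions for this sketch, not anything from Schelling's work):

```python
import random

# Toy coordination game: two players each pick an option with no
# communication. They "win" only if they match. The options are symmetric
# in payoff, so none is "the best" -- yet if both players share a slight
# pull toward the same salient option, they match far more often than
# chance. All numbers here are illustrative.

OPTIONS = ["option_a", "option_b", "option_c", "option_d"]

def pick(focal_bias: float) -> str:
    """Pick an option; with probability focal_bias, pick the focal one."""
    if random.random() < focal_bias:
        return OPTIONS[0]          # the culturally salient "focal" choice
    return random.choice(OPTIONS)  # otherwise choose uniformly at random

def coordination_rate(focal_bias: float, trials: int = 100_000) -> float:
    """Fraction of trials in which two independent players match."""
    hits = sum(pick(focal_bias) == pick(focal_bias) for _ in range(trials))
    return hits / trials

print(coordination_rate(0.0))  # ~0.25: chance level, no focal point
print(coordination_rate(0.6))  # ~0.52: well above chance, with zero communication
```

With no shared bias the players match only at chance; a modest shared pull toward one salient option lifts coordination well above chance, with no communication and no agreement beforehand.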
I don’t know what CEV is. I suspect it’s an impossible construct. It came into being as a solution to a problem EY ran his face into, but I don’t consider it satisfactory.
Hmm, that’s not what I think is the punchline :P I think it’s something like “your morality is an idealized version of the computation you use to make moral decisions.”
Really? That seems almost tautological to me, and about as helpful as ‘do what is right’.
Well, perhaps the controversy is that that’s it. That it’s okay that there’s no external morality and no universally compelling moral arguments, and that we can and should act morally in what turns out to be a fairly ordinary way, even though what we mean by “should” and “morally” depends on ourselves.
It all adds up to normality, and don’t worry about it.
See, I can sum up an entire sequence in one sentence!
This also doesn’t seem like the most original idea; in fact, I think this “you create your own values” is the central idea of existentialism.