CEV is supposed to preserve those things that people value, and would continue to value were they more intelligent and better informed.

I value the lives of my friends. Many other people value the death of people like my friends. There is no reason to think that this is because they are less intelligent or less well-informed than me, as opposed to actually having different preferences.

TimS claimed that in a situation like that, CEV would do nothing, rather than impose the extrapolated will of the majority.
My claim is that there is nothing—not one single thing—which would be a value held by every person in the world, even were they more intelligent and better informed. An intelligent, informed psychopath has utterly different values from mine, and will continue to have utterly different values upon reflection. The CEV therefore either has to impose the majority preferences upon the minority, or do nothing at all.
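One crude way to see the dilemma is with a toy sketch (an illustration only, not SI's actual construction; modelling extrapolated values as sets, and the unanimity-versus-majority contrast, are my own simplifications):

```python
# Toy illustration: each person's extrapolated values modelled as a set of endorsed outcomes.
from collections import Counter

value_sets = [
    {"my friends stay alive"},        # me
    {"people like my friends die"},   # someone with genuinely different values
    {"people like my friends die"},   # another
]

def unanimous_cev(sets):
    """Keep only what every participant endorses; empty means 'do nothing'."""
    return set.intersection(*sets)

def majority_cev(sets):
    """Impose whatever more than half endorse, overriding the minority."""
    counts = Counter(v for s in sets for v in s)
    return {v for v, n in counts.items() if n > len(sets) / 2}

print(unanimous_cev(value_sets))  # set() -- the "do nothing" horn
print(majority_cev(value_sets))   # {'people like my friends die'} -- the "impose on the minority" horn
```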
There are lots of reasons to think so. For example, they might want the death of your friends because they mistakenly believe that a deity exists.
Or for any number of other, non-religious reasons. And it could well be that extrapolating those people’s preferences would lead, not to them rejecting their beliefs, but to them wishing to bring their god into existence.
Either people have fundamentally different, irreconcilable values or they don’t. If they do, then the argument I made is valid. If they don’t, then CEV(any random person) will give exactly the same result as CEV(humanity).
That means that either calculating CEV(humanity) is an unnecessary inefficiency, or CEV(humanity) will do nothing at all, or CEV(humanity) would lead to a world that is intolerable for at least some minority of people. I actually doubt that any of the people from the SI would disagree with that (remember the torture vs. dustspecks argument).
That may be considered a reasonable tradeoff by the developers of an “F”AI, but it gives those minority groups to whom the post-AI world would be inimical equally rational reasons to oppose such a development.
As someone who does not believe in moral realism, I agree that CEV over all humans who ever lived (excluding sociopaths and such) will not output anything.
But I think that a moral realist should believe that CEV will output some value system, and that the produced value system will be right.
In short, I think whether one believes CEV will output something is isomorphic to whether one believes in [moral realism] (plato.stanford.edu/entries/moral-realism/).
Edit: link didn’t work, so separated it out.
Have you tried putting http:// in front of the URL? (Edit: the backtick thing to show verbatim code isn’t working properly for some reason, but you know what I mean.)
moral realism.
Edit: Apparently that was the problem. Thanks.
Edit2: It appears that copying and pasting from some places includes “http” even when my browser address bar doesn’t show it. But I did something wrong when copying from the philosophy dictionary.
I agree—assuming that CEV didn’t impose a majority view on a minority. My understanding of the SI’s arguments (and it’s only my understanding) is that they believe it will impose a majority view on a minority, but that they think that would be the right thing to do—that if the choice were between 3^^^3 people getting a dustspeck in the eye or one person getting tortured for fifty years, the FAI would always make a choice, and that choice would be for the torture rather than the dustspecks.
Now, this may well be, overall, the rational choice to make as far as humanity as a whole goes, but it would most definitely not be the rational choice for the person who was getting tortured to support it.
And since, as far as I can see, most people only value a very small subset of humanity who identify as belonging to the same groups as them, I strongly suspect that in the utilitarian calculations of a “friendly” AI programmed with CEV, the people whom the majority does not value would end up in the getting-tortured group, rather than the avoiding-dustspecks one.
This is not clear.
That is an entirely separate issue. If CEV(everyone) outputted a moral theory that held utility was additive, then the AI implementing it would choose torture over specks. In other words, utilitarians are committed to believing that specks is the wrong choice.
But there is no guarantee that CEV will output a utilitarian theory, even if you believe it will output something. SI (Eliezer, at least) believes CEV will output a utilitarian theory because SI believes utilitarian theories are right. But everyone agrees that “whether CEV will output something” is a different issue than “what CEV will output.”
Personally, I suspect CEV(everyone in the United States) would output something deontological—and might even output something that would pick specks. Again, assuming it outputs anything.
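To put the additivity point above in concrete terms (the numbers below are stand-ins of my own; 3^^^3 is far too large to compute with), any aggregator that just sums disutilities ends up preferring the torture once the number of specks is large enough, however tiny the per-speck harm:

```python
# Stand-in numbers only: 3^^^3 cannot be represented, so use a merely astronomical N.
speck_disutility = 1e-12       # assumed tiny harm per dust speck
torture_disutility = 1e9       # assumed harm of fifty years of torture
N = 10**30                     # stand-in for 3^^^3, which is unimaginably larger

total_speck_disutility = N * speck_disutility
# An additive (total-utilitarian) aggregator picks whichever option has less total disutility.
choice = "torture" if torture_disutility < total_speck_disutility else "specks"
print(total_speck_disutility, choice)  # 1e+18 torture
```

A non-additive output (deontological, or with thresholds) is not committed to that comparison.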
This is a false dilemma. People can share some values while differing irreconcilably on others; in that case CEV(any random person) and CEV(humanity) will give different outputs, and CEV(humanity) can still output the shared values rather than nothing.
And note that an actual move by virtue ethicists is to exclude sociopaths from “humanity”.
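To make the false-dilemma point concrete with the same kind of toy set picture (an illustration only, with made-up values, not how CEV is actually specified): partial overlap yields a nonempty shared core that still differs from any single person's values.

```python
# Toy illustration: values that overlap partially -- neither identical nor disjoint.
alice = {"no gratuitous cruelty", "protect my friends", "alice gets her way"}
bob   = {"no gratuitous cruelty", "protect my friends", "bob gets his way"}
carol = {"no gratuitous cruelty", "carol gets her way"}

shared = alice & bob & carol
print(shared)           # {'no gratuitous cruelty'} -- CEV(humanity) still outputs something
print(shared == alice)  # False -- and it differs from CEV(any random person)
```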
I agree with you in general, and want to further point out that there is no such thing as “doing nothing”. If doing nothing tends to allow your friends to continue living (because they have the power to defend themselves in the status quo), that is favoring their values. If doing nothing tends to allow your friends to be killed (because they are a powerless, persecuted minority in the status quo) that is favoring the other people’s values.
Of course, a lot depends on what we’re willing to consider a minority as opposed to something outside the set of things being considered at all.
E.g., I’m in a discussion elsethread with someone who I think would argue that if we ran CEV on the set of things capable of moral judgments, it would not include psychopaths in the first place, because psychopaths are incapable of moral judgments.
I disagree with this on several levels, but my point is simply that there’s an implicit assumption in your argument that terms like “person” have shared referents in this context, and I’m not sure they do.
In which case we wouldn’t be talking about CEV(humanity) but CEV(that subset of humanity which already share our values), where “our values” in this case includes excluding a load of people from humanity before you start. Psychopaths may or may not be capable of moral judgements, but they certainly have preferences, and would certainly find living in a world where all their preferences are discounted as intolerable as the rest of us would find living in a world where only their preferences counted.
I agree that psychopaths have preferences, and would find living in a world that anti-implemented their preferences intolerable.
If you mean to suggest that the fact that the former phrase gets used in place of the latter is compelling evidence that we all agree about who to include, I disagree.
If you mean to suggest that it would be more accurate to use the latter phrase when that’s what we mean, I agree.
Ditto “CEV(that set of preference-havers which value X, Y, and Z)”.
I definitely meant the second interpretation of that phrase.
I hope that everyone who discusses CEV understands that a very hard part of building a CEV function would be defining the criteria for inclusion in the subset of people whose values are considered. It’s almost circular, because figuring out who to exclude as “insufficiently moral” almost inherently requires the output of a CEV-like function to process.
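The circularity can be made explicit with a toy sketch (my own framing and made-up values, not a proposed design): the inclusion test needs a moral standard, the natural source of that standard is the extrapolation itself, and so the result depends on who you pre-judge to be worth including.

```python
# Toy sketch of the circularity, not a workable design.
def extrapolate(value_sets):
    """Placeholder merge: just the values every included person shares."""
    return set.intersection(*value_sets) if value_sets else set()

def sufficiently_moral(values, standard):
    """Placeholder inclusion test: endorse everything in the current standard."""
    return standard <= values

def build_cev(population, initially_included):
    included = list(initially_included)
    while True:
        standard = extrapolate([population[p] for p in included])
        survivors = [p for p in population if sufficiently_moral(population[p], standard)]
        if set(survivors) == set(included):   # fixed point reached
            return standard
        included = survivors

population = {
    "alice":   {"no cruelty", "equal rights"},
    "bob":     {"no cruelty", "equal rights"},
    "mallory": {"cruelty is fine"},
}
# The output hinges on who is pre-judged to count:
print(build_cev(population, ["alice", "bob", "mallory"]))  # set() -- nothing shared
print(build_cev(population, ["alice", "bob"]))             # {'no cruelty', 'equal rights'}
```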
How committed are you to the word “subset” here?
I’m not sure I understand the question. In reference to the sociopath issue, I think it is clearer to say:
(1) “I don’t want sociopaths (and the like) in the subset from which CEV is drawn”
than to say that
(2) “CEV is drawn from all humanity but sociopaths are by definition not human.”
Nonetheless, I don’t think (1) and (2) are different in any important respect. They just define key terms differently in order to say the same thing. In a rational society, I suspect it would make no difference, but in the current human society, the ways words can be wrong make (2) likely to lead to errors of reasoning.
Sorry, I’m being unclear. Let me try again.
For simplicity, let us say that T(x) = TRUE if x is sufficiently moral to include in CEV, and FALSE otherwise. (I don’t mean to posit that we’ve actually implemented such a test.)
I’m asking if you mean to distinguish between:
(1) CEV includes x where T(x) = TRUE and x is human, and
(2) CEV includes x where T(x) = TRUE
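In toy terms, with T and is_human as assumed predicates supplied by the caller, the two readings are just:

```python
# Sketch only: T and is_human are assumed predicates, passed in by the caller.
def cev_pool_1(beings, T, is_human):
    """Reading (1): include x only if x passes T and x is human."""
    return [x for x in beings if T(x) and is_human(x)]

def cev_pool_2(beings, T):
    """Reading (2): include x whenever x passes T, human or not."""
    return [x for x in beings if T(x)]
```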
I’m still not sure I understand the question. That said, there are two issues here.
First, I would expect CEV(Klingon) to output something if CEV(human) does, but I’m not aware of any actual species that I would expect CEV(non-human species) to output for. If such a species existed (i.e. CEV(dolphins) outputs a morality), I would advocate strongly for something very like equal rights between humans and dolphins.
But even in that circumstance, I would be very surprised if CEV(all dolphins & all humans) outputted something other than “Humans, do CEV(humanity). Dolphins, do CEV(dolphins).”
Of course, I don’t expect CEV(all of humanity ever) to output because I reject moral realism.
I think that answers my question. Thanks.
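Then since there is not one single value about which every single human being on the planet can agree, a CEV function would output nothing at all.

Tense confusion.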