As someone who does not believe in moral realism, I agree that CEV over all humans who ever lived (excluding sociopaths and such) will not output anything.
But I think that a moral realist should believe that CEV will output some value system, and that the produced value system will be right.
In short, I think one’s belief about whether CEV will output something is isomorphic to one’s belief in [moral realism] (plato.stanford.edu/entries/moral-realism/).
Edit: link didn’t work, so separated it out.
Have you tried putting
http://
in front of the URL? (Edit: the backtick thing to show verbatim code isn’t working properly for some reason, but you know what I mean.)
moral realism.
Edit: Apparently that was the problem. Thanks.
Edit2: It appears that copying and pasting from some places includes “http” even when my browser address bar doesn’t show it. But I did something wrong when copying from the philosophy dictionary.
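For reference, the likely cause is that a URL without a scheme gets treated as a relative link. A minimal illustration, assuming ordinary markdown link syntax (the exact behaviour depends on the site’s renderer):

```
[moral realism](plato.stanford.edu/entries/moral-realism/)         <- no scheme, treated as relative to the current site, so it breaks
[moral realism](http://plato.stanford.edu/entries/moral-realism/)  <- absolute URL, renders as a working link
```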
I agree—assuming that CEV didn’t impose a majority view on a minority. My understanding of the SI’s arguments (and it’s only my understanding) is that they believe it will impose a majority view on a minority, but that they think that would be the right thing to do—that if the choice were between 3^^^3 people getting a dustspeck in the eye or one person getting tortured for fifty years, the FAI would always make a choice, and that choice would be for the torture rather than the dustspecks.
Now, this may well be, overall, the rational choice as far as humanity as a whole goes, but it would most definitely not be rational for the person getting tortured to support it.
And since, as far as I can see, most people only value the very small subset of humanity who identify as belonging to the same groups as they do, I strongly suspect that in the utilitarian calculations of a “friendly” AI programmed with CEV, they would end up in the getting-tortured group rather than the avoiding-dustspecks one.
This is not clear.
That is an entirely separate issue. If CEV(everyone) output a moral theory that held that utility is additive, then the AI implementing it would choose torture over specks. In other words, utilitarians are committed to believing that specks is the wrong choice.
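To make the additive-utility point concrete, here is a minimal sketch of the comparison such a theory would make. The numbers are stand-ins: 3^^^3 is far too large to represent, so a merely astronomical N is used, and the disutility values are arbitrary illustrative choices.

```python
# Minimal sketch of the additive-utility comparison behind torture vs. dust specks.
# All numbers are stand-ins: 3^^^3 cannot be represented, so we use a merely
# astronomical N, and the per-person disutilities are arbitrary illustrative values.

N = 10**100                  # stand-in for 3^^^3 (the real number is unimaginably larger)
speck_disutility = 1e-12     # assumed tiny harm of one dust speck in one eye
torture_disutility = 1e9     # assumed enormous harm of fifty years of torture

# Under an additive theory, total harm is just the sum over the people affected.
total_speck_harm = N * speck_disutility
total_torture_harm = 1 * torture_disutility

# For any fixed positive speck disutility, a large enough N makes the specks worse,
# which is why an additive theory ends up choosing the torture.
print("choose torture" if total_torture_harm < total_speck_harm else "choose specks")
```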
But there is no guarantee that CEV will output a utilitarian theory, even if you believe it will output something. SI (Eliezer, at least) believes CEV will output a utilitarian theory because SI believes utilitarian theories are right. But everyone agrees that “whether CEV will output something” is a different issue than “what CEV will output.”
Personally, I suspect CEV(everyone in the United States) would output something deontological—and might even output something that would pick specks. Again, assuming it outputs anything.