On why the suffering of one species would be more important than the suffering of another:
Because one of those species is mine?
Does that also apply to race and gender? If not, why not? Imagine a line-up of ancestors, each adjacent pair a mother and her daughter, running from Homo sapiens back to the common ancestor of humans and chickens and then forward in time again to modern chickens: where would you draw the line? A common definition of species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother? Is that really the criterion you want to use for making your decisions? And does it at all bother you that racists or sexists can use an analogous line of defense?
Does that also apply to race and gender? If not, why not?
I feel psychologically similar to humans of different races and genders, but I don’t feel psychologically similar to members of most other species.
A common definition of species in biology is that two groups of organisms belong to different species if they cannot have fertile offspring. Is that really a morally relevant criterion that justifies treating a daughter differently from her mother?
Uh, no. System 1 doesn’t know what a species is; that’s just a word System 2 is using to approximately communicate an underlying feeling System 1 has. But System 1 knows what a friend is. Other humans can be my friends, at least in principle. Probably various kinds of posthumans and AIs can as well. As far as I can tell, a fish can’t, not really.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
And does it at all bother you that racists or sexists can use an analogous line of defense?
Also no. I think current moral fashion is prejudiced against prejudice. Racism and sexism are not crazy or evil points of view; historically, they were points of view held by many sane humans who would have been regarded by their peers as morally upstanding. Have you read What You Can’t Say?
I should add to this that even if I endorse what you call “prejudice against prejudice” here—that is, even if I agree with current moral fashion that racism and sexism are not as good as their absence—it doesn’t follow that because racists or sexists can use a particular argument A as a line of defense, there’s therefore something wrong with A.
There are all sorts of positions which I endorse and which racists and sexists (and Babyeaters and Nazis and Sith Lords and...) might also endorse.
This general argument of “the algorithm you claim to be using to make moral decisions might fail on some edge cases, therefore it is bad” strikes me as disingenuous. Do you have an algorithm you use to make moral decisions that doesn’t have this property?
Actually, I do. I try to rely on System 1 as little as possible when it comes to figuring out my terminal value(s). One reason for that, I guess, is that at some point I adopted the premise that I don’t want to be the sort of person who would have been racist or sexist in previous centuries. If you don’t share that premise, there is no way for me to show that you’re being inconsistent; I acknowledge that.
Wow! So you’ve solved friendly AI? Eliezer will be happy to hear that.
I’m pretty sure Eliezer already knew our brains contained the basis of morality.