I have no idea what the conclusion of this article is. I suspect the author wants to argue for moral eliminativism, and hopes to support moral eliminativism by claiming that nothing would change if someone (or is it everyone?) was convinced their moral beliefs were wrong. I’m not sure how exactly the author intends that to work out.
But in any case, my comment only intended to criticise the methodology of the article, and was not aimed at discussing moral eliminativism. I simply pointed out that the question asked—what would happen if someone (or everyone?) was convinced their moral beliefs were wrong—was vague in several important respects. And any results from intuition would be suspect, especially if the person holding those intuitions was a moral eliminativist. I was not “objecting” to anything, as the article didn’t actually make any positive claims.
I might as well clarify and support my point by listing the possible variations on the question.
(1) What would you personally do if you had no moral beliefs?
(2) What would you personally do if you believed in (some form of) moral eliminativism—e.g. that nothing is right or wrong?
(3) What would you personally do if you were convinced your moral beliefs were wrong?
What would a randomly selected person from the population of the Earth do if (1), (2) or (3) happened to them?
What would happen if everyone in a society/the world simultaneously had (1), (2) or (3) happen to them?
I simply pointed out that the question asked—what would happen if someone (or everyone?) was convinced their moral beliefs were wrong—was vague in several important respects.
It’s vague in an additional way: you interpreted it to mean “what would you do if you were convinced that your moral beliefs were wrong”. But I think Eliezer was asking “what would you do if your moral beliefs actually were wrong and you were aware of that.”
That has its own problem. It’s like asking “if someone could prove that creationism was true and evolution isn’t, would you agree that scientists are closed-minded in rejecting it?” A hypothetical world in which creationism was true wouldn’t be exactly like our own except that it contains a piece of paper with a proof of creationism written down on it. In a world where creationism really was true, scientists would either have figured it out, or would have not figured it out but would be a lot more clueless than actual-world scientists. Likewise, a world where moral beliefs were all wrong would be very unlike our world, if indeed it’s a coherent concept at all—it would not be a world that is exactly like this one with the exception that I am now in possession of a proof.
It’s like asking “if someone could prove that creationism was true and evolution isn’t, would you agree that scientists are closed-minded in rejecting it?”
For my own part, I don’t have a problem with that question either, though how I answer it depends a lot on whether (and to what extent) I think we’re engaged in idea-exploration vs. tribal boundary-defending. If the former, my answer is “sure” and I wait to see what follows. If the latter, I challenge the question (not unlike your answer) or otherwise push back on the boundary violation.
Very true. I didn’t get that from reading the article at first, but now I’m getting that vibe. I guess the more charitable reading is ‘what would you do if you were convinced that your moral beliefs were wrong’ or one of my variations, because you rightly point out that ‘what would you do if your moral beliefs actually were wrong and you were aware of that’ is an exceedingly presumptuous question.
Thanks for clarifying.