There are several things wrong with this post. Firstly, I’m sure different people would react in different ways to being convinced their moral philosophy was wrong. Some might wail and scream and commit suicide. Some might search further and try to find a more convincing moral philosophy. Some would just carry on living their lives, not caring.
Furthermore, the outcome would be different if you could simultaneously convince everyone in a society, and give everyone the knowledge that everyone had been convinced. Perhaps the society would break down as the police and institutions upholding the law abandoned their tasks due to both apathy and a desire to capitalise on the new state of affairs, with no guilt. Who knows.
The fundamental flaw of this article is that it asks us to consult our intuitions about what would happen if such-and-such were the case. Consulting our intuitions is something I believe this site shuns, so it is quite hypocritical that the author has asked us to place so much weight on them in this instance. Furthermore, anyone answering this question who believes in moral eliminativism has a confirmation bias towards answering ‘nothing would change’, since that answer appears to support their beliefs.
Consulting your intuition on descriptive questions should be done with caution. (But even then, it’s not forbidden or even really discouraged, since intuition can offer valuable, if non-rigorous, insights.) Using your intuition when confronting normative or prescriptive problems, on the other hand, is perfectly fine, because there’s no “should” without an intuition about what “should” be. (Unless, of course, you think that normative problems are also descriptive, in which case you believe in objective morality, which has its own problems.)
The fundamental flaw of this article is that it asks us to consult our intuitions about what would happen if such-and-such were the case.
This seems a bizarre claim. If you think the conclusion that EY is intuition-pumping to advocate for is false (which you seem to, given your first two paragraphs), surely that’s a more fundamental flaw than the fact that he’s intuition-pumping to advocate for it.
That said, I’ll admit I don’t really understand on what grounds you oppose the conclusion. (In fact, it’s not even clear to me what you think the advocated-for conclusion is.)
I mean, your point seems to be that not everyone would respond to discovering that “nothing is moral and nothing is right; that everything is permissible and nothing is forbidden” in the same way, either as individuals or as collectives. And I agree with that, but I don’t see how it relates to any claims made by the post you reply to.
Taking another stab at clarifying your objections might be worthwhile, if only to get clearer in your own mind about what you believe and what you expect.
I have no idea what the conclusion of this article is. I suspect the author wants to argue for moral eliminativism, and hopes to support it by claiming that nothing would change if someone (or is it everyone?) was convinced their moral beliefs were wrong. I’m not sure how exactly the author intends that to work out.
But in any case, my comment was only intended to criticise the methodology of the article, and was not aimed at discussing moral eliminativism. I simply pointed out that the question asked, namely what would happen if someone (or everyone?) was convinced their moral beliefs were wrong, was vague in several important respects. And any results from intuition would be suspect, especially if the person holding those intuitions was a moral eliminativist. I was not “objecting” to anything, as the article didn’t actually make any positive claims.
I might as well clarify and support my point by listing all the possible variations on the question.
(1) What would you personally do if you had no moral beliefs?
(2) What would you personally do if you believed in (some form of) moral eliminativism—e.g. that nothing is right or wrong?
(3) What would you personally do if you were convinced your moral beliefs were wrong?
(4) What would a randomly selected person from the populace of the Earth do if (1), (2) or (3) happened to them?
(5) What would happen if everyone in a society/the world simultaneously had (1), (2) or (3) happen to them?
I simply pointed out that the question asked, namely what would happen if someone (or everyone?) was convinced their moral beliefs were wrong, was vague in several important respects.
It’s vague in an additional way: you interpreted it to mean “what would you do if you were convinced that your moral beliefs were wrong”. But I think Eliezer was asking “what would you do if your moral beliefs actually were wrong and you were aware of that.”
That has its own problem. It’s like asking “if someone could prove that creationism was true and evolution isn’t, would you agree that scientists are closed-minded in rejecting it?” A hypothetical world in which creationism was true wouldn’t be exactly like our own except that it contains a piece of paper with a proof of creationism written down on it. In a world where creationism really was true, scientists would either have figured it out, or would have not figured it out but would be a lot more clueless than actual-world scientists. Likewise, a world where moral beliefs were all wrong would be very unlike our world, if indeed it’s a coherent concept at all—it would not be a world that is exactly like this one with the exception that I am now in possession of a proof.
It’s like asking “if someone could prove that creationism was true and evolution isn’t, would you agree that scientists are closed-minded in rejecting it?”
For my own part, I don’t have a problem with that question either, though how I answer it depends a lot on whether (and to what extent) I think we’re engaged in idea-exploration vs. tribal boundary-defending. If the former, my answer is “sure” and I wait to see what follows. If the latter, I challenge the question (not unlike your answer) or otherwise push back on the boundary violation.
Very true. I didn’t get that from reading the article at first, but now I’m getting that vibe. I guess the more charitable reading is ‘what would you do if you were convinced that your moral beliefs were wrong’ or one of my variations, because you rightly point out that ‘what would you do if your moral beliefs actually were wrong and you were aware of that’ is an exceedingly presumptuous question.
Thanks for clarifying.
That’s not true. Our relationship to intuition is just more complex.
Huh. And there you had me thinking you two had split up. So are you two in an open relationship, or what?
The Facebook relationship status would be “It’s complicated”.
Basically, Kahneman found that intuition, or System 1, is quite useful. Various people in decision science have managed to run studies indicating that heuristics are important, and this community is aware of that.
CFAR speaks about integrating System 1 and System 2.
Yeah... what are the chances that in 50 years’ time psychologists and neurophysiologists will still believe System 1 and System 2 are useful heuristics for describing brain processes?
Not so bad, I think. I’d give roughly equal probability to (1) substantially the same dichotomy still being convenient, though perhaps with different names, (2) more careful investigation having refined the ideas enough to require a change in terminology (e.g., maybe it will turn out that what Kahneman calls “system 1” is better considered as two related systems, or something), and (3) the idea being largely abandoned because what’s really going on turns out to be very different and it’s just good/bad luck that the system 1 / system 2 dichotomy looks good in the early 21st century.
Even in case 3 I would expect there to be some parallels between system 1 / system 2 and whatever replaces it. There doesn’t seem to be much doubt that our brains do some things quickly and without conscious effort and some things slowly and effortfully, or that there are ways in which the quick effortless stuff can go systematically wrong.
Nevertheless, the use of this currently tenuous scientific theory to found our entire understanding of intuition would seem a little bit premature, especially if the theory contradicts what other influential and valued institutions have had to say about intuition (for instance, philosophy).
We should found our understanding of intuition (or anything else) on the best information we currently have. Whether something’s likely to be overthrown in the next 50 years is obviously related to how much we should trust it now for any given purpose, but not all that tightly. (For instance: we know that current theories of fundamental physics are wrong because we have no theory that encompasses both GR and QFT; but I for one am extremely comfortable assuming these theories are right for all “everyday” purposes—both because it seems fairly certain that whatever new discoveries we make will have little impact on predictions governing “everyday” events, and because at present we have no good rival theories that make different predictions and seem at all likely to be correct.)
The use of the “system 1 / system 2” dichotomy here on LW doesn’t appear to me to depend much on subtle details of what’s going on. It looks to me—though I am not an expert and will willingly be corrected by those who are—as if we have quite robust evidence that some human cognitive processes are slow, under conscious control, and about as accurate as we choose to take the trouble to make them, while others are fast, not under conscious control, highly inaccurate in some identifiable circumstances, and hard to make much more accurate. And it doesn’t look to me as if anything on LW requires much more than that. (Maybe some of CFAR’s training makes stronger assumptions; I don’t know.)
what other influential and valued institutions have had to say about intuition (for instance, philosophy)
What matters is not how influential and valued those institutions are, but what reason we have to think they’re right in what they say about intuition. “Philosophy” is of course a tremendously broad thing, covering thousands of years of human endeavour. What (say) Plato thought about intuition may be very interesting—he was very clever, and his opinions were influential—but human knowledge has moved on a lot since his day, and in so far as we want our ideas about intuition to be correct we should give rather little weight to agreeing with Plato.
Would you like to be more specific about how our opinions about intuition should differ from those currently popular on LW, as a result of taking into account what influential and valued institutions like philosophy have said about it?
Without further information, it’s difficult to say. That being said, it’s the best model we have right now. Unless you have a better model to offer, questioning the validity of the latest in current neuroscience is unlikely to be productive.
There’s a reason why I said “It’s complicated”. I don’t believe System 1 and System 2 are perfect terms, and I doubt the majority of LW thinks they are.