The emphasis on common folk strikes me as unfortunate in itself. Such focus makes me wary; equivocation becomes too easy, as do apparent victories.
Note that philosophers usually have the same judgments as ‘common folk’ on trolley problems (Fischer & Ravizza, 1992). Also see this post from Eric Schwitzgebel. He suggests that philosophers are actually more likely to rationalize because (1) they have more powerful tools for rationalization, (2) rationalization for them has a broader field of play (via tossing more of morality into doubt), and (3) they have more psychological occasion for rationalization (by nurturing the tendency to reflect on principles rather than simply take things for granted).
In one experiment, Schwitzgebel found that “philosophers, more than other professors and more than non-academics, tended to endorse moral principles in labile ways to match up with psychologically manipulated intuitions about particular cases.” In another study he found that “professional ethicists, more than professors in other fields, seemed to exhibit self-congratulatory rationalization in their normative attitudes about replying to emails from students.”
But the equivocation is my real problem. I dislike the terminology and think it is insidious. “Utilitarian” and “deontological” modules? “Deontological” judgment? The connection between the folk morality of the common man and the deontologies of the philosophers is not well-made in this post; hinting that the same neurological processes could perhaps lead to both, based on a few studies, just isn’t enough to justify the provocative terminology.
Did you see the bit where Greene explains in more detail what he means by these terms? I quoted some of it here.
I do not find the relevant parts of Greene’s “The Secret Joke of Kant’s Soul” very persuasive.
If you have time, I’d like to hear more about this. Were there, for example, methodological problems with the studies linking deontological-style judgments to emotional processing, or with the studies linking utilitarian judgments to more ‘cognitive’ kinds of mental processing?
Note that philosophers usually have the same judgments as ‘common folk’ on trolley problems (Fischer & Ravizza, 1992).
‘Course, but mere correlation of binary judgments tells us little about the similarity of the causal mechanisms that produce those judgments. We should expect philosophers to have more reasons, and more disjunctive ones. Even overlap of reasons doesn’t necessarily give us license to imply that if deontologist philosophers weren’t biased in the same way as common folk are, then they wouldn’t be deontologists; we must be careful with our connotations. True beliefs have many disjunctive supporting reasons, and it would be unwise to presuppose that parsimony is on the side of deontology being ‘true’ or ‘false’ such that finding a single reason for or against it substantially changes the balance. If you want to believe true things, then your wanting to believe something becomes correlated with its truth; “rationalization” is complex and in some ways an essential part of rationality.
All that said, Schwitzgebel’s experiment does seem to indicate commonplace ‘bad’ rationalization. (ETA: I need to look more closely at effect sizes, the prestige of the philosophers involved, etc., to get a better sense of this, though.)
Did you see the bit where Greene explains in more detail what he means by these terms?
Yeah, and I see their logic and appeal; still, the equivocations seem to be unnecessary and distracting. (It would’ve been much less contentious to use less provocative terms to describe the research and then separately follow that up with research like Schwitzgebel’s; this would allow readers to have more precise models while also minimizing distraction.) If this were anywhere except Less Wrong I’d think it was meh, but here we should perhaps make sure to correct errors of conceptualization like that. This has worked in the past. That said, it would have been more work for you, which is non-trivial. Furthermore I am known to be much more paranoid than most about these kinds of things. I’d argue that that’s a good thing, but, meh.
Were there, for example, methodological problems with the studies linking deontological-style judgments to emotional processing, or with the studies linking utilitarian judgments to more ‘cognitive’ kinds of mental processing?
Neither; the “relevant parts” I was speaking of were the parts where he argued that Kant and other philosophers were falling prey to the same errors as the subjects in those studies. I still find his arguments to be weak; e.g. the section Missing the Deontological Point struck me as anti-persuasive. However, Schwitzgebel’s experiment makes up for Greene’s lack of argument. Are there any meta-studies of that nature? (Presumably not, especially as that experiment seems to have been done in the last year.)
I still find his arguments to be weak; e.g. the section Missing the Deontological Point struck me as anti-persuasive. However, Schwitzgebel’s experiment makes up for Greene’s lack of argument. Are there any meta-studies of that nature?
Sure. There’s Weinberg et al.:

Recent experimental philosophy arguments have raised trouble for philosophers’ reliance on armchair intuitions. One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there’s no reason to think philosophers will make the same mistakes… We consider three promising hypotheses concerning what philosophical expertise might consist in: (i) better conceptual schemata; (ii) mastery of entrenched theories; and (iii) general practical know-how with the entertaining of hypotheticals. On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.