But if that were the case, then moral philosophers—who reason about ethical principles all day long—should be more virtuous than other people. Are they? The philosopher Eric Schwitzgebel tried to find out. He used surveys and more surreptitious methods to measure how often moral philosophers give to charity, vote, call their mothers, donate blood, donate organs, clean up after themselves at philosophy conferences, and respond to emails purportedly from students. And in none of these ways are moral philosophers better than other philosophers or professors in other fields.
Schwitzgebel even scrounged up the missing-book lists from dozens of libraries and found that academic books on ethics, which are presumably mostly borrowed by ethicists, are more likely to be stolen or just never returned than books in other areas of philosophy. In other words, expertise in moral reasoning does not seem to improve moral behavior, and it might even make it worse (perhaps by making the rider more skilled at post hoc justification). Schwitzgebel still has yet to find a single measure on which moral philosophers behave better than other philosophers.
Jonathan Haidt, discussing the idea that ethical reasoning causes good behaviour, in his book ‘The Righteous Mind’.
I found the book-stealing thing quite funny, although I imagine that some of the results described could be explained by popularity: if more people get into ethics, then there are more people who might steal library books, more antisocial people who don’t respond to emails, and so on. This hasn’t been demonstrated to my knowledge, though, and I’m otherwise inclined to believe that people who spend their days thinking about ethics in the abstract are simply better at coming up with rationales for their instinctive feelings. Joshua Greene says rights are an example of this: we invoke them when we need a dictum against whatever our emotions are telling us is despicable, even though we can’t find any utilitarian justification for it.
There’s probably a selection effect at work. Would a highly moral person with a capable and flexible mind become a full-time moral philosopher? Take their sustenance from society’s philanthropy budget?
Or would they take the Talmudists’ advice and learn a trade so they can support themselves, and study moral philosophy in their free time? Or perhaps GiveWell’s advice and learn the most lucrative art they can and give most of what they earn to charity? Or study whichever field allows them to make the biggest difference in people’s lives? (Probably medicine, engineering or diplomacy.)
Granted, such a person might think they could make such a large contribution to the field of moral philosophy that it would be comparable in impact to other research fields. This seems unlikely.
The same reasoning would keep highly moral people out of other sorts of philosophy, but people who don’t have an interest in moral philosophy per se might not notice the point. It’s hard to avoid if you specifically study it.
This could happen, but I think it’s mostly dwarfed by the far larger selection effect that people who are not financially privileged mostly don’t attempt to become humanities academics these days—and for good reason.
Are you saying that financially privileged people tend to be less moral?
While that case has been made in a few isolated studies, I was more generally referring to the fact that people who don’t come from money will usually choose careers that make them money, and humanities academia doesn’t.
Wasn’t sure about that, so I tracked down some research (Goyette & Mullen 2006). Turns out you’re right: conditioned on getting into college in the first place, higher socioeconomic status (as proxied by parents’ educational achievement) is correlated with going into arts and sciences over vocational fields (engineering, education, business). The paper also finds a nonsignificant trend toward choosing arts and humanities over math and science within the arts and sciences category.
(Within the vocational majors, though, engineering is the highest-SES category. Business and education are both significantly lower. I don’t know which of those would be most lucrative on average but I suspect it’d be engineering.)
I think there are several trade-offs there: engineering looks like the highest expected value to us, because we (on LessWrong, mostly) had pre-university educations focused on math, science, and technology. People from lower-SES backgrounds… did not, so fewer of them will survive the weed-out courses taught in “we damn well hope you learned this in AP class” style. And then there’s the acclimation to discipline and to obsessive work habits (necessary for engineering school) that comes from professional parentage… and so on. And then of course, many low-SES people probably want to go into teaching as a helping profession, but that’s not a very quantitative explanation and I’m probably just making it up.
On the other hand, engineering colleges tend to have abnormally large quantities of international students and immigrants blatantly focused on careerism. So yeah.
How does that fact impact the morality of moral philosophers as measured?
Unlikely that they would make such a contribution? Yes. Unlikely that they think they would make such a contribution? Maybe not.
But I guess they probably don’t even think this way, i.e. don’t try to maximize their impact. More likely it is something like: “My contribution to society exceeds my salary, so I am a net benefit to society.” Which is actually possible. Yeah, some people, especially the effective altruists, would consider such thinking evidence against their competence as a moral philosopher.
If someone’s studying moral philosophy in their free time, then wouldn’t they be taking academic books on ethics out of the library?
“In 1971, John Rawls coined the term “reflective equilibrium” to denote “a state of balance or coherence among a set of beliefs arrived at by a process of deliberative mutual adjustment among general principles and particular judgments”. In practical terms, reflective equilibrium is about how we identify and resolve logical inconsistencies in our prevailing moral compass. Examples such as the rejection of slavery and of innumerable “isms” (sexism, ageism, etc.) are quite clear: the arguments that worked best were those highlighting the hypocrisy of maintaining acceptance of existing attitudes in the face of already-established contrasting attitudes in matters that were indisputably analogous.”
-Aubrey de Grey, The Overdue Demise Of Monogamy
This passage argues that reasoning does impact ethical behavior. Steven Pinker and Peter Singer make similar arguments, which I find convincing.
I find it quite arguable whether “reflective equilibrium” is a real thing that actually happens in our cognition or a little game played by philosophy academics. Actual cognitive dissonance caused by holding mutually contradictory ideas in simultaneous salience is well-evidenced, but that’s not exactly an equilibrium across all the ideas we hold, merely across the ones we’re holding in short-term verbal memory at the time.
I actually put up another quote arguing for it, by Joshua Greene, making an analogy between successful moral argument and the invention of new technology: even though a person rarely invents a whole new piece of technology, our world is defined by technological advance. Similarly, even though it is rare for a moral norm to change as a result of abstract argument, our social norms have changed dramatically since times gone by.
Nonetheless, the quote works with empirical evidence, the ultimate arbiter of reality. It looks like, whilst moral argument can change our thoughts (and behaviour) on ethical issues, a lot of the time it doesn’t. As with technology, the big changes transform our world, but for the most part we’re just playing Angry Birds.
I think it more likely they’re better at coming up with rationales to ignore their instinctive feelings.
I think that someone can believe that their instinctive feelings are an approximation to what is ethical, then try to formalize that approximation, then conclude that they have identified areas where the approximation is in error. So their ethical code could be largely based on their instinctive feelings without following them 100% of the time.
That seems unlikely. People’s instinctive feelings are generally pretty selfish. (Small sample size, obviously. I can think of only two other people with whom I’ve spoken enough about this kind of thing to judge.)
None of your sample were people with children, then?
And there’s also the question of what is “instinctive” versus whatever the opposite is. What is this distinction and how do you tell?
No, but I don’t see why children should have an effect; favoring your children over strangers is no less selfish than favoring yourself over strangers, and both are strong instincts.
By ‘instinctive’ I just mean System 1: the judgments made before you take time to think through what you should do.
I had intended to draw attention to the phenomenon of favouring one’s children over oneself. It appears I was right about the test demographic.
And “no less selfish”? At what point would you consider the widening circle to be “less selfish”? To favour your village over others, your country over others, humanity over animals; are these all no less selfish? Is nothing unselfish but a life of exinanition and unceasing service to everyone and everything but oneself?
System 1 is susceptible to training—that is what training is. We may be born with the neurological mechanism, but not its entire content. “Instinct” more usually means (quoting Wikipedia) “performed without being based upon prior experience”. A human without prior experience is a baby.
Standard definitions of System 1 describe it as ‘instinctive’, but if you need a separate definition of instinctive responses, ‘untrained System 1 responses’ works.
That depends. Any of those things can be unselfish, if you’re doing it because you think it’s a good thing to do independent of whether it’s an outcome/action you like, and the wider the circle the more likely that’s the motivation. If it’s based on ‘I like these people and want them to be happy, therefore I will take this action’ that’s still selfish.
Lest this sound like I’m saying anything that isn’t done for abstract reasons is selfish, I’d contrast it with things done for reasons of compassion. The lines there can get blurry when the people you’re feeling compassion for are in your ingroup, but things like the place-quarters-here-for-adorable-sad-children variety of charity are clearly trying to induce compassionate motivation (and it works).
From conversations I have had with my own parents (not as comprehensive or in-depth, but heartfelt), it seemed pretty clear that the parenting instinct is much more ‘these kids are mine and I will take care of them come hell or high water’ than a compassionate reflex.
Hypothesis: At least some of the people who are interested in ethics are concerned because they have a problem behaving ethically.