Thanks for this response; I find it helpful.
Reading it over, I want to distinguish between:
a) Relatively thoughtless application of heuristics; (system-1-integrated + fast)
b) Taking time to reflect and notice how things seem to you once you’ve had more space for reflection, for taking in other people’s experiences, for noticing what still seems to matter once you’ve fallen out of the day-to-day urgencies, and for tuning into the “still, quiet voice” of conscience; (system-1-integrated + slow, after a pause)
c) Ethical reasoning (system-2-heavy, medium-paced or slow).
The brief version of my position is that (b) is awesome, while (c) is good when it assists (b) but is damaging when it is acted on in a way that disempowers rather than empowers (b).
--
The long-winded version (which may be entirely in agreement with your (Tristan’s) comment, but which goes into detail because I want to understand this stuff):
I agree with you and Eccentricity that most people, including me and IMO most LWers and EAers, could benefit from doing more (b) than we tend to do.
I also agree with you that (c) can assist in doing (b). For example, it can be good for a person to ask themselves “how does this action, which I’m inclined to take, differ from the actions I condemned in others?”, “what is likely to happen if I do this?”, and “do my concepts and actions fit the world I’m in, or is there a tiny note of discord?”
At the same time, I don’t want to just say “(c) is great! do more (c)!” because I share the OP’s concern that EAers, LWers, and people in general who attempt explicit ethical reasoning sometimes end up using it to talk themselves into doing dumb, harmful things. The OP’s example of “leave inaccurate reviews at vegan restaurants to try to save animals” gives a good sense of the flavor of these errors, and historical communism gives a good sense of their potential magnitude.
My take on the difference between virtuous and vicious/damaging use of explicit ethical reasoning is this: virtuous use of such reasoning aims at cultivating and empowering a person’s [prudence, phronesis, common sense, or whatever you want to call a central faculty of judgment that draws on and integrates everything the person discerns and cares about]. Vicious/damaging use involves taking some piece of the total set of things we care about, stabilizing it into an identity and/or a social movement (“I am a hedonistic utilitarian”, “we are (communists/social justice/QAnon/EA)”), and having this artificially stabilized fragment of the total set of things one cares about act directly in the world without being filtered through one’s total discernment (“Action A is the X thing to do, and I am an X, so I will take action A”).
(Prudence was classically considered not only a virtue but the “queen of the virtues.” As Wikipedia puts it: “Prudence points out which course of action is to be taken in any concrete circumstances… Without prudence, bravery becomes foolhardiness, mercy sinks into weakness, free self-expression and kindness into censure, humility into degradation and arrogance, selflessness into corruption, and temperance into fanaticism.” Folk ethics, or commonsense ethics, has at its heart the cultivation of a total faculty of discernment, plus the education of this faculty to include courage/kindness/humility/whatever other virtues.)
My current guess as to how to develop prudence is basically: take an interest in things, care, notice tiny notes of discord, notice what actions have historically had what effects, notice when one is oneself “hijacked” into acting on something other than one’s best judgment and how to avoid this, etc. I think this is part of what you have in mind about bringing ethical reasoning into daily life, so as to see how kindness applies in specific situations rather than merely claiming it’d be good to apply somehow?
Absent identity-based or social-movement-based artificial stabilization, people can and do make mistakes, including e.g. leaving inaccurate reviews in an attempt to help animals. But I think those mistakes are more likely to be part of a fairly rapid process of developing prudence (which seems pretty worth it to me), and are less likely to be frozen in and acted on for years.
(My understanding isn’t great here; more dialogue would be welcome.)