Yeah, as a mostly-consequentialist (with caveats for moral uncertainty), I too see whether an action is useful as what “ultimately matters”. And I’m not intrinsically very interested in whether an action is in line with norms or a sense of virtue. But for practical purposes, I’ve found that having an explicit separation between understandability, justifiability, and usefulness is itself quite useful.
For example, if I didn’t have this clear distinction, I think:
I might sometimes have a harder time double-checking whether something I’m about to do really is useful, because it feels so damn clearly understandable and justifiable. And if I did double-check, I might be more inclined to rationalise what I’m about to do, rather than being able to console myself that it’s “fair enough” that I feel this way and want to do this thing, but that it’s still not actually helpful to do it.
I might have a harder time expressing empathy and understanding of others, or be forced to either (a) express empathy and understanding while also condoning whatever they’d done or planned to do, or (b) just seem to be tut-tutting at them for being disgusting fleshy emotiony humans rather than ideal rational utility maximisers.
I might have a harder time thinking clearly, for the purposes of understanding and prediction, about actions that really pissed me off, or about the actions of classic “evildoers” (serial killers, dictators). That could lead to black-and-white thinking, firm categories of “goodies” and “baddies”, and a sense of indignation any time someone tries to understand how a regular human actually ends up as a “baddie”.
This naturally extends to how I think about the actions of others: I mostly wonder whether what they’re doing is useful, and if it isn’t, I’m sympathetic to the difficulty of predicting the effects of our actions.
Do you mean that, if you can’t see the usefulness of an action, you stay sympathetic because you recognise that either you or they might just be having a hard time predicting the action’s effects?
That makes sense, and I use that logic a lot with things like EA cause area preferences and career choices. But in everyday life, and especially (but not only) for non-rationalists and non-EAs, people often aren’t even beginning to predict the effects of their actions; they’re just acting out of habit, norms, etc. That’s the case for many of my own actions.
So my sympathy would often focus more on how the habits, norms, drives, feelings, etc. that led to the action are understandable and, often, “justifiable”, rather than on the difficulty of prediction, since the person may not have engaged in any prediction at all.