[ epistemic status: My modeling of this rings true for me, but I don’t know how universal it is. ]
Interesting discussion, and I’m somewhat disappointed but also somewhat relieved that you didn’t discover any actual disagreement or crux, just explored some details and noted that there’s far more similarity in practice than difference. I find discussion of moral theory kind of dissatisfying when it doesn’t lead to different actions or address conflicts.
My underlying belief is that it’s a lot like software development methodology: it’s important to HAVE a theory and some consistency of method, but it doesn’t matter very much WHICH methodology you follow.
In the vast majority of humans, legible morality is downstream of decision-making. We usually make up stories to justify our actions. There is a bit of far-mode moral discussion that has some influence over near-mode decisions, making those stories easier to tell (and actually true, for loose definitions of “true”).
Thus, any moral system implemented in humans has a fair number of loopholes and many exceptions. These show up as uncertainty and inconsistent modeling in consequentialist stories, or as ambiguity and weighting in deontological or virtue stories.
Which makes these systems roughly equivalent in terms of actual human behavior. Except they’re very different in how they make their adherents feel, which in turn makes those adherents behave differently. The mechanism is not legible or part of the moral system; it’s an underlying psychological change in how one interacts with one’s own thinking and in how humans communicate and interact.
> Interesting discussion, and I’m somewhat disappointed but also somewhat relieved that you didn’t discover any actual disagreement or crux, just explored some details and noted that there’s far more similarity in practice than difference.
I feel very similarly, actually. At first, when I heard that Gordon is a big practitioner of virtue ethics, it seemed likely that we’d (easily?) find some cruxes, which is something I had been wanting to do for some time.
But then, when we realized that non-naive versions of these different approaches mostly converge on one another, I dunno, that’s kinda nice too. It simplifies discussions, and it makes it easier for people to work together.
> In the vast majority of humans, legible morality is downstream of decision-making. We usually make up stories to justify our actions. There is a bit of far-mode moral discussion that has some influence over near-mode decisions, making those stories easier to tell (and actually true, for loose definitions of “true”).
I agree. There’s a sort of confusion that happens for many folks: they think their idea of how they make decisions is how they actually make decisions, and they may try to use System 2 thinking to explicitly make that so. But in reality most decisions are a System 1 affair, and any theory is an after-the-fact explanation that makes legible, to ourselves and others, why we do the things we do.
That said, System 2 thinking has an important place as part of a feedback mechanism that directs what System 1 should do. For example, if you keep murdering kittens, having something in System 2 that says murdering kittens is bad is a good way to eventually get you to stop murdering kittens and, over time, to rework System 1 so that it no longer produces in you the desire for kitten murder.
What matters most, as I think you suggest at the end of your comment, is that you have some theory that can be part of this feedback mechanism, so that you don’t just do what you want in the moment to the exclusion of what would have been good to do in the long term because it is prosocial, has good secondary effects, and so on.