My impression is that morality is all about chasing dangling nodes. There are questions about how certain outcomes make you feel. There are questions about how people actually act. There are questions about what actions would lead to the world being a “better place” (however you define it). But asking about whether something is “moral” seems to be chasing a dangling node to me.
Yes and no. Morality is certainly less fundamental than physics, but I would argue it is no less real a concept than “breakfast” or “love,” and it has enough coherence – thingness – to be worth outlining and reasoning about.
The central feature of morality that needs explaining, as I understand it, is how certain behaviors or decisions make you feel in relation to how other people feel about your behaviors, which is not something you have full control over. It is a distributed cognitive algorithm: a mechanism for directing social behavior through the sharing of affective judgements.
I’ll attempt to make this more concrete. Actions that are morally prohibited have consequences, both in the form of direct social censure (from the moral rule itself) and in indirect effects, social or otherwise. You can think of the direct social consequences as a fail-safe that stops dangerous behavior before real harm can occur, though of course it doesn’t always work very well. In this way the prudential sense of should is closely tied to the moral sense of should – sometimes in a pure, self-sustaining way, the original or imagined harm becoming a lost purpose.
None of this means that morality is a false concept. Even though you might explain why moral rules and emotions exist, or point out their arbitrariness, it’s still simplest and I’d argue ontologically justified to deal with morality the way most people do. Morality is a standing wave of behaviors and predictable shared attitudes towards them, and is as real as sound waves within the resonating cavity of a violin. Social behavior-and-attitude space is immense, but seems to contain attractors that we would recognize as moral.
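To make the “distributed cognitive algorithm” framing a bit more concrete, here is a toy sketch of my own (nothing from the original discussion; every agent, update rule, and constant is a hypothetical placeholder). It only illustrates the mechanism being claimed: agents share affective judgements about a behavior, the resulting approval or censure shifts how often the behavior gets performed, and the population settles into a stable shared pattern of the kind described above.

```python
# Toy sketch, assuming a very simple model of "sharing affective judgements":
# each agent has an attitude toward one behavior and a propensity to do it.
# Approval/censure from others nudges the actor's propensity; attitudes are
# partially copied between agents. All numbers are arbitrary placeholders.
import random

N_AGENTS = 20
N_ROUNDS = 2000

# attitude in [-1, 1]: disapproval to approval; propensity in [0, 1]
attitudes = [random.uniform(-1, 1) for _ in range(N_AGENTS)]
propensity = [random.uniform(0, 1) for _ in range(N_AGENTS)]

for _ in range(N_ROUNDS):
    actor = random.randrange(N_AGENTS)
    if random.random() < propensity[actor]:
        # The rest of the group expresses its average judgement of the act,
        # and the actor's propensity shifts toward praise or away from censure.
        feedback = sum(a for i, a in enumerate(attitudes) if i != actor) / (N_AGENTS - 1)
        propensity[actor] = min(1.0, max(0.0, propensity[actor] + 0.1 * feedback))
    # Judgements themselves are shared: the actor drifts toward a peer's attitude.
    peer = random.randrange(N_AGENTS)
    attitudes[actor] += 0.05 * (attitudes[peer] - attitudes[actor])

# After many rounds attitudes have largely converged and the behavior's
# frequency has settled to match the shared judgement, a crude "attractor".
print("mean attitude:   %+.2f" % (sum(attitudes) / N_AGENTS))
print("mean propensity:  %.2f" % (sum(propensity) / N_AGENTS))
```

No single agent controls where this settles, which is the sense in which the standing wave above can be real without being fundamental.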
That said, I do think it’s valuable to ask the more grounded questions of how outcomes make individuals feel, how people actually act, etc.
How about asking what you should do? By refusing to ask that question you haven’t explained morality, you’ve merely stuffed it under the rug. Since the “should” question actually is important (and pressing), you’ll find yourself having to sneak in connotations to answer it.
For example, you wrote:
There are questions about how certain outcomes make you feel. There are questions about how people actually act. There are questions about what actions would lead to the world being a “better place” (however you define it).
It’s possible to add should-like connotations to any of the above questions and end up with a (generally bad) theory of morality.
What determines whether or not you “should” do something?
My thought is that “should” requires an axiom. You could say “you shouldn’t kill people… if you don’t want people to suffer”, or “you should kill people… if you want to go to jail”.
In practice, I think people have similar ideas about how outcomes make them feel: outcome X feels just, outcome Y feels unjust, etc.
When people use the word “should”, I think they’re implicitly saying “should… in order to achieve the outcomes that I/society feel are just”.
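A minimal sketch of that point (mine, not the commenter’s; the outcome strings and axiom functions are hypothetical placeholders): a “should” verdict only becomes computable once you supply both the action’s predicted outcomes and an axiom to judge them against.

```python
# Hedged illustration of "should requires an axiom": the same action gets
# opposite verdicts under different axioms. Outcomes and axioms here are toy
# placeholders, not a proposed moral theory.
def should(predicted_outcomes, axiom):
    """A 'should' verdict is only defined relative to an axiom, i.e. a
    function that judges the predicted outcomes against some goal."""
    return axiom(predicted_outcomes)

kill_outcomes = {"people suffer", "you go to jail"}

# Axiom 1: "I don't want people to suffer."
no_suffering = lambda outcomes: "people suffer" not in outcomes
# Axiom 2: "I want to go to jail."
seek_jail = lambda outcomes: "you go to jail" in outcomes

print(should(kill_outcomes, no_suffering))  # False: "you shouldn't kill people..."
print(should(kill_outcomes, seek_jail))     # True: "...if you want to go to jail"
```

On this reading, ordinary usage simply leaves the axiom implicit (“whatever I/society feel is just”), which is the substitution described above.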
This is basically the issue of whether categorical imperatives are a coherent concept. I have the same feeling as you: that they are not, and that I don’t even understand what it would mean for them to be. I’m continually baffled by the fact that so many human minds are apparently able to believe that categorical imperatives are a thing. This strikes me as a difficult problem somewhere at the intersection between philosophy, linguistics, and cognitive psychology.
If you don’t even understand what it would mean, this could be a symptom that you are understanding “categorical imperative” differently than they do. I’m going to guess that you are assuming metaethical motivational internalism.
Therein lies your difficulty.
No, it doesn’t, because your guess is wrong.
That is precisely the question we are trying to answer.