I’m not sure what ‘should’ means if it doesn’t somehow cash out as preference.
Yeah, “somehow” the two concepts are connected, we can see that, because moral considerations act on our preferences, and most moral philosophies take the preferences of others into consideration when deciding what the moral thing to do is.
But the first thing that you must see is that the concepts are not identical. “I prefer X to happen” and “I find X morally better” are different things.
Take a random parent and they’ll care more about the well-being of their own child than about the welfare of a million other children in a far corner of the world. That doesn’t mean they evaluate a world where a million other children suffer to be a morally better world than a world where just theirs does.
Here’s what I think “should” means. I think “should” is an attempted abstract calculation of our preferences, depersonalized from the context at hand. To put it differently, I think “should” is what we believe we’d prefer to happen if we had no personal stakes involved, or what we believe we’d feel about the situation if our empathy were not centered on our nearest and dearest.
EDIT TO ADD: If I had to guess further, I’d guess that the primary evolutionary reason for our sense of morality is probably not to drive us via guilt and duty but to drive us via moral outrage—and that guilt is there only as our imagined perception of the moral outrage of others. To test that, I’d like to see whether there have been studies determining if people who are guilt-free (e.g. psychopaths) are also free of a sense of moral outrage.
Well, anonymity does lead to antisocial behavior in experiments … and on 4chan, for that matter.
On the other hand, 4chan is also known for group hatefests of moral outrage which erupt into DDOS attacks and worse.