Punishment is quite distinct from moral evaluation. We make moral judgements about agents and actions: are they acting “properly”, by whatever standard of propriety we hold? Are they doing the best they can, in whatever context they are in? If not, why not?
Moral judgement is often cited as a reason to impose punishment, but punishment isn’t actually part of the moral evaluation; it’s a power tactic to enforce behavior in the absence of moral agreement. Judging someone as morally flawed is NOT a punishment. It’s just an evaluation within one’s value system.
The important question isn’t whether we judge someone else as morally flawed, but whether using a moral system leads us to judge ourselves as morally flawed, in which case the punitive element may become clearer.
But maybe not. If you don’t see a moral system which leads one to regard oneself as morally flawed as having an inherent punitive element, I’m going to question whether you experience morality, or whether you just think about it.
To your title and general idea: I don’t understand how anyone can claim that context and luck (meaning anything outside the control of the moral agent in question) aren’t a large part of any value system. Ethical systems can claim to be situation-independent, but such claims are deluded. Most serious attempts I’ve seen acknowledge the large role of situation in the evaluation: the moral comparison is “what one did compared to what one could have done”. Both the actual and the possible counterfactual are contingent on situation.
By making the counterfactual the standard to judge against, the specific terribleness of the situation stops being relevant to the moral self-worth of the actors; only how much better their actions leave the world, compared to inaction or their nonexistence, is morally relevant. A moral system which demands moral actions that are impossible isn’t actually useful.
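As a minimal sketch of that counterfactual standard (my own illustrative framing, not anything from the thread, and assuming we can put a single number on how a world-state turns out):

```python
# Toy sketch of the counterfactual standard: an act's moral score is how
# much better it leaves the world than the baseline of inaction.
# The numeric "world values" here are hypothetical placeholders.

def moral_score(value_after_action: float, value_after_inaction: float) -> float:
    """Score an act against the counterfactual of doing nothing."""
    return value_after_action - value_after_inaction

# In a terrible situation both outcomes may be bad, but the terribleness
# cancels out: only the difference the actor makes is morally relevant.
print(moral_score(value_after_action=-90.0, value_after_inaction=-100.0))   # 10.0
print(moral_score(value_after_action=-100.0, value_after_inaction=-90.0))   # -10.0
```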
But maybe not. If you don’t see a moral system which leads one to regard oneself as morally flawed as having an inherent punitive element, I’m going to question whether you experience morality, or whether you just think about it.
Wow. Now I’m curious whether your moral framework applies only to yourself, or to all people, or to all people who “experience morality” similarly to you.
I do primarily use system 2 for moral evaluations; my gut reactions tend to be fairly short-term and selfish compared to my reasoned preferences. And I do recognize that I (and all known instances of a moral agent) am flawed: I sometimes do things that I don’t think are best, and my reasons for the failures aren’t compelling to me.
Wow. Now I’m curious whether your moral framework applies only to yourself, or to all people, or to all people who “experience morality” similarly to you.
Mu? It applies to whoever thinks it useful.
I think morality is an experience, which people have greater or lesser access to; I don’t think it is actually meaningful to judge other people’s morality. Insofar as you judge other people immoral, I think you’re missing the substantive nature of morality in favor of the question of whether other people sufficiently maximize your values.
If this seems like an outlandish claim, consider whether a hurricane is moral or immoral. Okay, the hurricane isn’t making decisions; really, morality is about how we make decisions. Well, separating it from decision theory (that is, assuming morality is in fact distinct from decision theory), it is not about how decisions are arrived at. So what is it about?
Consider an unspecified animal. Is it a moral agent? Okay, what if I specify that the animal can experience guilt?
I’d say morality is a cluster of concepts related to a specific set of experiences we have in making decisions. These experiences are called “moral intuition”; morality is the exercise of figuring out the common elements, the common values, which give rise to these experiences (such that we can, for example, feel guilt), and of figuring out a way of living in harmony with these values, such that we improve our personal well-being with respect to those experiences.
If your moral system reduces your personal well-being with respect to those experiences in spite of your doing your relative best (that is, if your moral system makes you feel fundamentally flawed in an unfixable way), then I think your moral system is faulty. It’s making you miserable for no reason.
I think morality is an experience, which people have greater or lesser access to;
Interesting. I’ll have to think on that. My previous conception of the topic is that it’s a focal subset of decision theory: a lens for looking at which predictions and payoffs should be considered for their impact on other people.