“Morality” is a useful word in that it labels a commonly used cluster of ideaspace. Points in that cluster, however, are not castable to an integer or floating point type. You seem to believe that they do implement comparison operators. How do those work, in your view?
You are using some terminology that I don’t recognize, so I’m uncertain if this is responsive, but here goes.
We are faced with “choices” all the time. The things that motivate us to make a particular decision in a choice are called “values.” As it happens, values can be roughly divided into categories like aesthetic values, moral values, etc.
Values can conflict (i.e., support inconsistent decisions). Functionally, every person has a table listing all the values that person finds persuasive. The values are ranked, so that a person faced with a decision where value A supports a different decision than value B knows to follow the higher-ranked value.
Thus, Socrates says that Aristotle made an immoral choice iff Aristotle was faced with a choice that Socrates would decide using moral values, and Aristotle made a different choice than Socrates would make.
Caveats:
I’m describing a model, not asserting a theory about the territory (i.e., I’m no neurologist).
My statements attempt to provide a more rigorous definition of “value.” Hopefully, it and the other words I invoke rigorously (choice, moral, decision) correspond well to ordinary usage of those words.
Is this what you are asking?
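To make the ranked-table model above concrete, here is a minimal sketch in code. Every name in it (`Value`, `Agent`, `decide`, the honesty/kindness examples) is a hypothetical illustration of the model, not a claim about how minds actually work:

```python
# Toy sketch of "morality as a ranked table of values" (all names hypothetical).
from dataclasses import dataclass, field


@dataclass
class Value:
    name: str
    # Maps each available option to True if this value supports choosing it.
    supports: dict


@dataclass
class Agent:
    # Values listed from highest-ranked to lowest-ranked.
    ranked_values: list = field(default_factory=list)

    def decide(self, options):
        """Consult values from highest rank down; follow the first one that
        supports some option (picking arbitrarily if it supports several)."""
        for value in self.ranked_values:
            supported = [o for o in options if value.supports.get(o)]
            if supported:
                return supported[0]  # higher-ranked values win conflicts
        return None  # a table that yields no decision is "functionally defective"


honesty = Value("honesty", {"tell truth": True, "lie": False})
kindness = Value("kindness", {"tell truth": False, "lie": True})

# Same values, different rankings -> different decisions in a conflict.
socrates = Agent([honesty, kindness])
aristotle = Agent([kindness, honesty])

socrates.decide(["tell truth", "lie"])   # -> "tell truth"
aristotle.decide(["tell truth", "lie"])  # -> "lie"
```

On this sketch, Socrates calls Aristotle's choice immoral exactly because Aristotle's ranking produced a different decision than Socrates's ranking would have.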
That’s a good start. Let’s take as given that “morality” refers to an ordered list of values. How do you compare two such lists? Is the greater morality:
The longer list?
The list that prohibits more actions?
The list that prohibits fewer actions?
The closest to alphabetical ordering?
Something else?
Once you decide what actually makes one list better than another, then consider what observable evidence that difference would produce. With a prediction in hand, you can look at the world and gather evidence for or against the hypothesis that “morality” is increasing.
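To see why the choice of comparison matters, here is a hedged sketch of a few of the candidate comparisons above, treating a morality as an ordered list of value names. All of the functions and the `prohibitions` mapping are made up for illustration; the point is that they induce different orderings, so "greater" is underdetermined until you pick one:

```python
# Hypothetical comparison operators for "morality as an ordered list of values".
# Each returns True iff list_a counts as "greater" under that candidate rule.

def greater_by_length(list_a, list_b):
    # Candidate 1: the longer list is the greater morality.
    return len(list_a) > len(list_b)


def greater_by_prohibitions(list_a, list_b, prohibitions):
    # Candidate 2: prohibitions maps a value name to how many actions it
    # rules out (an invented measure); more prohibitions = greater.
    count = lambda lst: sum(prohibitions.get(v, 0) for v in lst)
    return count(list_a) > count(list_b)


def greater_by_alphabetical_closeness(list_a, list_b):
    # Candidate 4: fewer out-of-alphabetical-order adjacent pairs
    # counts as "closer to alphabetical ordering".
    inversions = lambda lst: sum(a > b for a, b in zip(lst, lst[1:]))
    return inversions(list_a) < inversions(list_b)


a = ["honesty", "kindness"]
b = ["kindness", "honesty", "thrift"]

greater_by_length(a, b)                  # False: b is longer
greater_by_alphabetical_closeness(a, b)  # True: a is already in order
```

Each candidate is computable, but they disagree about which list is "greater", which is exactly the question the list above is pressing.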
People measure morality by comparing their agreement on moral choices. It’s purely behavioral.
As a corollary, a morality that does not tell a person how to make a choice is functionally defective, but it is not immoral.
There are lots of ways of resolving moral disputes (majority rule, check the oracle, might makes right). But the decision of which resolution method to pick is itself a moral choice. You can force me to make a particular choice, but you can’t use force to make me think that choice was right.
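On this purely behavioral reading, "measuring" morality reduces to counting agreement on observed choices. A minimal sketch, with made-up names and data:

```python
# Hypothetical behavioral measure: the fraction of observed moral choices
# on which two people made the same decision.

def agreement_rate(choices_a, choices_b):
    """choices_*: decisions each person made on the same sequence of choices."""
    if len(choices_a) != len(choices_b):
        raise ValueError("need decisions on the same sequence of choices")
    matches = sum(a == b for a, b in zip(choices_a, choices_b))
    return matches / len(choices_a)


socrates_choices = ["tell truth", "keep promise", "return deposit"]
aristotle_choices = ["tell truth", "break promise", "return deposit"]

agreement_rate(socrates_choices, aristotle_choices)  # 2 of 3 choices agree
```

Note that this measures agreement between two lists, not which list is "greater": even a perfect behavioral measure leaves the comparison question above unanswered.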
Sorry, I don’t know what morality is. I thought we were talking about “morality”. Taboo your words.
Ok, I like “ordered list of (abstract concepts people use to make decisions).”
I reiterate my points above: When people say a decision is better, they mean the decision was more consistent with their list than alternative decisions. When people disagree about how to make a choice, the conflict resolution procedure each side prefers is also determined by their list.