Ah! So this mysterious notion is (like ‘3 X 3 = 9’) something “analytic a priori”. Ok, suppose I made the following claim:
Morality is simply rational self-interest, as it would play out in an idealized social environment. The idealization is that everything known by any agent is common knowledge among all agents. This means that every agent knows the utility function of every other agent, every agent estimates the same consequences as other agents, and every agent knows what other agents do. So, for example, morality requires that you act as if your actions are public knowledge, even though you know they are not public and you could ‘get away with it’.
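To make that last sentence concrete, here is a minimal sketch of what "act as if your actions are public knowledge" could mean computationally. It is my own toy construction; the payoff numbers and the reputation term are invented for illustration, not implied by the claim itself.

```python
# Toy model, my own construction with invented payoff numbers: an agent choosing
# between "cooperate" and "cheat" when cheating pays off only if it stays secret.
PAYOFFS = {
    "cooperate": {"immediate": 1.0, "reputation_if_known": 2.0},
    "cheat": {"immediate": 3.0, "reputation_if_known": -5.0},
}

def best_action(actions_are_public: bool) -> str:
    """Return the payoff-maximizing action under the given visibility assumption."""
    def total(action: str) -> float:
        p = PAYOFFS[action]
        return p["immediate"] + (p["reputation_if_known"] if actions_are_public else 0.0)
    return max(PAYOFFS, key=total)

print(best_action(actions_are_public=False))  # 'cheat': the agent thinks it can get away with it
print(best_action(actions_are_public=True))   # 'cooperate': the verdict under the idealization
```

The only point is that the idealization flips which action maximizes self-interest; the particular numbers carry no argument by themselves.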
Now, further suppose that you disagree with my claim. On what grounds would you disagree? If you say “No, that is not morality!”, what evidence or argument could you offer other than your own moral intuitions and those of the rest of mankind? I ask because those moral intuitions do not have the same analytic a priori character as ‘3 X 3 = 9’. And they can change.
Or suppose you asked me to defend my claim, and I submit mathematical proofs that rational agents cannot reach Pareto optimal bargains unless payoffs, consequences, and actions are common knowledge among every participant in the bargain. These proofs are every bit as unchanging as ‘3 X 3 = 9’, but are they also just as irrelevant?
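The proofs are not reproduced here, but the flavour of the obstruction shows up in a toy bargaining problem of my own choosing, with arbitrary numbers rather than anything taken from those proofs:

```python
# Illustrative numbers of my own choosing, not the proofs themselves: a buyer who
# values an item at 10 makes a take-it-or-leave-it offer to a seller whose cost is
# privately either 0 or 8 (equally likely). The seller accepts any offer >= her cost.
BUYER_VALUE = 10
SELLER_COSTS = [0, 8]  # known only to the seller

def buyer_expected_profit(offer: float) -> float:
    """Average the buyer's profit over the seller's possible private costs."""
    profits = [(BUYER_VALUE - offer) if offer >= cost else 0.0 for cost in SELLER_COSTS]
    return sum(profits) / len(profits)

print(buyer_expected_profit(0))  # 5.0 -> trades only with the low-cost seller
print(buyer_expected_profit(8))  # 2.0 -> trades with both seller types

# The low offer maximizes expected profit, so when the seller's cost turns out to be 8
# no trade happens, even though any price between 8 and 10 would benefit both sides.
# If the cost were common knowledge, the buyer would offer (just above) the true cost
# and no mutually beneficial trade would ever be missed.
```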
Well, they’re relevant if you make a claim that morality should be certain things—but since that’s awfully close to a moral claim, I’d say the argument is self-defeating. In fact, that sort of argument might be generalizable to show that this morality is unsupportable—not contradicted, but merely unsupported.
Hmmm. My understanding is that this is a meta-ethical claim; it answers the question of what morality is. Moral claims would answer questions like “What action, if any, does morality require of me?” in some given situation.
Your phrasing of ‘what morality is’ as ‘what morality should be’ strikes me as simply playing with words.
If we ignore the object “morality” and just look at basic actions, your proposal about what morality is labels some actions as right and others as wrong (or good and bad, or moral and immoral). It’s really by that standard that I call it a “moral claim,” in a similar class to “it’s immoral to kick puppies.”
I guess I don’t agree that my example claim says anything directly about which actions are moral and immoral. What it does is to suggest an algorithm for finding out. And the first step is to find out some empirical facts—for example, “What are puppies and how do people feel about them? If I kick puppies, will there be negative consequences in how other people treat me?”
ETA: Wikipedia seems to back me up on this distinction between metaethics and normative ethics:
“A meta-ethical theory, unlike a normative ethical theory, does not attempt to evaluate specific choices as being better, worse, good, bad, or evil; although it may have profound implications as to the validity and meaning of normative ethical claims.”
But your algorithm is evaluable. I guess I don’t see the difference between “the no-kicking-puppies morality is correct” and “don’t kick puppies.”
I don’t see much difference either. But the algorithm I proposed says neither of those two things.
It says “If you want to know whether kicking puppies is moral, here is how to find out.” The algorithm is the same for Americans, Laotians, BabyEaters, FAIs, uFAIs, and presumably Neanderthals before the wolf was domesticated into the dog. The algorithm instructs the user to consider an idealized version of the society in which he is embedded.
Please consider the possibility that some executions of that algorithm might yield different results than did the execution which you performed, using your own society.
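One crude way to spell that algorithm out, purely as an illustration under my own stand-in assumptions:

```python
# A sketch of the procedure as described above; the helper functions, the toy
# "society", and every number are stand-ins of my own, not part of the claim.

def gather_empirical_facts(action, society):
    """Step 1: empirical facts, e.g. how strongly each other agent cares about the action."""
    return {agent: prefs.get(action, 0.0) for agent, prefs in society.items()}

def moral_under_idealization(action, actor_gain, facts):
    """Steps 2-3: idealize (the action is treated as common knowledge, so every
    agent reacts to it) and ask whether it still serves the actor's interest."""
    reactions = sum(facts.values())  # how others' treatment of the actor shifts once they know
    return actor_gain + reactions > 0

# Toy data: each agent's (dis)approval of the action "kick_puppy".
society = {"alice": {"kick_puppy": -4.0}, "bob": {"kick_puppy": -3.0}}

facts = gather_empirical_facts("kick_puppy", society)
print(moral_under_idealization("kick_puppy", actor_gain=2.0, facts=facts))  # False
```

Different inputs (a different society, or different empirical facts) can of course return a different verdict, which is the point of the preceding paragraph.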
Well, but then it’s “kicking puppies is immoral if X.” A conditional doesn’t seem to change the fact that something is a moral claim. Hmm… or would it in some situations? I can’t think of any. Oh, you could just rephrase it as “kicking puppies when X is immoral,” which is more clearly a moral claim.
A conditional doesn’t seem to change the fact that something is a moral claim. Hmm… or would it in some situations? I can’t think of any.
Only (as an exception) when there is something after the “IF” that directly or indirectly supplies the moral unit. Then it could be a mere logical claim, though most will be unable to distinguish that from a moral claim anyway. The decision to apply an unambiguous, fully specified logical deduction based on a moral value is usually considered a moral judgement itself.
Apparently you and I interpret the quoted Wikipedia passage differently, and I don’t see how to resolve it.
Nor, now that I think about it, do I see a reason why either of us should care. Why are we engaged in arguing about definitions? I am bowing out.
Morality is simply rational self-interest, as it would play out in an idealized social environment. The idealization is [...]
Now, further suppose that you disagree with my claim. On what grounds would you disagree?
It doesn’t seem to capture the social-signalling side of morality. Morality, in part, is a way for humans to show what goodie-two-shoes they are to other humans—who might be prospective mates, collaborators, or allies. That involves less self-interest—and more signalling unselfishness.
It doesn’t seem to capture the “manipulation” side of morality very well either. Moral systems are frequently applied to get others to stop doing what you don’t want them to do, by punishing, shaming, embarrassing, etc.
So, my assessment would be: incomplete hypothesis.
I don’t see how this is responsive. You realize, don’t you, that this discussion is proceeding under Nesov’s stipulation that moral truth is a priori (like ‘3 X 3 = 9’). We are operating here under a stance of moral realism and ethical non-naturalism.
If your concept of morality doesn’t fit into this framework, this is not the place for you to step in.
I thought you were talking about human morality. Checking back, that does appear to have been the context of the discussion.
Science has studied that topic; we have more to go on than intuition. An example of morality-as-signalling: Signaling Goodness: Social Rules and Public Choice.
Your idealisation makes signalling seem pointless, since everybody knows everything about the other players. Indeed, I don’t really see the point of your model. You are not attempting to model very much of the biology involved. You asked for criticism, and that is an obvious one. Another criticism is that you present a model, but it isn’t clear what it is for.
I thought you were talking about human morality.
I was not.
Checking back, that does appear to have been the context of the discussion.
Check again. Carefully.
You asked for criticism
I did not. I asked a question about Nesov’s metaethical position, using that toy theory of ethics as an example. I asked what kinds of grounds might be used to reject the toy theory. (The grounds you suggest don’t fit (IMHO) the metaethical stance Nesov had already committed to.)
Was I really so unclear? Please read the Wikipedia entry on metaethics and reread the thread before responding, if you wish to respond.
Oh, and when I think back on the number of times you have inserted a comment about signaling into a discussion that seemed to be about something else entirely, I conclude that you really, really want to have a discussion with somebody, anybody on that topic. May I suggest that you produce a top-level posting explaining your ideas.