So you’re saying that there’s a single set of behaviors which, even though different agents assign drastically different values to the same potential outcomes, balances their conflicting interests to provide the most net utility across the group. That could be true, although I’m not convinced.
Even if it is, though, the optimal strategy will change if the net values across the group change. The only point I have ever tried to make in these threads is that the origin of any applicable moral value must be the subjective preferences of the agents involved.
The only reason any agent would agree to follow such a rule set is that you could demonstrate convincingly that such behaviors maximize that agent’s utility. It all comes down to subjective values; there exists no other motivating force.
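The point about the optimal strategy tracking the group's values can be made concrete with a toy model. This is purely an illustrative sketch, not anything from the discussion: the behavior names and utility numbers are invented, and "net utility" is modeled as a simple sum of each agent's subjective valuation.

```python
# Hypothetical sketch: the "optimal" behavior is just an argmax over
# summed subjective utilities, so it shifts whenever those utilities shift.
# All behavior names and numbers below are illustrative assumptions.

def best_behavior(utilities):
    """Pick the behavior with the highest net utility across all agents.

    utilities: dict mapping behavior -> list of per-agent utilities.
    """
    return max(utilities, key=lambda b: sum(utilities[b]))

# Two agents valuing three candidate behaviors:
prefs = {"cooperate": [3, 2], "defect": [5, -4], "abstain": [1, 1]}
print(best_behavior(prefs))  # -> cooperate (net 5 beats 1 and 2)

# If agent 2's preferences change, the "optimal" strategy changes with them:
prefs["defect"] = [5, 1]
print(best_behavior(prefs))  # -> defect (net 6 now wins)
```

Nothing in the model appeals to anything beyond the agents' own valuations, which is the whole point: change the inputs and the "moral" output changes with them.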
… the optimal strategy will change if the net values across the group change.

True, but that may not be as telling an objection as you seem to think. For example, suppose you run into someone (not me!) who claims that the entire moral code is based on the ‘Golden Rule’ of “Do unto others as you would have others do unto you.” Tell that guy that moral behavior changes if preferences change. He will respond, “Well, duh! What is your point?”
There are people who do not recognize this. It was, in fact, my point.
Edit: Hmm, did I say something rude, Perplexed?
Not to me. I didn’t downvote, and in any case I was the first to use the rude “duh!”, so if you were rude back I probably deserved it. Unfortunately, I’m afraid I still don’t understand your point.
Perhaps you were rude to those unnamed people who you suggest “do not recognize this”.
I think we may have reached the point, somewhat common on LW, where we’re arguing even though we have no disagreement.
It’s easy to bristle when someone, in response to you, points out something you thought it was obvious you knew. This happens all the time when people think they’re smart. :)
I’m fond of including a clarification like: “subjective values (values defined in the broadest possible sense, to include even things like your desire to get right with your god, to see other people happy, to not feel guilty, or even to ‘be good’).”
Some ways I’ve found to dissolve people’s language back to subjective utility:
If someone says something is good, right, bad, or wrong, ask, “For what purpose?”
If someone declares something immoral, unjust, or unethical, ask, “So what unhappiness will I suffer as a result?”
But use sparingly, because there is a big reason many people resist dissolving this confusion.