I meant to attack this part of ciphergoth’s post:
We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.
For example, framed this way inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that because of the diminishing marginal utility of wealth, today’s unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
I didn’t intend to criticize any real or hypothetical political system. The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility, as ciphergoth seems to propose.
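Regarding the quoted “cost in utility” claim: here is a minimal numeric sketch, assuming a logarithmic wealth/utility curve purely for illustration (the post itself does not specify a curve), of how much utility an unequal split loses relative to an equal split of the same total wealth.

```python
import math

def total_utility(wealths, u=math.log):
    """Sum of individual utilities under an assumed wealth -> utility curve."""
    return sum(u(w) for w in wealths)

# Hypothetical numbers: the same total wealth (100 units) divided two ways.
equal = [50, 50]
unequal = [90, 10]

cost = total_utility(equal) - total_utility(unequal)
print(f"utility cost of the unequal split: {cost:.3f}")  # ~1.022
# Any concave curve gives a non-negative cost here (Jensen's inequality);
# the quoted argument is that the extra total output more than pays for it.
```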
The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility
Well, since you’ve recognized this exploit easily, and already at the hypothetical stage, this kind of vulnerability won’t be a problem. Any consequentialist framework should be able to fight moral sabotage, for example by introducing laws that disincentivize it.
Before disincentivizing, you face the problem of defining and recognizing moral sabotage. It doesn’t sound trivial to me. Remember, groups don’t admit to using the outrage tactic; they do it sincerely, sometimes over several generations of members. I repeat the question: how does a rationalist tell “warranted” emotional disutility from “unwarranted” in a fair way?
Incentive effects are hugely important, but a utilitarian decision process that causes predictable harm is not a true utilitarian decision process. Your question is a tough one, but it’s answerable in principle.
I don’t see the problem in principle with a utilitarian deciding that giving in to an instance of moral sabotage will greatly increase later incidence of moral sabotage, resulting in total disutility greater than the manufactured weeping and gnashing of teeth you face if you stand against it now.
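To make that comparison concrete, here is a toy expected-disutility calculation; all the numbers are invented for illustration. Giving in avoids the manufactured outrage now but raises the expected amount of sabotage later.

```python
# Toy model: total expected disutility of two policies toward an instance
# of moral sabotage. All figures are made up purely for illustration.

outrage_now       = 10.0   # disutility of the manufactured outrage if we resist
concession_cost   = 4.0    # direct disutility of giving in this time
future_incidents  = {"resist": 1, "give_in": 6}   # expected future sabotage attempts
cost_per_incident = 5.0

def total_disutility(policy):
    immediate = outrage_now if policy == "resist" else concession_cost
    return immediate + future_incidents[policy] * cost_per_incident

for policy in ("resist", "give_in"):
    print(policy, total_disutility(policy))
# With these numbers resisting wins (15 < 34), which is exactly the
# calculation the comment above describes a utilitarian making.
```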
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits. Well… which should I pick, then?
Looks like we’ve run into another of those nasty recursive problems: I choose my utility function depending on what every other agent could do to exploit me, and everyone else does the same. The only natural solution might well turn out to be everyone caring about their own welfare and no one else’s, to avoid “mugging by suffering”. Let’s model the problem mathematically and look for other solutions—I love this stuff.
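For what it’s worth, the smallest mathematical model I can see of that recursion is a toy two-player game, sketched below with invented payoffs: one agent commits to an “altruistic” or “selfish” utility function, the other chooses whether to manufacture suffering, and we check which pure-strategy profiles are Nash equilibria.

```python
from itertools import product

# A commits to a utility function; B decides whether to manufacture suffering.
# Payoffs (A, B) are invented purely to illustrate the exploit.
payoffs = {
    ("altruistic", "honest"):   (4, 4),
    ("altruistic", "sabotage"): (1, 7),   # A concedes to the manufactured suffering
    ("selfish",    "honest"):   (4, 4),
    ("selfish",    "sabotage"): (4, 2),   # sabotage is wasted effort against a selfish A
}

def is_nash(a, b):
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= pa for a2 in ("altruistic", "selfish"))
    best_b = all(payoffs[(a, b2)][1] <= pb for b2 in ("honest", "sabotage"))
    return best_a and best_b

print([cell for cell in product(("altruistic", "selfish"), ("honest", "sabotage"))
       if is_nash(*cell)])
# Prints [('selfish', 'honest')]: with these numbers, committing to weigh B's
# reported suffering is exactly what invites the mugging.
```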
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits.
No, it needs a different method of maximizing expected utility. Avoiding moral sabotage doesn’t reflect a preference; it’s purely instrumental.
Thanks, this clicked.
A related idea: moral sabotage is what happens when one player in the Ultimatum game insists on taking more than a fair share, even if what a fair share is depends on his preferences.
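To spell that analogy out, here is a minimal sketch of the Ultimatum game with a responder who rejects any offer below whatever share they have decided is fair; the pie size and the threshold are invented for illustration.

```python
def ultimatum(pie, offer, responder_threshold):
    """Proposer offers `offer` out of `pie`; the responder rejects anything
    below the share they have decided is fair, destroying the whole pie."""
    if offer >= responder_threshold:
        return pie - offer, offer      # deal accepted
    return 0, 0                        # rejected: nobody gets anything

# Hypothetical numbers: a 100-unit pie.
print(ultimatum(100, offer=50, responder_threshold=50))  # (50, 50): a fair split goes through
print(ultimatum(100, offer=50, responder_threshold=70))  # (0, 0): insisting on more than a
                                                         # fair share torpedoes the deal
```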