If politicians start following expected utility consequentialism, special interest groups will be able to exploit the system by manufacturing in themselves “offense” (extreme emotional disutility) at unfavored measures, forcing your maximizer to give in to their demands. To avoid this, you need a procedure for distinguishing “warranted” offense from “unwarranted” offense: some baseline of personal rights ultimately derived from something other than self-assessed emotional utility.
If you see a way around this difficulty, let me know, because it seems insurmountable to me right now. Until we sort this out, I find it hard to talk about politics from a consequentialist standpoint, because most successful interest groups today are already heavily using the exploit I’ve described.
I don’t see the object of attack in the room. An exploration of potential utility-maximization political frameworks and their practical pitfalls could be interesting, although in practice I expect this sort of institution to turn into a kind of market rather than something politician-mediated.
I meant to attack this part of ciphergoth’s post:

We need not ask whether the rich deserve their wealth, or who is ultimately to blame for a thing; every question must come down only to what decision will maximize utility.

For example, framed this way, inequality of wealth is not justice or injustice. The consequentialist defence of the market recognises that, because of the diminishing marginal utility of wealth, today’s unequal distribution of wealth has a cost in utility compared to the same wealth divided equally, a cost that we could in principle measure given a wealth/utility curve, and goes on to argue that the total extra output resulting from this inequality more than pays for it.
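(An aside on the quoted wealth/utility claim: here is a minimal sketch using a logarithmic wealth/utility curve, which is my assumption, not ciphergoth’s, and made-up figures. The same total wealth yields less total utility when split unequally, and the unequal arrangement only wins if it generates enough extra output to close that gap.)

```python
import math

def total_utility(wealths):
    # Logarithmic utility is one standard stand-in for a curve with
    # diminishing marginal utility of wealth.
    return sum(math.log(w) for w in wealths)

equal   = [50, 50]   # same total wealth, split equally
unequal = [70, 30]   # same total wealth, split unequally

gap = total_utility(equal) - total_utility(unequal)
print(f"utility cost of the inequality: {gap:.2f}")   # about 0.17 with these numbers

# The consequentialist defence only goes through if the unequal economy's
# extra output more than pays for that gap, e.g. 20% more output here:
boosted = [84, 36]
print(total_utility(boosted) > total_utility(equal))  # True for these made-up figures
```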
I didn’t intend to criticize any real or hypothetical political system. The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility, as ciphergoth seems to propose.
The same emotional exploit could easily defeat a community of rationalists independently evaluating political measures for their utility
Well, since you’ve easily recognized this exploit already at the hypothetical stage, this kind of vulnerability won’t be a problem. Any consequentialist framework should be able to fight moral sabotage, for example by introducing laws that disincentivize it.
Before disincentivizing, you face the problem of defining and recognizing moral sabotage. It doesn’t sound trivial to me. Remember, groups don’t admit to using the outrage tactic; they do it sincerely, sometimes over several generations of members. I repeat the question: how does a rationalist tell “warranted” emotional disutility from “unwarranted” in a fair way?
Incentive effects are hugely important, but a utilitarian decision process that causes predictable harm is not a true utilitarian decision process. Your question is a tough one, but it’s answerable in principle.
I don’t see the problem in principle with a utilitarian deciding that giving in to an instance of moral sabotage will greatly increase later incidence of moral sabotage, resulting in total disutility greater than the manufactured weeping and gnashing of teeth you face if you stand against it now.
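A back-of-the-envelope version of that comparison, with entirely made-up numbers just to show the shape of the calculation: giving in avoids the manufactured outrage today, but if it raises the expected number of future sabotage attempts enough, standing firm costs less in total.

```python
# All figures invented for illustration.
outrage_cost    = 10   # disutility of the manufactured outrage when you stand firm once
concession_cost = 4    # disutility of the bad measure when you give in once

# Giving in rewards the tactic, so assume it multiplies future attempts.
future_attempts_if_concede = 8
future_attempts_if_resist  = 1

# Policy "always concede": every attempt extracts a concession.
total_if_concede = concession_cost * (1 + future_attempts_if_concede)   # 36
# Policy "always resist": every attempt just produces outrage, but there are fewer attempts.
total_if_resist  = outrage_cost * (1 + future_attempts_if_resist)       # 20

print(total_if_concede, total_if_resist)   # resisting wins under these assumptions
```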
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits. Well… which should I pick, then?
Looks like we’ve run into another of those nasty recursive problems: I choose my utility function depending on what every other agent could do to exploit me, and everyone else does the same. The only natural solution might well turn out to be everyone caring about their own welfare and no one else’s, to avoid “mugging by suffering”. Let’s model the problem mathematically and look for other solutions—I love this stuff.
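Here is about the smallest version of the model I have in mind, with invented payoffs: a naive altruist who maximizes the sum of self-reported welfare, and an interest group that can report manufactured outrage far larger than its true cost. Under that utility function the outrage report is a winning move, which is the exploit; the open question is what aggregation rule, short of “ignore everyone else’s welfare”, removes the incentive.

```python
# Toy model of "mugging by suffering"; all payoffs invented.
def naive_altruist_passes(measure_benefit, reported_harms):
    # Pass the measure iff its benefit outweighs the sum of self-reported harm.
    return measure_benefit + sum(reported_harms) > 0

true_cost    = -1    # what the measure actually costs the interest group
manufactured = -50   # what the group reports if it plays the outrage tactic
benefit      = 10    # the measure's benefit to everyone else

print(naive_altruist_passes(benefit, [true_cost]))     # True: the measure passes
print(naive_altruist_passes(benefit, [manufactured]))  # False: the outrage report blocks it

# Reporting outrage dominates honest reporting whenever it changes the outcome
# in the group's favour, so a utility function built on self-assessed reports
# invites exactly this exploit.
```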
So a powerful agent (or a mass of tiny agents with large total power) needs a different utility function on future worlds than that of a lone rationalist observer, due to the need to avoid exploits.
No, it needs a different method of maximizing expected utility. Avoiding moral sabotage doesn’t reflect a preference, it’s purely instrumental.
Thanks, this clicked.

A related idea: moral sabotage is what happens when one player in the Ultimatum game insists on taking more than a fair share, even if what a fair share is depends on his preferences.
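A minimal sketch of that Ultimatum-game reading, with invented stakes: if the responder’s declared “fair share” is taken at face value as a credible rejection threshold, a money-maximizing proposer simply concedes it, so inflating the declared fair share pays, which is the same exploit as manufactured offense.

```python
PIE = 100  # total amount to be split; an arbitrary figure

def proposer_offer(claimed_fair_share):
    # Best response of a money-maximizing proposer who takes the responder's
    # rejection threshold at face value: offer the minimum that gets accepted.
    if claimed_fair_share >= PIE:
        return None                  # nothing left for the proposer, so no deal
    return claimed_fair_share

print(proposer_offer(50))   # 50: an honest notion of a fair share
print(proposer_offer(90))   # 90: insisting on more than a fair share is rewarded
```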