My real-world working theory on utility monsters of the type you describe is basically to keep in mind that some people are more sensitive than others, but if anyone reaches utility monster levels (roughly indicated by whether I think “this is completely absurd”), I flip the sign on their utility function.
Excuse me, but I think you should recheck your moral philosophy before you get the chance to act on that. Are you sure that shouldn’t be “become indifferent with respect to optimizing their utility function”, or perhaps “rescale their utility function to a more reasonable range”? Because according to my moral philosophy, explicitly flipping the sign of another agent’s utility function and then optimizing is an evil act.
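As an illustrative sketch (not from the thread itself; the absurdity threshold, the agents’ utilities, and the aggregate function are all hypothetical), here is how the three policies contrasted above, flipping the sign, becoming indifferent, and rescaling to a reasonable range, treat a utility monster very differently when summing utilities:

    # Hypothetical sketch: three ways an aggregator might handle an agent
    # whose reported utility crosses an "absurdity" threshold.
    def aggregate(utilities, monster_policy="rescale", threshold=100.0):
        """Sum utilities, applying monster_policy to any agent whose
        utility exceeds the (hypothetical) absurdity threshold."""
        total = 0.0
        for u in utilities:
            if abs(u) > threshold:
                if monster_policy == "flip":          # the policy objected to above
                    total += -u
                elif monster_policy == "indifferent":  # ignore the monster entirely
                    total += 0.0
                elif monster_policy == "rescale":      # clip to a "reasonable range"
                    total += max(-threshold, min(threshold, u))
            else:
                total += u
        return total

    # Example: one ordinary agent (u = 5) and one utility monster (u = 10,000).
    print(aggregate([5.0, 10_000.0], "flip"))          # -9995.0
    print(aggregate([5.0, 10_000.0], "indifferent"))   # 5.0
    print(aggregate([5.0, 10_000.0], "rescale"))       # 105.0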
My own real-world working theory is that if someone I respect in general expresses a sensitivity that I consider completely absurd, I reduce my level of commitment to my process for evaluating the absurdity of sensitivities.
So you consider it to be a major source of positive utility to antagonize them?
Tongue-in-cheek, yes.