“that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).”
Utilitarians would rightly attack this, since the probabilities almost certainly won’t wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.
A more important criticism is that humans just physiologically don’t have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility, although utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences, i.e. they have a bounded interest in ‘shutting up and multiplying.’
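To make the contrast concrete, here is a minimal sketch in Python (every number below, the payoffs, the probability, and the saturation scale, is made up purely for illustration): under a linear utility a tiny probability of a vast payoff dominates a sure modest gain, while under a bounded utility the wager can never be worth more than probability times the bound.

```python
import math

def linear_utility(goodness):
    # "Shut up and multiply": value is exactly proportional to the outcome.
    return goodness

def bounded_utility(goodness, scale=100.0, bound=1.0):
    # Saturates at `bound`; roughly linear only while goodness << scale.
    return bound * (1.0 - math.exp(-goodness / scale))

def expected_utility(lottery, u):
    # lottery is a list of (probability, goodness) pairs.
    return sum(p * u(x) for p, x in lottery)

sure_thing   = [(1.0, 10.0)]                      # definitely 10 units of good
pascal_wager = [(1e-9, 1e12), (1.0 - 1e-9, 0.0)]  # tiny chance of a vast payoff

for name, u in [("linear", linear_utility), ("bounded", bounded_utility)]:
    print(f"{name:>7}: sure thing = {expected_utility(sure_thing, u):.4g}, "
          f"wager = {expected_utility(pascal_wager, u):.4g}")

# linear : wager ~ 1000 swamps the sure thing's 10, so the wager wins
# bounded: wager ~ 1e-9 versus ~ 0.095 for the sure thing, so it doesn't
```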
“utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences”
I know this is not what you were suggesting, but this made me think of goal systems of the form “take the action that I think idealized agent X is most likely to take,” e.g. WWAIXID (“What Would AIXI Do”).
A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high entropy. So you’ll end up acting as if you believed with near-certainty the single most likely scenario you can think of.
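A minimal sketch of that failure mode, with made-up numbers: choosing whatever an agent with a sharp (low-entropy) posterior would choose amounts to betting on its single favored scenario, even when your own diffuse (high-entropy) beliefs make that bet negative in expectation.

```python
# Made-up numbers throughout; only the shape of the failure matters.
scenarios = ("A", "B", "C")

idealized_beliefs = {"A": 0.98, "B": 0.01, "C": 0.01}  # sharp: nearly certain of A
your_beliefs      = {"A": 0.40, "B": 0.30, "C": 0.30}  # diffuse: you really don't know

payoffs = {
    "bold":  {"A": 100, "B": -80, "C": -80},  # great if A, costly otherwise
    "hedge": {"A": 10,  "B": 10,  "C": 10},   # modest payoff no matter what
}

def expected_payoff(action, beliefs):
    return sum(beliefs[s] * payoffs[action][s] for s in scenarios)

def best_action(beliefs):
    return max(payoffs, key=lambda a: expected_payoff(a, beliefs))

what_the_ideal_agent_would_do = best_action(idealized_beliefs)  # -> "bold"
what_your_beliefs_recommend   = best_action(your_beliefs)       # -> "hedge"

print(what_the_ideal_agent_would_do,
      expected_payoff(what_the_ideal_agent_would_do, your_beliefs))  # "bold", about -8 under your beliefs
print(what_your_beliefs_recommend,
      expected_payoff(what_your_beliefs_recommend, your_beliefs))    # "hedge", about 10 under your beliefs
```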
Another problem, of course, is that you’ll take actions that only make sense for an agent much more competent than you are. For example, AIXI would be happy to bet $1 million that it can beat Cho Chikun at Go.
In the relevant circumstances, I too might be happy to bet $1M that AIXI can beat Cho Chikun at Go.
This seems like a non-standard way of thinking that needs some explanation. It’s not clear to me that it matters whether my emotions scale linearly, if on reflection I’d endorse the statement “if there are X good things, and you add an additional good thing, the goodness of that doesn’t depend on what X is”. It’s also not clear to me that utilitarians can be seen as having an intrinsic preference for utilitarian behavior, as opposed to a belief that their “true” preferences are utilitarian.
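To spell out why that endorsement would do the work on its own, whatever my emotions do (writing g(X) for the goodness of X good things): if g(X+1) - g(X) = c for every X, then by induction g(X) = g(0) + cX, so aggregate goodness is linear in the number of good things, which is all “shutting up and multiplying” needs.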