The assumption I’m talking about is that the state of the rest of the universe (or multiverse) does not affect the marginal utility of there also being someone having certain experiences somewhere in the uni-/multiverse.
Now, I am not fond of treating probabilities and utilities separately; instead, consider your decision function.
Linearity means that your decisions are independent of observations of far-away parts of the universe. In other words: take one system over which your agent optimizes expected utility, and compare it to the situation where there are two systems. Your utility function is linear (additively separable across the systems) iff you can make decisions locally, that is, without considering the state of the other system.
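To make the separability point concrete, here is a minimal Python sketch (all utility functions are made up for illustration): with an additively separable utility, the best local action ignores the far system’s state; with a coupled utility it does not.

```python
actions = [0, 1, 2]

def u_local(x):
    # utility from the local system alone (toy choice)
    return -(x - 1) ** 2

def u_far(y):
    # utility from the far-away system (toy choice)
    return 2 * y

# Separable case: U(x, y) = u_local(x) + u_far(y).
# The argmax over x is the same for every state y of the far system,
# so the agent can decide locally without ever observing y.
for y in actions:
    best_x = max(actions, key=lambda x: u_local(x) + u_far(y))
    print("separable:", y, "->", best_x)  # same best_x for all y

# Coupled case: U(x, y) = -(x - y)^2.
# Now the optimal x tracks y, so local decisions are impossible
# without looking at the state of the other system.
for y in actions:
    best_x = max(actions, key=lambda x: -(x - y) ** 2)
    print("coupled:  ", y, "->", best_x)  # best_x depends on y
```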
Clearly, almost nothing has a linear decision / utility function.
I think people mistake the following (amazing) heuristic for linear utility: if there are very many local systems, and you have a sufficiently smooth utility function and probability distribution over all of them, then you can do mean-field: you don’t need to look, because the law of large numbers guarantees strong concentration bounds. In this sense you don’t need to couple all the systems to each other; they all just couple to the mean field.
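A rough simulation of that heuristic, with an assumed i.i.d. Exp(1) contribution per system and a log utility (both choices are mine, purely for illustration):

```python
import math
import random

random.seed(0)

# Hypothetical setup: n background systems, each contributing an i.i.d.
# Exp(1) "welfare" value; the smooth, nonlinear utility is log of the total.
n = 100_000
mean_welfare = 1.0  # mean of Exp(1)

def sample_total():
    return sum(random.expovariate(1.0) for _ in range(n))

# Full computation: utility of actually sampled backgrounds.
samples = [math.log(sample_total()) for _ in range(20)]

# Mean-field shortcut: replace every system by its mean and never look.
mean_field = math.log(n * mean_welfare)

print("mean-field approximation:", mean_field)
print("sampled utilities, min/max:", min(samples), max(samples))
# By the law of large numbers the sampled values concentrate tightly
# around the mean-field value, so for smooth utilities you lose almost
# nothing by not observing the background.
```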
To be more practical: someone might claim to have an almost linear (altruistic) utility for QALYs over the next 5 years (so time-discounting is irrelevant). Equivalently: whether some war in the Middle East is terrible or not does not influence his/her malaria-focused charity work (say, he/she only decides on this specific topic).
Awesome, he/she does not need to read the news! And this is true to some extent, but becomes bullshit at the tails of the distribution. (The news becomes relevant if e.g. the nukes fly, because that pushes you into a more nonlinear regime of your utility function; on the other hand, given an almost fixed background population, log-utility and linear utility are indistinguishable by Taylor expansion.)
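To spell out the parenthetical: with an almost fixed background population N and a perturbation x (e.g. QALYs added by your charity work), a Taylor expansion gives

```latex
\log(N + x) \;=\; \log N + \frac{x}{N} - \frac{x^2}{2N^2} + \dots
\;\approx\; \log N + \frac{x}{N}
\qquad \text{for } |x| \ll N .
```

To first order, log-utility is an affine function of x, and affine transformations never change decisions; the difference only shows up when x becomes comparable to N, i.e., exactly at the tails.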
Re Pascal’s muggle: obviously your chance of having a stroke and hallucinating weird stuff outweighs your chance of witnessing actual magic. I think it is quite clear that the marginal cost of giving 5 bucks to an imaginary mugger, before the ambulance arrives to maybe save you, is negligible; decision-theoretically, you win by precommitting to pay the mugger and call an ambulance once you observe something sufficiently weird.
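A back-of-the-envelope version of that precommitment argument, with every number invented purely for illustration:

```python
# Toy expected-utility sketch for the Pascal's-muggle situation,
# conditional on having just observed something sufficiently weird.
# All numbers below are made up for illustration only.

p_stroke = 1e-6   # prior probability of stroke-plus-hallucination
p_magic = 1e-18   # prior probability of actual magic (assumed far smaller)

# Posterior that it's a stroke, given the weird observation:
p_stroke_given_weird = p_stroke / (p_stroke + p_magic)  # ~= 1

cost_mugger = 5         # dollars handed to the possibly imaginary mugger
value_ambulance = 1e6   # dollar-equivalent of prompt stroke treatment

eu_pay_and_call = -cost_mugger + p_stroke_given_weird * value_ambulance
eu_do_nothing = 0.0

# The $5 is noise next to the value of the ambulance in the
# overwhelmingly-more-likely stroke world.
print(eu_pay_and_call > eu_do_nothing)  # True
```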