Given that you consider universes to be infinite binary sequences, I’m not comfortable with the restriction that a rational agent’s utility function must be bounded. Unless the idea is just that this is an interesting and tractable case to look at.
I’m doubtful that a human’s utility function, if there could be such a thing, would be bounded in a universe in which, for example, they lived forever (and surely some Tegmark IV universes are such). I assume that a utility function is supposed to be a rational idealization of an agent’s motivations, not a literal description of any given agent. So you need not point out that no living humans seem to be risking everything (i.e., despising all finite utilities) on a plan to make living forever a little more likely.
On a more positive note—I do like your approach to humans’ “native domain” and ontological problems.
Thanks for commenting!
Unbounded utility functions come with a world of trouble. For example, if your utility function is computable, its Solomonoff expectation value will almost always diverge, since utilities can grow as fast as BB(Kolmogorov complexity) whereas probabilities only fall as 2^{-Kolmogorov complexity}. Essentially, it’s Pascal’s mugging.
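To spell out the divergence (a sketch in my notation, not the post’s; it takes as given the claim above that hypotheses of Kolmogorov complexity $n$ can carry utilities of order $\mathrm{BB}(n)$):

$$\mathbb{E}[U] \;=\; \sum_x 2^{-K(x)}\, U(x) \;\gtrsim\; \sum_n 2^{-n}\, \mathrm{BB}(n) \;=\; \infty,$$

since $\mathrm{BB}(n)$ eventually dominates every computable function, in particular $2^n$, so the terms of the sum do not even tend to zero.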
It is possible to consider utility functions that give a finite extra reward for living forever. For example, say that the utility of T years of life is 1 - exp(-T / tau), whereas the utility of an infinite number of years of life is 2. Such a utility function is not lower semicontinuous, but as I explained in the post, it seems that we only need upper semicontinuity.
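Concretely (my formalization of the example, not the post’s): with

$$U(T) = 1 - e^{-T/\tau} \quad (T < \infty), \qquad U(\infty) = 2,$$

we get $\limsup_{T \to \infty} U(T) = 1 \le 2 = U(\infty)$, so $U$ is upper semicontinuous at $T = \infty$, while $\liminf_{T \to \infty} U(T) = 1 < 2 = U(\infty)$ shows that lower semicontinuity fails there.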
“Finite extra reward”—sneaky, I like it! I’m still in doubt, mind you. Pascal’s mugging might be part of a reason to abandon cardinal utility altogether, rather than restricting it to bounded forms.