Does the utility function at the time of the choice have some sort of preferred status in the calculation?
Yes, it does. Your present utility function may make reference to the utility functions of your future selves (e.g., you want your future selves to be happy), but structurally speaking, present-day preferences about your future selves are the only way in which those other utility functions can bear on your decisions.
My utility function maximises utilons (and I think this is neither entirely nonsensical nor entirely trivial in this context). I want my future selves to be “happy”, which is ill-defined.
I don’t know how to say this precisely, but I want as many utilons as possible from as many future selves as possible. The problem arises when it appears that actively changing my future selves’ utility functions to match their worlds is the best way to do that, but my current self recoils from the proposition. If I shut up and multiply, I get the opposite result from the one Eliezer gets, and I tend to trust his calculations more than my own.
But surely you must have some constraints about what you consider future selves—some weighting function that prevents you from simply reducing yourself to a utilon-busybeaver.
As far as I can tell, the only things that keep me from reducing myself to a utilon-busybeaver are:
a) insufficiently detailed information on the likelihoods of each potential future-me function, and
b) an internally inconsistent utility function.
What I’m addressing here is b): my valuation of a universe composed entirely of minds that most value a universe composed entirely of themselves is path-dependent. My initial reaction is that such a universe scores very negatively on my current function, but I find it hard to believe that the negative term is truly of larger magnitude than {number of minds}*{length of existence of this universe}*{number of utilons per mind}*{my personal utility of another mind’s utilon}.
Even for a very small positive value of the last factor (and it’s definitely not negative or zero: if it were, I wouldn’t need any justification to torture someone to death), the sheer scale of the other factors should trivialize my personal preference that the universe include discovery and exploration.
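To make that multiplication concrete, here is a minimal Python sketch. Every number and variable name below is a made-up placeholder of my own, chosen only to show the shape of the argument: even a tiny positive weight on another mind’s utilons, carried through the other factors, swamps any plausible fixed penalty my current self attaches to a universe without discovery and exploration.

```python
# Minimal sketch of the comparison above. All magnitudes are hypothetical
# placeholders, not claims about the actual numbers involved.

number_of_minds = 1e15              # {number of minds}
length_of_existence = 1e12          # {length of existence of this universe}
utilons_per_mind = 1.0              # {number of utilons per mind}
weight_on_anothers_utilon = 1e-6    # {my personal utility of another mind's utilon}

# Value, under my current function, of the "rewired" universe: the product above
rewired_universe_value = (number_of_minds
                          * length_of_existence
                          * utilons_per_mind
                          * weight_on_anothers_utilon)

# Fixed penalty my current self assigns to losing discovery and exploration
# (also a placeholder; pick it as large as seems honest)
personal_preference_penalty = 1e9

print(rewired_universe_value)                                # ~1e21
print(rewired_universe_value > personal_preference_penalty)  # True
```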