Yeah, the notion of “twice as good as things are now” doesn’t actually make sense, because utility is only defined up to positive affine transformations. (That is, if you raised your utility for every outcome by 1000, or doubled them all, you’d make the same decisions afterward as you did before; it’s the relative distances between outcomes that matter, not the scale or the place you call 0. It’s rather like the Fahrenheit and Celsius scales for temperature.)
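To spell out the algebra (a quick sketch in my own notation, where U is your utility function and a, b are arbitrary constants with a > 0):

```latex
% Why positive affine transformations leave decisions unchanged:
% take U'(x) = aU(x) + b with a > 0.  For any two lotteries p and q,
\[
  \mathbb{E}_p[U'] - \mathbb{E}_q[U']
    = a\left(\mathbb{E}_p[U] - \mathbb{E}_q[U]\right),
\]
% so whichever option had higher expected utility under U still has it
% under U'.  Ratios of utility differences, \( \frac{U(x)-U(y)}{U(y)-U(z)} \),
% are likewise preserved; those "relative distances" are the meaningful part.
```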
But anyway, you can figure out the relative distances in the same way; call what you have right now 1000, imagine some particular awesome scenario and call that 2000, and then figure out the utility of having another stroke, relative to that. For any plausible scenario (excluding things that could only happen post-Singularity), you should wind up again with an extremely negative (but not ridiculous) number for a stroke.
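Here’s a minimal sketch of that calibration, assuming you pin down the stroke’s utility with a probability-equivalent question (that particular elicitation method, and all the numbers, are just illustrative):

```python
def utility_of_bad_outcome(u_now, u_awesome, p_indifferent):
    """Back out u(stroke) from an indifference point.

    Suppose you are indifferent between (a) keeping the status quo and
    (b) a gamble that gives the awesome scenario with probability p and
    another stroke with probability 1 - p.  Expected utility then says
        u_now = p * u_awesome + (1 - p) * u_stroke,
    which we solve for u_stroke.
    """
    return (u_now - p_indifferent * u_awesome) / (1.0 - p_indifferent)

# Anchors from above: status quo = 1000, awesome scenario = 2000.
# If you'd only take the gamble at a 99.9% chance of the awesome outcome:
print(utility_of_bad_outcome(1000, 2000, 0.999))  # -998000.0
```

The more reluctant you are to risk the stroke, the closer the indifference probability gets to 1 and the more negative the implied utility; but it stays finite, which is the “extremely negative but not ridiculous” part.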
On the other hand, conscious introspection is a very poor tool for figuring out our relative utilities (to the degree that our decisions can be said to flow from a utility function at all!), in particular because of signaling.
Certainly. Or, really, much of anything else. Is there a better tool available in this case?
Not that I know of. Just a warning not to be too certain of the results you get from this algorithm: your extrapolations to actual decisions may be far from what you’d actually do.