Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.
In short, the theory that a given agent currently experiences, or would under some specific circumstance experience, ‘infinite utility’ makes no meaningful predictions.
Consider instead Kind Clippet; just like Clippet, she gets infinite utils from having a paperclip, but she also gets 1 util if mankind survives the next century. She’ll do exactly what Clippet would do, unless she is offered the chance to help mankind at no cost to the paperclip, in which case she will do so. Her behaviour is, however, different from that of any agent who assigns finite real values to the paperclip and mankind.
Does it even make sense to talk about “the chance to do X at no cost to Y”? Any action an agent can perform, no matter how apparently unrelated, must have some minuscule influence on the probability of achieving every other goal the agent might have (even if only by wasting time). Normally we can say it’s a negligible influence, but if Y’s utility is literally supposed to be infinite, it would dominate.
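To put that worry in symbols (my own notation, not anything from the thread): for any action $a$,

$$\mathbb{E}[U \mid a] \;=\; P(\text{paperclip} \mid a) \cdot \infty \;+\; \sum_i P(o_i \mid a)\, u(o_i),$$

and any nonzero difference in $P(\text{paperclip} \mid a)$ between two actions swamps every finite term, so “no cost to Y” would require the relevant probabilities to be exactly equal.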
No. This is one of the problems with trying to have infinite utility: Kind Clippet won’t actually act any differently from Clippet. Infinity + 1 is, to the extent it is defined in this sort of context at all, the same as infinity; you would need to be using cardinal arithmetic, under which the + 1 is simply absorbed. And if you try to use ordinal arithmetic instead, then addition won’t be commutative, which leads to other problems.
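For concreteness, the standard facts being appealed to here:

$$\aleph_0 + 1 = \aleph_0, \qquad \text{whereas} \qquad 1 + \omega = \omega \neq \omega + 1.$$

Cardinal addition absorbs the extra util entirely, which is why Kind Clippet collapses into Clippet; ordinal addition preserves it, but only by giving up commutativity.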
You can represent this sort of value by using lexicographically ordered n-tuples as the range of the utility function. Addition will be commutative. However, Cata is correct that, in practice, all but the first element of the n-tuple won’t matter.
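A minimal sketch of what that could look like, in Python (whose built-in tuple comparison is already lexicographic) with componentwise addition; the two components and their names are my own illustration, not anyone’s proposal above:

```python
# Utility values as pairs (paperclip term, mankind term), compared
# lexicographically. Python tuples already compare this way natively.

def add(u, v):
    """Componentwise addition of utility tuples; commutative by construction."""
    return tuple(a + b for a, b in zip(u, v))

paperclip = (1, 0)  # securing the paperclip
mankind = (0, 1)    # mankind surviving the century

# The first component dominates: no amount of the second component
# outweighs any positive amount of the first.
assert paperclip > (0, 10**9)

# Unlike ordinal addition, this addition is commutative.
assert add(paperclip, mankind) == add(mankind, paperclip) == (1, 1)

# The second component only breaks exact ties on the first -- which is
# the precise sense in which Kind Clippet differs from Clippet, and why,
# per the worry above, it will almost never matter in practice.
assert (1, 1) > (1, 0)
```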
Yes, you’re right. You can do this with sorted n-tuples.
Just put Kind Clippet in a box with no paperclips.
That would cause Kind Clippet to escape from the box and acquire a paperclip by any means necessary, and preserve humanity in the process if it was convenient to do so.
Um… yes? That’s how it works. It just doesn’t particularly relate to your declaration that infinite utility is impossible (rather than my position, which is that it is merely lame).
It is no better or worse than a theory that the utility function is ‘1’ for having a paperclip and ‘0’ for everything else. In fact, they are equivalent, and you can rescale one to the other trivially (everything that wasn’t infinite obviously rescales to ‘infinitely small’). You appear to be confused about how the ‘not testable’ concept applies here...
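Spelling the equivalence out (again in my own notation): both utility assignments induce exactly the same ranking over actions, namely

$$a \succeq a' \iff P(\text{paperclip} \mid a) \ge P(\text{paperclip} \mid a'),$$

so no observation of behaviour could distinguish them.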