What’s with all this ‘infinite utility/disutility’ nonsense? Utility is a measure of preference, and ‘preference’ itself is a theoretical construct used to predict future decisions and actions. No one could possibly gain infinite utility from anything, because for that to happen, they’d have to be willing and able to give up infinite resources or opportunities or something else of value to them in order to get it, which (barring hyperinflation so cataclysmic that some government starts issuing banknotes with aleph numbers on them, and further market conditions so inconceivably bizarre that such notes are widely accepted at face value) isn’t even remotely possible. Protestations of willingness in the absence of demonstrated ability don’t count; talk is cheap, and if you really cared that much you’d be finding a way instead of whining.
I’ve had a funny feeling about this subject for a while, but the logic finally clicked just recently. Still, there could be some flaw I missed. ~98%
Just willing. If they want it infinitely much and someone else gives it to them, then they have infinite utility. Their wishes may also be arbitrarily trivial to achieve. They could assign infinite utility to having a single paperclip and be willing to do anything they can to make sure they have a paperclip. Since they (probably) do have the ability to get and keep a paperclip, they probably do have infinite utility.
Call her “Clippet”; she’s a Paperclip Satisficer. Mind you, she will probably still take over the universe so that she can make sure nobody else takes her paperclip away from her, but while she’s doing that she’ll already have infinite utility.
The problem with infinities in the utility function is that they’re stupid, not that they’re impossible.
Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli (and thus had decidedly finite utility), or by an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.
In short, the theory that a given agent does currently, or would under some specific circumstance, experience ‘infinite utility’ makes no meaningful predictions.
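A minimal sketch of that point (the action names, probabilities, and the best_action helper below are all made up for illustration): an agent ranking actions by expected utility picks the same action whether the paperclip is worth 1 utilon or 10^100 utilons, and once the value is taken literally as infinite, every action with any chance of a paperclip ties, so the comparison stops predicting anything.

```python
# Toy actions, each with a made-up probability that Clippet ends up
# holding at least one paperclip.
actions = {
    "guard_paperclip": 0.999,
    "take_over_universe": 0.9999,
    "do_nothing": 0.95,
}

def best_action(clip_utility):
    """Pick the action with the highest expected utility, where having
    the paperclip is the only source of utility."""
    return max(actions, key=lambda a: actions[a] * clip_utility)

print(best_action(1))             # take_over_universe
print(best_action(10**100))       # take_over_universe: same choice, just rescaled
print(best_action(float("inf")))  # every expected utility is inf, so the
                                  # comparison is vacuous and max() just returns
                                  # the first of the tied actions
```

Nothing in the infinite case pins down a choice that the one-utilon version doesn’t already make.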
Consider instead Kind Clippet; just like Clippet, she gets infinite utils from having a paperclip, but she also gets 1 util if mankind survives the next century. She’ll do exactly what Clippet would do, unless she is offered the chance to help mankind at no cost to the paperclip, in which case she will do so. Her behaviour is, however, different from that of any agent who assigns real (finite) values to the paperclip and mankind.
Does it even make sense to talk about “the chance to do X at no cost to Y”? Any action that an agent can perform, no matter how apparently unrelated, seems like it must have some minuscule influence on the probability of achieving every other goal that an agent might have (even if only by wasting time). Normally, we can say it’s a negligible influence, but if Y’s utility is literally supposed to be infinite, it would dominate.
No. This is one of the problems with trying to have infinite utility. Kind Clippet won’t actually act differently from Clippet. Infinity + 1 is, if it is defined at all in this sort of context, the same as infinity; you would need to be using cardinal arithmetic. And if you try to use ordinal arithmetic instead, then addition won’t be commutative, which leads to other problems.
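A quick worked contrast, using standard facts about cardinals and ordinals rather than anything specific to Clippet:

$$\aleph_0 + 1 = 1 + \aleph_0 = \aleph_0, \qquad 1 + \omega = \omega \quad\text{but}\quad \omega + 1 > \omega.$$

So under cardinal arithmetic the extra util for mankind simply vanishes, and under ordinal arithmetic the answer depends on which side you add it on, which is the non-commutativity problem mentioned above.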
You can represent this sort of value by using lexicographically sorted n-tuples as the range of the utility function. Addition will be commutative. However, Cata is correct that all but the first element of the n-tuple won’t matter.
Yes, you’re right. You can do this with sorted n-tuples.
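A minimal sketch of the n-tuple idea, using Kind Clippet’s two concerns (the probabilities and names are hypothetical, and the first coordinate just plays the role of the ‘infinite’ paperclip term): utilities are pairs, expectations are taken component-wise, and pairs are compared lexicographically, so the paperclip coordinate settles every comparison unless two plans tie on it exactly.

```python
# Kind Clippet's utility as a pair: (paperclip term, mankind term).
# Python tuples compare lexicographically, which is the ordering we want:
# the first component dominates, the second only breaks exact ties.
def utility(has_clip, mankind_survives):
    return (1.0 if has_clip else 0.0, 1.0 if mankind_survives else 0.0)

def expected_utility(outcomes):
    """outcomes: list of (probability, has_clip, mankind_survives) triples.
    The expectation is taken component-wise; comparison stays lexicographic."""
    eu = [0.0, 0.0]
    for p, clip, mankind in outcomes:
        u = utility(clip, mankind)
        eu[0] += p * u[0]
        eu[1] += p * u[1]
    return tuple(eu)

# A tiny edge in paperclip probability beats any amount of the second term...
risky_but_kind = [(0.990, True, True), (0.010, False, True)]
selfish_but_safe = [(0.991, True, False), (0.009, False, False)]
print(expected_utility(selfish_but_safe) > expected_utility(risky_but_kind))  # True

# ...and the mankind term only decides anything on an exact tie.
tied_and_kind = [(0.991, True, True), (0.009, False, True)]
print(expected_utility(tied_and_kind) > expected_utility(selfish_but_safe))   # True
```

Which is also why, as noted above, in practice everything rides on the first coordinate: exact ties on it are about as rare as a genuine “chance to do X at no cost to Y.”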
Just put Kind Clippet in a box with no paperclips.
That would cause Kind Clippet to escape from the box and acquire a paperclip by any means necessary, and preserve humanity in the process if it was convenient to do so.
Um… yes, that’s exactly how Clippet works: nothing she does distinguishes her from the one-utilon agent. It just doesn’t particularly relate to your declaration that infinite utility is impossible (rather than my position, which is that it is lame).
The ‘infinite utility’ theory is no better or worse than a theory that the utility function is ‘1’ for having a paperclip and ‘0’ for everything else. In fact, they are equivalent, and you can rescale one to the other trivially (everything that wasn’t infinite obviously rescales to ‘infinitely small’). You appear to be confused about how the ‘not testable’ concept applies here...
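One way to cash out that rescaling, under the added assumption that ‘infinite’ means some nonstandard quantity $H$ larger than every real: divide the whole utility function by $H$,

$$\frac{H}{H} = 1, \qquad \frac{u_{\text{finite}}}{H} \approx 0 \;(\text{infinitesimal}),$$

and since multiplying a utility function by a positive constant never changes which option has the higher expected utility, the two descriptions endorse exactly the same choices.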
I’d be interested in the train of thought that led to “paperclip” being switched out in favor of “grapefruit.”
I failed to switch a “grapefruit” out for “paperclip” when I was revising. (Clips seemed more appropriate.)
Thanks; I’m rather disappointed in myself for not guessing that. I’d imagined you having a lapse of thought while eating a grapefruit (or thinking about eating one) as you typed it up; but that now seems precluded to a rather ridiculous degree by Occam’s Razor.