Clippet is, at any given time, applying vast but finite resources to the problem of getting and keeping that one paperclip. There is no action Clippet would take that would not also be taken by an entity that derived one utilon from the presence of at least one paperclip and zero utilons from any other possible stimuli, and thus had decidedly finite utility, or an entity that simply assigned some factor a utility value vastly greater than the sum of all other possible factors.
Um… yes? That’s how it works. It just doesn’t particularly relate to your declaration that infinite utility is impossible (rather than my position—that it is lame).
In short, the theory that a given agent is currently experiencing, or would under some specific circumstance experience, ‘infinite utility’ makes no meaningful predictions.
It is no better or worse than a theory in which the utility function is ‘1’ for having a paperclip and ‘0’ for everything else. In fact, they are equivalent: one rescales to the other trivially (everything that wasn’t infinite obviously rescales to ‘infinitely small’). You appear to be confused about how the ‘not testable’ concept applies here...
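To make the rescaling point concrete, here is a minimal sketch, assuming straightforward expected-utility maximization; the action names, probabilities, and utility numbers are purely illustrative, not drawn from the exchange above. It shows that an agent whose paperclip term dwarfs every other consideration picks the same actions as the ‘1 for a paperclip, 0 for everything else’ agent, so no observation of behaviour separates the two utility functions.

```python
def best_action(actions, paperclip_utility, other_utility_scale=1.0):
    """Pick the action with the highest expected utility.

    Each action is (name, p_paperclip, other_value), where other_value is the
    combined worth of everything besides the paperclip on a unit scale.
    """
    def expected_utility(action):
        _, p_clip, other_value = action
        return p_clip * paperclip_utility + other_value * other_utility_scale

    return max(actions, key=expected_utility)[0]


# Hypothetical choices facing the agent.
actions = [
    ("guard the paperclip", 0.99, 0.1),
    ("chase side projects", 0.60, 0.9),
]

# Bounded agent: 1 utilon for the paperclip, 0 for anything else.
print(best_action(actions, paperclip_utility=1.0, other_utility_scale=0.0))

# 'Effectively infinite' agent: the paperclip term dwarfs all other factors,
# which is the same as rescaling everything non-paperclip toward zero.
print(best_action(actions, paperclip_utility=1e12))

# Both print "guard the paperclip": the decisions coincide, which is why the
# 'infinite utility' theory adds no testable content over the 1/0 theory.
```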