This does raise an interesting issue: if I’m a strictly selfish utilitarian, shouldn’t I want my utility function to be whichever one would attain the highest expected utility?
This is a particular form of wireheading.
I’d say it’s rather a form of conceptual confusion: you can’t change a concept (“change” is itself a “timeful” concept, meaningful only as a property within structures that are processes in the appropriate sense). But it’s plausible that creating agents with slightly different explicit preference will result in a better outcome, all else equal, than giving those agents your own preference. Of course, you’d probably need to be a superintelligence to correctly make decisions like this, at which point creating agents with a given preference might cease to be a natural concept.
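A toy sketch of the point, not from the original comment and with all names hypothetical: if you ever ask “which utility function should my successor have?”, the candidates are scored by the outcomes they induce as judged by your *current* preference, so the question never involves changing the standard of evaluation itself.

```python
# Toy illustration (my own, hedged): choosing which explicit utility function
# to give a created agent, evaluated by the creator's own utility function.

def outcome_produced_by(successor_utility, actions):
    """The action a successor maximizing `successor_utility` would pick."""
    return max(actions, key=successor_utility)

def best_successor_utility(my_utility, candidate_utilities, actions):
    """Pick the candidate whose induced behaviour scores best by my_utility."""
    return max(
        candidate_utilities,
        key=lambda u: my_utility(outcome_produced_by(u, actions)),
    )

if __name__ == "__main__":
    # Hypothetical setup: outcomes are numbers, my utility is the value itself;
    # one candidate is identical to mine, the other is a dampened version.
    actions = [1.0, 3.0, 2.5]
    mine = lambda x: x
    candidates = [lambda x: x, lambda x: x ** 0.5]
    chosen = best_successor_utility(mine, candidates, actions)
    # Here either candidate induces the same best action, so by my own lights
    # the "slightly different" preference is no worse than a copy of mine.
    print(chosen(3.0))
```

The point of the sketch is only that `my_utility` never leaves the picture: a “better” successor preference is better according to the preference you already have, which is why the idea of wanting a different utility function for yourself dissolves into confusion rather than wireheading.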