Going into the details of Fun Theory helps you see that eudaimonia is actually complicated—that there are a lot of properties necessary for a mind to lead a worthwhile existence. Which helps you appreciate just how worthless a galaxy would end up looking (with extremely high probability) if it was optimized by something with a utility function rolled up at random.
Something with a utility function “rolled up at random” typically does not “optimise the universe”; rather, it dies out. Of those agents whose utility functions do lead them to spread throughout the universe, it is not remotely obvious that most are “worthless” or “uninteresting”, unless you choose to define the term “worth” so that this is true, for some reason.
Indeed, rather the opposite: since such agents would construct galactic-scale civilisations, they would probably be highly interesting and valuable instances of living systems in the universal community.
Complex challenges? Novelty? Individualism? Self-awareness? Experienced happiness? A paperclip maximizer cares not about these things.
Sure it would: as proximate goals. Animals are expected-gene-fitness maximisers, and expected gene fitness is not somehow intrinsically more humane than expected paperclip count. Both have about the same chance of leading to the things you mentioned arising as proximate goals.
Novelty-seeking and self-awareness are things you get out of any sufficiently powerful optimisation process, just as such processes all eventually develop fusion, space travel, nanotechnology, and so on.