The very definition of “rational utility maximizer” implies that it will try to maximize utilons as fast and as efficiently as possible.
The problem is that “utility” has to be defined. Maximizing expected utility does not by itself imply any particular actions, efficiency or economic behavior, or a drive to protect yourself. You can rationally maximize paperclips without protecting yourself if self-protection is not part of your goal parameters.
I know what kind of agent you assume. I am just pointing out what else needs to be true in conjunction to make the overall premise true. Maximizing expected utility does not entail the properties you assume. You can also assign utility to maximizing paperclips for as long as nothing turns you off, while assigning no utility to avoiding being turned off. If an AI is not explicitly programmed to care about that, then it won’t.
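To make the distinction concrete, here is a minimal sketch (all names and numbers are hypothetical, not anyone’s actual proposal): an expected-utility maximizer whose utility function counts only paperclips. Since nothing in the objective rewards staying switched on, “avoid shutdown” never wins the comparison unless it also happens to produce more paperclips.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Outcome:
    paperclips: int       # paperclips produced in this outcome
    agent_survives: bool  # whether the agent is still running afterwards

def utility(outcome: Outcome) -> float:
    # Only paperclips are part of the goal parameters.
    # agent_survives is deliberately ignored.
    return float(outcome.paperclips)

def expected_utility(lottery: list[tuple[float, Outcome]]) -> float:
    # lottery: list of (probability, outcome) pairs
    return sum(p * utility(o) for p, o in lottery)

def choose(actions: dict[str, list[tuple[float, Outcome]]]) -> str:
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    # Resisting shutdown costs time that could have gone into making clips.
    "resist_shutdown": [(1.0, Outcome(paperclips=8, agent_survives=True))],
    "keep_making_clips": [(0.5, Outcome(paperclips=12, agent_survives=False)),
                          (0.5, Outcome(paperclips=12, agent_survives=True))],
}

print(choose(actions))  # -> "keep_making_clips"; survival never enters the comparison
```

The point of the toy example is only that self-preservation shows up in the agent’s choices when it is in the utility function (or instrumentally useful for it), not as an automatic consequence of expected utility maximization.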