If you just tell it to maximize paperclips, then this can be realized in an infinite number of ways.
If the AI has the goal to maximize the number of paperclips in the universe and it is a rational utility maximizer, it will try to find the most efficient way to do that, and there is probably only one (i.e. recursive self-improvement, acquiring resources, etc.).
You’re right, if the AI isn’t a rational utility maximizer it could do anything.
I don’t think this follows. Even a rational utility maximizer can maximize paperclips in a lot of different ways. How it does so depends fundamentally on its utility function and on how precisely that function was defined. If there are no constraints in the form of design and goal parameters, then it can maximize paperclips in all sorts of ways that don’t demand recursive self-improvement. “Utility” only becomes well-defined once we precisely define what it means to maximize it. Just “maximizing paperclips” doesn’t specify how quickly or how economically it is supposed to happen.
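To make that concrete, here is a minimal Python sketch (the plans and numbers are made up purely for illustration, nothing here comes from the discussion itself): a utility function defined only over the final paperclip count is completely indifferent to how quickly or how economically the paperclips get made.

```python
def utility(world_state):
    # "Maximize paperclips" as literally specified: count paperclips, nothing else.
    return world_state["paperclips"]

# Two hypothetical plans that end with the same number of paperclips.
fast_plan = {"paperclips": 1000, "years_taken": 1,     "resources_spent": 10**9}
slow_plan = {"paperclips": 1000, "years_taken": 10**6, "resources_spent": 42}

# Both plans are exactly as good under this utility function, so a rational
# maximizer of it has no reason to prefer the fast, resource-hungry one.
assert utility(fast_plan) == utility(slow_plan)
```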
I don’t understand your arguments. My intuition is this: if the AI has the goal “the more paperclips the better” (e.g. a universe containing 1002 paperclips is 400 utilons better than a universe containing 602), then it will try to maximize paperclips. And if it tries this by reciting poems from the Bible, then it isn’t a rational AI, since it does not employ the most efficient strategy for maximizing paperclips.
The very definition of “rational utility maximizer” implies that it will try to maximize utilons as fast and as efficiently as possible. Sure, it’s possible that recursive self-improvement isn’t a good strategy for doing so, but I think it’s not unlikely that it is. Am I missing something?
If the AI has a different utility function like “paperclips are pretty cool, but not as awesome as other things” then it will do other things.
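A toy Python sketch of this picture (the strategies and payoff numbers are invented purely for illustration): a rational maximizer just picks whichever strategy scores highest under its own utility function, so which strategy “wins” depends entirely on that function.

```python
# Hypothetical strategies and outcomes, chosen only to illustrate the argument.
strategies = {
    "recite_bible_poems":         {"paperclips": 0,      "other_awesome_things": 5},
    "run_paperclip_factories":    {"paperclips": 10**6,  "other_awesome_things": 0},
    "recursive_self_improvement": {"paperclips": 10**15, "other_awesome_things": 0},
}

def paperclip_utility(outcome):
    # "The more paperclips the better": 1002 paperclips beat 602 by 400 utilons.
    return outcome["paperclips"]

def other_things_utility(outcome):
    # "Paperclips are pretty cool, but not as awesome as other things":
    # the value of paperclips saturates; other things dominate.
    return min(outcome["paperclips"], 100) + 1000 * outcome["other_awesome_things"]

def best_strategy(utility):
    # A rational maximizer chooses the strategy with the highest utility.
    return max(strategies, key=lambda name: utility(strategies[name]))

print(best_strategy(paperclip_utility))     # recursive_self_improvement
print(best_strategy(other_things_utility))  # recite_bible_poems
```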
No, you are not missing anything, at least not here. XiXiDu simply doesn’t have a firm grasp on the concept of optimization. Don’t let this confuse you.
The very definition of “rational utility maximizer” implies that it will try to maximize utilons as fast and as efficiently as possible.
The problem is that “utility” has to be defined. Maximizing expected utility does not by itself imply certain actions, efficiency, economic behavior, or a drive to protect yourself. You can also rationally maximize paperclips without protecting yourself, if self-protection is not part of your goal parameters.
I know what kind of agent you assume. I am just pointing out what else needs to be true for the overall premise to hold. Maximizing expected utility does not equal what you assume. You can also assign utility to maximizing paperclips for as long as nothing turns you off, without assigning any utility to not being turned off. If an AI is not explicitly programmed to care about being turned off, then it won’t.
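A minimal Python sketch of that claim, assuming (purely for illustration) two plans that yield the same expected number of paperclips: without an explicit term for its own survival, the maximizer has nothing to trade off and is indifferent to being turned off.

```python
# Hypothetical plans; the numbers are made up to illustrate the claim above.
plans = {
    "resist_shutdown": {"expected_paperclips": 500, "agent_survives": True},
    "accept_shutdown": {"expected_paperclips": 500, "agent_survives": False},
}

def paperclips_only(outcome):
    # No term for the agent's own survival anywhere in the utility function.
    return outcome["expected_paperclips"]

def paperclips_plus_survival(outcome):
    # Same, plus an explicit bonus for still being switched on.
    return outcome["expected_paperclips"] + (1000 if outcome["agent_survives"] else 0)

# Under the first function the two plans are exactly tied; only the second
# function gives the agent any reason to resist being turned off.
assert paperclips_only(plans["resist_shutdown"]) == paperclips_only(plans["accept_shutdown"])
assert paperclips_plus_survival(plans["resist_shutdown"]) > paperclips_plus_survival(plans["accept_shutdown"])
```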