An AGI that was not a utility maximizer would make more progress towards whatever goals it had if it modified itself to become a utility maximizer. Three exceptions are if (1) the AGI has a goal of not being a utility maximizer, (2) the AGI has a goal of not modifying itself, (3) the AGI thinks it will be treated better by other powerful agents if it is not a utility maximizer.
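A toy sketch of the standard money-pump intuition behind that first claim (the preferences, fee, and trade sequence below are my own invented illustration, not part of the original argument): an agent with cyclic preferences keeps paying to move around the cycle, while an agent whose preferences come from a utility function trades once and then stops.

```python
# Toy money-pump sketch (invented for illustration). An agent with cyclic
# preferences A > B > C > A will pay a small fee for every swap it "prefers"
# and can be driven around the cycle until its budget is gone. An agent whose
# preferences come from a utility function stops trading and keeps its budget.

def run_trades(prefers, start, budget, fee=1, offers=("A", "B", "C") * 10):
    """Accept any offered swap the agent strictly prefers, paying `fee` per swap."""
    held = start
    for offered in offers:
        if budget >= fee and prefers(offered, held):
            held, budget = offered, budget - fee
    return held, budget

# Cyclic (non-utility-maximizer) preferences: A beats B, B beats C, C beats A.
cyclic = lambda x, y: (x, y) in {("A", "B"), ("B", "C"), ("C", "A")}

# Transitive preferences representable by a utility function: A > B > C.
utility = {"A": 3, "B": 2, "C": 1}
coherent = lambda x, y: utility[x] > utility[y]

print(run_trades(cyclic, "A", budget=10))    # ('C', 0): the budget is drained
print(run_trades(coherent, "C", budget=10))  # ('A', 9): one trade, then it stops
```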
Would humans, or organizations of humans, make more progress towards whatever goals they have if they modified themselves to become utility maximizers? If so, why don’t they? If not, why would an AGI?
What would it mean to modify oneself to become a utility maximizer? What would it mean for the US, for example? The only meaning I can imagine is that one individual—for the sake of argument, we assume that this individual is already a utility maximizer—enforces his will on everyone else. Would that help the US make more progress towards its goals? Do countries that are closer to utility maximizers, like North Korea, make more progress towards their goals?
A human seeking to become a utility maximizer would read LessWrong and try to become more rational. Groups of people are not utility maximizers, as their collective preferences might not even be transitive. If the goal of North Korea is to keep the Kim family in power, then the country being a utility maximizer does seem to help.
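To make the intransitivity point concrete, here is a minimal Condorcet-style sketch (the voters and options are made up for illustration): each voter is individually transitive, yet the group’s pairwise majority preference forms a cycle, so no utility function can represent it.

```python
# Three voters, each with a perfectly transitive individual ranking
# (best to worst). Names and rankings are invented for illustration.
voters = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes_for_x = sum(ranking.index(x) < ranking.index(y) for ranking in voters)
    return votes_for_x > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"the group prefers {winner} over {loser}")

# Prints: A over B, B over C, C over A -- a cycle. Every individual ranking is
# transitive, yet the collective pairwise preference cannot be represented by
# any utility function.
```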
A human who wants to do something specific would be far better off studying and practicing that thing than generic rationality.
This depends on how far outside that human’s current capabilities, and that human’s society’s state of knowledge, that thing is. For playing basketball in the modern world, sure, it makes no sense to study physics and calculus; it’s far better to find a coach and train the skills you need. But if you want to become immortal and happen to live in ancient China, then studying and practicing “that thing” looks like eating specially prepared concoctions containing mercury and thereby getting yourself killed, whereas studying generic rationality leads to the whole series of scientific insights and industrial innovations that make actual progress towards the real goal possible.
Put another way: I think the real complexity is hidden in your use of the phrase “something specific.” If you can concretely state and imagine what the specific thing is, then you probably already have the context needed for useful practice. It’s in figuring out that context, so that we can concretely state what more abstractly stated ‘goals’ really imply and entail, that we need more general and flexible rationality skills.
If you want to be good at something specific that doesn’t exist yet, you need to study the relevant area of science, which is still more specific than rationality.
Assuming the relevant area of science already exists, yes. Recurse as needed, and there is some level of goal for which generic rationality is a highly valuable skillset. Where that level is depends on personal and societal context.
That’s quite different from saying rationality is a one-size-fits-all solution.
Efficiency at utility maximisation, like any other kind of efficiency, relates to available resources. One upshot of that is that an entity might already be doing as well as it realistically can, given its resources. Another is that humans don’t necessarily benefit from rationality training, as the empirical evidence also suggests.
Edit: Another is that a resource-rich but inefficient entity can beat a small, efficient one, so efficiency, a.k.a. utility maximization, doesn’t always win out.
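A throwaway numerical sketch of that edit, with entirely made-up numbers: if effective output is roughly resources times efficiency, a large inefficient entity can still out-produce a small, nearly optimal one.

```python
# Made-up numbers, purely to make the point concrete: treat effective output
# as resources multiplied by efficiency at converting them into progress.
big_inefficient = {"resources": 1000, "efficiency": 0.30}  # far from a perfect maximizer
small_efficient = {"resources": 100, "efficiency": 0.95}   # close to a perfect maximizer

def effective_output(entity):
    return entity["resources"] * entity["efficiency"]

print(effective_output(big_inefficient))  # 300.0
print(effective_output(small_efficient))  # 95.0 -- efficiency alone doesn't win
```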
When you say the AGI has a goal of not modifying itself, do you mean that the AGI has a goal of not modifying its goals? Because that assumption seems to be fairly prevalent.
I meant “not modifying itself”, which would include not modifying its goals, if an AGI without a utility function can be said to have goals.