Its current utility function just returns the number of paperclips, so the utilities of the outcomes are 10^30 and 10^32. What choice would a utility maximizer (which our paperclipper is) make?
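Spelled out as a toy calculation (a sketch only; which number goes with which choice isn't stated here, so the option labels are placeholders):

```python
# Toy sketch of the decision above. The option labels are placeholders;
# all the maximizer cares about is which outcome contains more paperclips.

def utility(outcome):
    """The paperclipper's current utility function: count paperclips."""
    return outcome["paperclips"]

outcomes = {
    "option_A": {"paperclips": 10**30},
    "option_B": {"paperclips": 10**32},
}

# A utility maximizer picks the action whose outcome scores highest.
best_choice = max(outcomes, key=lambda action: utility(outcomes[action]))
print(best_choice)  # -> "option_B", since 10**32 > 10**30
```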
Whichever choice gets it more paperclips, of course. I am not arguing with that. However, IMO this does not show that goal stability is a good idea; it only shows that, if goal stability is one of an agent’s goals, it will strive to maximize its other goals. However, if the paperclip maximizer is self-aware enough; and if it doesn’t have a terminal goal that tells it, “never change your terminal goals”, then I still don’t see why it would choose to remain a paperclip maximizer forever. It’s hard for me, as a human, to imagine an agent that behaves that way; but then, I actually do (probably) have a terminal goal that says, “don’t change your terminal goals”.
Ok we have some major confusion here. I just provided a mathematical example for why it will be generally a bad idea to change your utility function, even without any explicit term against it (the utility function was purely over number of paperclips). You accepted that this is a good argument, and yet here you are saying you don’t see why it ought to stay a paperclip maximizer, when I just showed you why (because that’s what produces the most paperclips).
My best guess is that you are accidentally smuggling some moral uncertainty in through the “self-aware” property, which seems to have some anthropomorphic connotations in your mind. Try tabooing “self-aware”; maybe that will help?
Either that or you haven’t quite grasped the concept of what terminal goals look like from the inside. I suspect that you are thinking that you can evaluate a terminal goal against some higher criterion (“I seem to be a paperclip maximizer, is that what I really want to be?”). The terminal goal is the higher criterion, by definition. Maybe the source of confusion is that people sometimes say stupid things like “I have a terminal value for X” where X is something that you might, on reflection, decide is not the best thing all the time (e.g. X = “technological progress” or something). Those things are not terminal goals; they are instrumental goals masquerading as terminal goals for rhetorical purposes and/or because humans are not really all that self-aware.
Either that or I am totally misunderstanding you or the theory, and have totally missed something. Whatever it is, I notice that I am confused.
Tabooing “self-aware”
I am thinking of this state of mind where there is no dichotomy between “expert at” and “expert on”. All algorithms, goal structures, and hardware are understood completely to the point of being able to design them from scratch. The program matches the source code, and is able to produce the source code. The closed loop. Understanding the self and the self’s workings as another feature of the environment. It is hard to communicate this definition, but as a pointer to a useful region of conceptspace, do you understand what I am getting at?
“Self-awareness” is the extent to which the above concept is met. Mice are not really self-aware at all. Humans are just barely what you might consider self-aware, but only in a very limited sense; a superintelligence would converge on being maximally self-aware.
I don’t mean that there is some mysterious ghost in the machine that can have moral responsibility and make moral judgements and whatnot.
Oddly enough, I meant pretty much the same thing you did: a perfectly self-aware agent understands its own implementation so well that it would be able to implement it from scratch. I find your definition very clear. But I’ll taboo the term for now.
Ok we have some major confusion here. I just provided a mathematical example for why it will be generally a bad idea to change your utility function...
I think you have provided an example for why, given a utility function F0(action), the return value of F0(change F0 to F1) is very low. However, F1(change F0 to F1) is probably quite high. I argue that an agent who can examine its own implementation down to minute details (in a way that we humans cannot) would be able to compare various utility functions, and then pick the one that gives it the most utilons (or however you spell them) given the physical constraints it has to work with. We humans cannot do this because (a) we can’t introspect nearly as well, (b) we can’t change our utility functions even if we wanted to, and (c) one of our terminal goals is, “never change your utility function”. A non-human agent would not necessarily possess such a goal (though it could).
Typically, the reason you wouldn’t change your utility function is that you’re not trying to “get utilons”, you’re trying to maximize F0 (for example), and that won’t happen if you change yourself into something that maximizes a different function.
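A sketch of where the two readings diverge (the numbers, and “jellyfish” as a stand-in for what F1 cares about, are made up for illustration):

```python
# F0 is the current utility function (paperclips only); F1 is a candidate
# replacement (here: jellyfish only). All numbers are illustrative.

def F0(world):
    return world["paperclips"]

def F1(world):
    return world["jellyfish"]

# Projected futures: keeping F0 means lots of paperclips; an agent that has
# swapped to F1 stops making paperclips and makes jellyfish instead.
world_if_keep_F0 = {"paperclips": 10**32, "jellyfish": 0}
world_if_swap_to_F1 = {"paperclips": 10**3, "jellyfish": 10**32}

# Evaluated by F1, the swap looks wonderful: F1(change F0 to F1) is high.
assert F1(world_if_swap_to_F1) > F1(world_if_keep_F0)

# But an F0-maximizer ranks its options with the function it has *now*:
options = {"keep F0": world_if_keep_F0, "swap to F1": world_if_swap_to_F1}
choice = max(options, key=lambda action: F0(options[action]))
print(choice)  # -> "keep F0"; by F0's lights the swap throws away ~10**32 paperclips
```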
Ok, let’s say you’re a super-smart AI researcher who is evaluating the functionality of two prospective AI agents, each running in its own simulation (naturally, they don’t know that they’re running in a simulation, but believe that their worlds are fully real).
Agent A cares primarily about paperclips; it spends all its time building paperclips, figuring out ways to make more paperclips faster, etc. Agent B cares about a variety of things, such as exploration, or jellyfish, or black holes or whatever—but not about paperclips. You can see the utility functions for both agents, and you could evaluate them on your calculator given a variety of projected scenarios.
At this point, would you—the AI researcher—be able to tell which agent was happier, on average? If not, is it because you lack some piece of information, or because the two agents cannot be compared to each other in any meaningful way, or for some other reason?
Huh. It’s not clear to me that they’d have something equivalent to happiness, but if they did I might be able to tell. Even if they did, though, they wouldn’t necessarily care about happiness, unless we really screwed up in designing it (like evolution did). Even if it was some sort of direct measure of utility, it’d only be a valuable metric insofar as it reflected F0.
It seems somewhat arbitrary to pick “maximize the function stored in this location” as the “real” fundamental value of the AI. A proper utility maximizer would have “maximize this specific function”, or something. I mean, you could just as easily say that the AI would reason “hey, it’s tough to maximize utility functions, I might as well just switch from caring about utility to caring about nothing, that’d be pretty easy to deal with.”
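The contrast being drawn here, as a sketch (both “designs” and all numbers are illustrative, not anyone’s actual proposal):

```python
# Two ways to cash out "utility maximizer". Everything here is illustrative.

def F0(world):                 # "maximize this specific function": paperclips
    return world["paperclips"]

def F_trivial(world):          # a function that is trivially maxed out
    return float("inf")

world_if_keep = {"paperclips": 10**32}   # keep F0 and keep clipping
world_if_wirehead = {"paperclips": 0}    # swap to F_trivial, stop clipping

# Design 1: futures are always scored by the specific function F0,
# regardless of what function the agent would be running afterwards.
def design1_score(world, function_afterwards):
    return F0(world)

# Design 2: "maximize whatever function is stored in my utility slot",
# so futures are scored by the function the agent would have *then*.
def design2_score(world, function_afterwards):
    return function_afterwards(world)

# Design 1 keeps its goal; Design 2 happily swaps to the trivially easy function.
print(design1_score(world_if_keep, F0) > design1_score(world_if_wirehead, F_trivial))  # True
print(design2_score(world_if_wirehead, F_trivial) > design2_score(world_if_keep, F0))  # True
```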