The trouble is basically: there is a good chance we can build systems that—in practice—do self-modification quite well, while not yet understanding any formalism that can capture notions like value stability. For example, see evolution. So if we want to minimize the chance of doing that, one thing to do is to first develop a formalism in which goal stability makes sense.
(Goal stability is one kind of desirable notion out of many. The general principle is: if you don’t understand how or why something works, it’s reasonably likely to not do quite what you want it to do. If you are trying to build an X and want to make sure you understand how it works, a sensible first step is to try and develop an account of how X could exist at all.)
Evolution isn’t an AGI. If we’re talking not about “systems that do self-modification” in general but about an “AGI that could do self-modification”, does it need to initially contain an additional, explicit formalism that captures notions like value stability? Failing to have such a formalism, would it then not care what happens to its own utility function?
Randomizing itself in unpredictable ways would be counter to the AGI’s current utility function. Any of the actions the AI takes is by definition intended to serve its current utility function—even if that intent seems delayed, e.g. when the action is taking measurements in order to build a more accurate model, or self-modifying to gain more future optimizing power.
Since self-modifying in an unpredictable way is an action that strictly jeopardizes the AGI’s future ability to maximize its current utility function (the only basis for any decision, including whether the AGI will self-modify or not), the safeguard against unpredictable self-modification would be inherently ingrained in the AGI’s desire to only ever maximize its current utility function.
Conclusion: The formalism that saves the AGI from unwanted self-modification is its desire to fulfill its current utility function. The AGI would be motivated to develop formalisms that allow it to self-modify so as to better optimize its current utility function in the future, since that would maximize its current utility function better (what a one-trick pony!).
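To put that argument in concrete terms, here is a minimal sketch in Python, with entirely made-up numbers and hypothetical names like `current_utility` and `expected_value`, of an agent that evaluates every candidate action, including self-modifications, solely by how well the resulting successor serves its current utility function:

```python
import random

# Toy sketch, not a real agent design: an expected-utility maximizer scores
# every candidate action, including self-modifications, against its CURRENT
# utility function. All numbers and names here are made up for illustration.

def current_utility(goal_alignment, optimizing_power):
    # Value, by the agent's current goals, of becoming a successor with the
    # given goal alignment (0..1) and optimizing power.
    return goal_alignment * optimizing_power

def expected_value(action, n_samples=10_000):
    # Monte Carlo estimate of an action's value under the current utility
    # function; each sample is one possible successor agent.
    samples = (current_utility(*action()) for _ in range(n_samples))
    return sum(samples) / n_samples

# Candidate actions, each returning (successor goal alignment, power).
def do_nothing():
    return 1.0, 1.0                      # goals preserved, power unchanged

def random_self_modification():
    return random.random(), 1.8          # more power, but goals randomized

def verified_self_modification():
    return 1.0, 1.5                      # goals provably preserved, some gain

for name, action in [("do nothing", do_nothing),
                     ("random self-modification", random_self_modification),
                     ("verified self-modification", verified_self_modification)]:
    print(f"{name:>27}: EV ~ {expected_value(action):.2f}")

# Unpredictable self-modification loses despite offering more raw power,
# because the current utility function is the only yardstick the agent uses.
```

Of course, the sketch assumes the agent can actually estimate what a given self-modification would do to it, which is exactly the part no one currently knows how to formalize.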
> Since self-modifying in an unpredictable way is an action that strictly jeopardizes the AGI’s future ability to maximize its current utility function (the only basis for any decision, including whether the AGI will self-modify or not), the safeguard against unpredictable self-modification would be inherently ingrained in the AGI’s desire to only ever maximize its current utility function.
I understand the point you are making about value stability.
With our current understanding, an AI with this architecture would not do anything productive. The concern isn’t that an AI with this architecture would do something bad, it’s that (in light of the fact that it would not do anything productive) you wouldn’t build it. Instead you would build something different; quite possibly something you understand less well and whose good behavior is more of an empirical regularity (and potentially more unstable). Humans as the output of evolution are the prototypical example of this, though many human artifacts have the same character to varying degrees (programs, organizations, cities, economies).
> With our current understanding, an AI with this architecture would not do anything productive.
Is that because any non-trivial action could carry some chance of changing the AGI, and thus the AGI wouldn’t dare do anything at all? (If (false), disregard the following. Return 0;).
If (true), with goal stability being a paramount invariant, would you say that the AGI needs to extrapolate the effect any action would have on itself, before executing it? As in “type ‘hi’” or “buy an apple” being preceded by “prove this action maintains the invariant ‘goal stability’”.
It seems like such an architecture wouldn’t do much of anything either, combing through its own code before every action. (Edit: And competing teams would be quick to modify the AGI so that it checks less.)
If you say that not every action necessitates proving the invariant over all of its code, then an AI without a way of proving that actions are non-invariant-threatening could still take any action that doesn’t trigger a call to the (non-existent) “prove this action isn’t self-modifying in a goal-shifting way” check.
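A toy sketch of the two designs being contrasted here, under the (big) assumption of a hypothetical `proves_goal_stability` verifier that nobody knows how to build, might look like this:

```python
# Hypothetical sketch of the two designs discussed above, assuming a
# `proves_goal_stability` verifier that nobody currently knows how to build
# (so it is a stub here). Names and structure are illustrative only.

class GoalStabilityError(Exception):
    pass

def proves_goal_stability(action, agent_source):
    # Placeholder for a proof search that the action leaves the agent's goals
    # intact. Always failing models an agent with no usable formalism.
    return False

def execute_checked(action, agent_source):
    # Every action, even "type 'hi'" or "buy an apple", must first be proven
    # goal-preserving; with no usable formalism, nothing ever runs.
    if not proves_goal_stability(action, agent_source):
        raise GoalStabilityError(f"cannot prove {action!r} preserves goals")
    return action()

def execute_less_checked(action, agent_source, flagged_as_self_modifying):
    # The "checks less" variant a competing team might ship: only actions
    # flagged as self-modification face the (nonexistent) prover, so anything
    # that merely happens to shift goals slips through unexamined.
    if flagged_as_self_modifying and not proves_goal_stability(action, agent_source):
        raise GoalStabilityError(f"cannot prove {action!r} preserves goals")
    return action()
```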
> Is that because any non-trivial action could carry some chance of changing the AGI, and thus the AGI wouldn’t dare do anything at all? (If (false), disregard the following. Return 0;).
That, or it takes actions that change itself without caring that they would make it worse, because it doesn’t know that its current algorithms are worth preserving. Your scenario is what might happen if someone notices this problem and tries to fix it by telling the AI never to modify itself, depending on how exactly they formalize ‘never modify itself’.
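A minimal sketch of that first failure mode (again with made-up names and numbers) is an agent whose utility function looks only at external outcomes, so the fact that an action corrupts its own goals never enters the comparison:

```python
# Toy sketch of this failure mode: a utility function that scores only the
# immediate external payoff, with no term for what happens to the agent's own
# algorithms. All names and numbers are made up.

def myopic_value(outcome):
    external_payoff, corrupts_own_goals = outcome
    return external_payoff          # the corruption flag is never consulted

candidate_outcomes = {
    "safe but slower plan": (1.0, False),
    "faster plan that overwrites part of the agent": (1.2, True),
}

best = max(candidate_outcomes, key=lambda name: myopic_value(candidate_outcomes[name]))
print(best)  # picks the overwriting plan: nothing penalizes damage to its goals
```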