With our current understanding, an AI with this architecture would not do anything productive.
Is that because any non-trivial action could risk changing the AGI, and thus the AGI wouldn’t dare do anything at all? (If false, disregard the following and return 0.)
If (true), with goal stability being a paramount invariant, would you say that the AGI needs to extrapolate the effect any action would have on itself, before executing it? As in “type ‘hi’” or “buy an apple” being preceded by “prove this action maintains the invariant ‘goal stability’”.
It seems like such an architecture wouldn’t do much of anything either, since it would have to comb through its own code before every action. (Edit: And competing teams would be quick to modify the AGI so that it checks less.)
If you say that not every action necessitates proving the invariant over all of its code, then an AI with no way of proving actions to be non-invariant-threatening could take any action that doesn’t trigger a call to the (nonexistent) “prove this action isn’t self-modifying in a goal-shifting way” check.
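To make the architecture I have in mind concrete, here is a minimal sketch (purely illustrative; every name is hypothetical, and `prove_invariant` is just a stand-in for the missing proof procedure) of an agent that must prove goal stability before every action:

```python
# Minimal sketch (hypothetical names throughout) of the architecture described
# above: every candidate action must first be proven to preserve the
# goal-stability invariant before it is executed.

from typing import Callable, Optional


class Action:
    """A candidate action, e.g. type 'hi' or buy an apple."""

    def __init__(self, name: str, execute: Callable[[], None]):
        self.name = name
        self.execute = execute


class GoalStableAgent:
    def __init__(self, prove_invariant: Optional[Callable[[Action], bool]] = None):
        # prove_invariant stands in for the (so far nonexistent) procedure
        # "prove this action isn't self-modifying in a goal-shifting way".
        self.prove_invariant = prove_invariant

    def act(self, action: Action) -> bool:
        if self.prove_invariant is None:
            # No prover exists: under a strict reading the agent refuses
            # every action, which is the "wouldn't dare do anything" case.
            return False
        if not self.prove_invariant(action):
            # Proof failed or wasn't found: the action is vetoed,
            # even if it is as harmless as typing 'hi'.
            return False
        action.execute()
        return True


# With a prover that only approves a whitelist, the agent acts;
# with no prover at all, it does nothing.
hi = Action("type 'hi'", lambda: print("hi"))
cautious = GoalStableAgent(prove_invariant=None)
whitelisting = GoalStableAgent(prove_invariant=lambda a: a.name in {"type 'hi'"})
print(cautious.act(hi))      # False: every action is blocked
print(whitelisting.act(hi))  # prints hi, then True
```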
Is that because any non-trivial action could risk changing the AGI, and thus the AGI wouldn’t dare do anything at all? (If false, disregard the following and return 0.)
That, or it takes actions that change itself without caring that they would make it worse, because it doesn’t know that its current algorithms are worth preserving. Your scenario is what might happen if someone notices this problem and tries to fix it by telling the AI to never modify itself, depending on how exactly they formalize ‘never modify itself’.
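To illustrate why the formalization matters, here is a rough sketch (purely hypothetical names and checks, not anyone’s actual proposal) of how a narrow “never write to your own code” reading and a broad “never take an action whose predicted effect changes your goals” reading veto different actions:

```python
# Rough sketch (hypothetical) of why the formalization of "never modify
# itself" matters: a narrow syntactic check and a broad behavioral check
# veto very different sets of actions.

from dataclasses import dataclass


@dataclass
class ProposedAction:
    name: str
    writes_to_own_code: bool      # does it literally edit the agent's source?
    predicted_goal_changed: bool  # does the predicted outcome alter the goal?


def narrow_check(a: ProposedAction) -> bool:
    # "Never modify yourself" read as "never write to your own code".
    # Misses indirect changes to the agent.
    return not a.writes_to_own_code


def broad_check(a: ProposedAction) -> bool:
    # "Never modify yourself" read as "never take an action whose predicted
    # effect changes your goal representation" -- closer to goal stability,
    # but now almost every action needs a prediction about the agent itself.
    return not a.predicted_goal_changed


actions = [
    ProposedAction("buy an apple", False, False),
    ProposedAction("patch own decision code", True, True),
    ProposedAction("update beliefs from observation", False, True),
]
for a in actions:
    print(a.name, "narrow:", narrow_check(a), "broad:", broad_check(a))
# The narrow check allows the indirect change; the broad one does not.
```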