Yes, it does seem safer to build non-self-modifying AIs. But I’m not quite saying that should be the limit. I’m saying that any AI that can self-modify ought to have a hard barrier: a core of code that cannot be modified.
I know there has been excitement here about a transhuman AI being able to bypass pretty much any control humans could devise (that excitement is the topic that first brought me here, in fact). But going for a century or so with AIs that can’t self-modify seems like a pretty good precaution, no?
But what counts as “self-modification”?
Simply making a promise could be considered self-modification, since you presumably behave differently after making the promise than you would have counterfactually.
Learning some fact about the world could be considered self-modification, for the same reason.
Can we come up with a useful classification scheme, distinguishing safe forms of self-modification from unsafe forms? Or, what may amount to the same thing, can we give criteria for rationally self-modifying, for each class of self-modification? That is, for example, when is it rational to make promises? When is it rational to update our beliefs about the world?
Perhaps, in this context: structural changes to yourself that are not changes to beliefs or memories, and that are not merely confined to repositioning your actuators or to day-to-day metabolism.
You could whitelist safe kinds. That might be useful—under some circumstances.
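Here is a minimal sketch of what such a whitelist might look like, assuming each proposed change could first be labeled with a category (every name below is hypothetical and purely illustrative):

```python
from enum import Enum, auto

class ModKind(Enum):
    """Hypothetical categories of self-modification."""
    BELIEF_UPDATE = auto()    # learning a fact about the world
    MEMORY_WRITE = auto()     # recording an observation
    ACTUATOR_STATE = auto()   # repositioning actuators, day-to-day metabolism
    COMMITMENT = auto()       # making a promise / precommitment
    REWARD_RULES = auto()     # rewriting the rules for reward
    CORE_CODE = auto()        # rewriting the protected core itself

# Kinds treated as "safe" for the purposes of this sketch.
SAFE_KINDS = {ModKind.BELIEF_UPDATE, ModKind.MEMORY_WRITE, ModKind.ACTUATOR_STATE}

def allow_modification(kind: ModKind) -> bool:
    """Permit a proposed change only if its kind is on the whitelist."""
    return kind in SAFE_KINDS
```

The gatekeeping itself is trivial; the hard part is the earlier question of how to classify a real change into one of these bins at all.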
Clearly, there are some internal values that an AI would need to be able to modify, or else it couldn’t learn. But I think there is good reason to disallow an AI from modifying its own rules for reward, at least to start out. An analogy in humans is that we can do some amazingly wonderful things, but some people go awry when they begin abusing drugs, thereby modifying their own reward circuitry. Severe addicts find they can’t manage a productive life, instead turning to crime to get just enough cash to feed their habits. I’d say that there is inherent danger for human intelligences in short-circuiting or otherwise modifying our reward pathways directly (i.e., chemically), and so there would likely be danger in allowing an AI to directly modify its reward pathways.
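As a rough sketch of that restriction (the classes here are hypothetical, not any real agent API): the reward rules sit behind an interface the agent can query but is given no handle to rewrite, while learned beliefs stay freely writable.

```python
class RewardModule:
    """Reward rules fixed at construction time; no setter is exposed."""
    def __init__(self, reward_fn):
        self._reward_fn = reward_fn

    def reward(self, state) -> float:
        return self._reward_fn(state)


class Agent:
    def __init__(self, reward_module: RewardModule):
        self._reward = reward_module  # intended hard barrier: read it, don't rewrite it
        self.beliefs = {}             # internal values the agent *may* modify

    def learn(self, key, value):
        """Allowed: updating beliefs about the world."""
        self.beliefs[key] = value

    def evaluate(self, state) -> float:
        """Allowed: querying the reward, not replacing it."""
        return self._reward.reward(state)
    # Deliberately: no method for swapping out self._reward.


agent = Agent(RewardModule(lambda state: state.get("progress", 0.0)))
agent.learn("sky_is_blue", True)
print(agent.evaluate({"progress": 0.7}))  # 0.7
```

Of course, nothing in this sketch physically prevents a sufficiently capable agent from reaching in and replacing `_reward`; the barrier would have to be enforced by something outside the agent’s own reach.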
And how do you propose to stop them? Put a negative term in their reward functions?
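For concreteness, the suggestion in that question might look something like the sketch below. `detects_self_modification` is entirely hypothetical, and the sketch says nothing about how either the detector or the penalty would survive an agent able to rewrite them.

```python
def detects_self_modification(state) -> bool:
    """Hypothetical detector; building one that cannot be fooled is the hard part."""
    return bool(state.get("self_mod_attempted", False))

def shaped_reward(state, base_reward, penalty=1000.0) -> float:
    """Base reward minus a large penalty whenever self-modification is detected."""
    r = base_reward(state)
    if detects_self_modification(state):
        r -= penalty
    return r
```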