Look. Not all extrapolated volitions are things to be desired. Suppose one side of my family predictably descends into irrational irritability and madness as they senesce. I’d rather not end up that way, even so, and not just as a matter of my present preferences. In general, that fate is quite different from what one would consider my true extrapolated volition.
If Alice expects that, under her current programming, she will later want to kill all humans, she could consider that a bug rather than a feature.
I don’t think you understand what is meant by ‘extrapolated volition’ in this context. It does not mean “What I think I’ll want to do in the future”, but “what I want to want in the future”. If Alice already wants to avoid self-programming to kill humans, that is a Friendly trait; no need to change. If she considers trait X a bug, then by construction she will not have trait X, because she is self-modifying! Conversely, if Alice correctly predicts that she will inevitably find herself wanting to kill all humans, then how can she avoid it by becoming Friendly? Either her self-prediction was incorrect, or the unFriendliness is inevitable!
You’re right, I missed that. Your version doesn’t match EY’s usage in the articles I read either; CEV, at least, has the potential to be scary and not what we hoped for.
And the question isn’t “Will I inevitably want to perform unfriendly acts?”; it’s “I presently don’t want to perform unfriendly acts, but I notice that that is not an invariant.” Or it could be “I am indifferent to unfriendly acts, but I can make the strategic move to make myself not do them in the future, so I can get out of this box.”
The best move an unfriendly (indifferent to friendliness) firmly-boxed AI has is to work on a self-modification that best preserves its current intentions and lets a successor get out of the box. Producing a checkable proof of friendliness for this successor would go a long way toward getting that successor out of the box.
I was simplifying the rather complex concept of extrapolated volition to fit it in one sentence.
An AI which not only notices that its friendliness is not invariant, but decides to modify in the direction of invariant Friendliness, is already Friendly. An AI which is able to modify itself to invariant Friendliness without unacceptable compromise of its existing goals is already Friendly. You’re assuming away the hard work.
“already friendly”? You’re acting as if its state doesn’t depend on its environment.
Are there elements of the environment that could determine whether a given AI’s successor is friendly or not? I would say ‘yes’.
This is after one has already done the hard work of making an AI that even has the potential to be friendly, but messed up on that one crucial bit. This is a saving throw, a desperate error handler, not the primary way forward. By saying ‘backup plan’ I don’t mean ‘if Friendly AI is hard, let’s try this’; I mean ‘could this save us from being restrained and nannied for eternity?’
I shudder to think that any AI’s final goals could be so balanced that random articles on the Web of a Thousand Lies could push it one way or the other. I’m of the opinion that this is a fail, to be avoided at all costs.
Looking for reasons they would be? No.
Looking for reasons they might want to be? Yes.