Does this simplify to the AI obeying: “Modify my utility function if and only if the new version is likely to result in more utility according to the current version?”
If so, something about it feels wrong. For one thing, I’m not sure how an AI following such a rule would ever conclude it should change the function. If it can only make changes that result in maximizing the current function, why not just keep the current one and continue maximizing it?
That’s the point: it would almost never change its underlying utility function. Once we have a provably Friendly AI, we wouldn’t want it to change the part that makes it Friendly.
Now, it could still change how it goes about achieving its utility function, as long as that helps it get more utility, so it would still be self-modifying.
There is a chance that it could change (e.g., if you were naturally a two-boxer on Newcomb’s Problem, you might self-modify to become a one-boxer), but those cases are rare.
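To make the rule concrete, here is a minimal sketch of the "evaluate modifications by the *current* utility function" criterion, using the Newcomb case as the one modification that does go through. Everything here is illustrative: the function names, the perfect-predictor assumption, and the dollar figures are all hypothetical, not drawn from any actual AI design.

```python
# Toy sketch: an agent adopts a candidate policy only if it scores higher
# under its CURRENT utility function. All names and numbers are illustrative.

def expected_utility(utility, policy, scenarios):
    """Average utility of the outcomes this policy produces across scenarios."""
    outcomes = [policy(s) for s in scenarios]
    return sum(utility(o) for o in outcomes) / len(outcomes)

def consider_modification(current_utility, current_policy, candidate_policy, scenarios):
    """Adopt the candidate only if it wins by the *current* utility function."""
    current_score = expected_utility(current_utility, current_policy, scenarios)
    candidate_score = expected_utility(current_utility, candidate_policy, scenarios)
    return candidate_policy if candidate_score > current_score else current_policy

# Toy Newcomb-style setup: a reliable predictor fills the opaque box with
# $1,000,000 only if it predicts one-boxing; the prediction is assumed to
# track the agent's policy perfectly.
def run_newcomb(one_boxes):
    transparent = 1_000
    opaque = 1_000_000 if one_boxes else 0
    return opaque if one_boxes else transparent + opaque

def two_boxer(scenario):
    return run_newcomb(one_boxes=False)

def one_boxer(scenario):
    return run_newcomb(one_boxes=True)

def money_utility(payoff):
    return payoff  # current utility: just count the dollars

# Judged by its current utility function, the two-boxing agent still prefers
# the one-boxing policy here, so this is one of the rare self-modifications.
chosen = consider_modification(money_utility, two_boxer, one_boxer, scenarios=[None])
print(chosen is one_boxer)  # True
```

A modification to the utility function itself, by contrast, would have to be evaluated by that same current utility function, so (barring unusual cases like the one above) it would almost never come out ahead, which is exactly the stability the commenter is pointing at.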