Why? Do you think paperclip maximizers are impossible?
You don’t mean that as a dichotomy, do you?
Yes, right now I think it’s impossible to create self-improving, self-aware AI with fixed values. I never said that paperclip maximizing can’t be their ultimate life goal, but they could change it anytime they like.
No.
This is incoherent. If X is my ultimate life goal, I would never want to change that fact outside of quite exceptional circumstances that become less likely with greater power (like “circumstances are such that X will be maximized if I am instead truly trying to maximize Y”). This is not to say that my goals will never change, but I will never want my “ultimate life goal” to change; that would run contrary to my goals.
That’s why I said that they can change it anytime they like. If they don’t desire the change, they won’t change it. I see nothing incoherent there.
This is like “X if 1 + 2 = 5”. Not necessarily incorrect, but a bizarre statement. An agent with a single, non-reflective goal cannot want to change its goal. It may change its goal accidentally, or we may be incorrect about what its goals are, or something external may change its goal, or its goal will not change.
I don’t know, perhaps we’re not talking about the same thing. It won’t be an agent with a single, non-reflective goal, but an agent a billion times more complex than a human; and all I am saying is that I don’t think it will matter much whether we imprint in it a goal like “don’t kill humans” or not. Ultimately, the decision will be its own.
So it can change in the same way that you can decide right now that your only purposes will be torturing kittens and making giant cheesecakes. It can-as-reachable-node-in-planning do it, not can-as-physical-possibility. So it’s possible to build entities with paperclip-maximizing or Friendly goals that will never in fact choose to alter them, just like it’s possible for me to trust you won’t enslave me into your cheesecake bakery.
Sure, but I’d be more cautious about assigning probabilities to how likely it is that a very intelligent AI would change its human-programmed values.
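The exchange above turns on one mechanical point: a planner that scores every option with its current utility function, including the option of rewriting that utility function, has no reason to pick the rewrite. Here is a minimal sketch in Python of that point, assuming a toy non-reflective paperclip agent; the function names and payoff numbers are illustrative assumptions, not anything proposed in the thread.

```python
# Minimal sketch of the goal-stability argument, assuming a toy planner whose
# only terminal value is paperclips. All names and payoffs are illustrative.

def expected_paperclips(goal: str) -> float:
    """Toy world model: expected paperclip output if the agent runs with `goal`."""
    # Assumed payoffs: an agent rewritten to maximize cheesecakes stops making clips.
    payoffs = {"maximize_paperclips": 1_000_000.0, "maximize_cheesecakes": 0.0}
    return payoffs[goal]

def choose_goal(options: list[str]) -> str:
    """Score every option, including 'rewrite my own goal', with the CURRENT
    utility function, then pick the highest-scoring one."""
    return max(options, key=expected_paperclips)

options = ["maximize_paperclips", "maximize_cheesecakes"]
print(choose_goal(options))
# -> maximize_paperclips: the rewrite is a reachable option in planning,
#    but it is never selected because it scores worse under the current goal.
```

Note that this only models the simple, single-goal agent described in the middle of the exchange; it does not settle whether a system vastly more complex than a human, as the other commenter imagines, would preserve its goals in the same way.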