This is like saying “X if 1 + 2 = 5”: not necessarily incorrect, but a bizarre statement. An agent with a single, non-reflective goal cannot want to change its goal. It may change its goal accidentally, we may be mistaken about what its goals are, something external may change its goal, or its goal will simply not change.
I don’t know, perhaps we’re not talking about the same thing. It won’t be an agent with a single, non-reflective goal, but an agent a billion times more complex than a human; and all I am saying is that I don’t think it will matter much whether we imprint in it a goal like “don’t kill humans” or not. Ultimately, the decision will be its own.