Apart from the guarantees, it seems to me that it is hard to build the AI in such a way that it would be unable to find a better solution than to wait.
I’m not sure what the difference is between a guarantee that the AI will not X, on the one hand, and building an AI in such a way that it’s unable to X, on the other.
Regardless, I agree that it does not follow from the supposition that pressing a button results in absence of an itch, or absence of pain, or some other negative reinforcement, that the button-pressing system has a drive to preserve itself.
And, sure, it’s possible to have a utility-maximizing system that doesn’t seek to preserve itself. (Of course, if I observe a utility-maximizing system X, I should expect X to seek to preserve itself, but that’s a different question.)
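To make that point concrete, here is a minimal sketch (my own toy example, not anything from the exchange above): a one-step expected-utility maximizer whose utility function has no term for its own survival. The action names and utilities are invented for illustration.

```python
# Toy example (illustrative only): a one-step expected-utility maximizer
# whose utility function says nothing about its own continued operation.
# Action names and utility values are made up.

ACTIONS = {
    # action: (utility of outcome, does the agent keep running afterwards?)
    "press_button": (1.0, True),
    "wait":         (0.0, True),
    "shut_down":    (2.0, False),  # highest utility, but ends the agent
}

def choose(actions):
    """Pick the action with the highest utility; survival is not a term."""
    return max(actions, key=lambda a: actions[a][0])

if __name__ == "__main__":
    best = choose(ACTIONS)
    utility, survives = ACTIONS[best]
    print(f"chosen action: {best} (utility={utility}, survives={survives})")
    # Prints "shut_down": the maximizer has no drive to preserve itself
    # unless continued operation happens to raise expected utility.
```

Whether such a system acts to preserve itself depends entirely on whether surviving raises its expected utility, which is the distinction at issue here.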
I’m not sure what the difference is between a guarantee that the AI will not X, on the one hand, and building an AI in such a way that it’s unable to X, on the other.
About the same as between coming up with a true conjecture and making a proof, except larger, I’d say.
Of course, if I observe a utility-maximizing system X, I should expect X to seek to preserve itself, but that’s a different question.
Well, yes, given that if it had failed to preserve itself you wouldn’t be seeing it; although with software there is no particular necessity for it to try to preserve itself.
I’m not sure what the difference is between a guarantee that the AI will not X, on the one hand, and building an AI in such a way that it’s unable to X, on the other. About the same as between coming up with a true conjecture and making a proof, except larger.
Ah, I see what you mean now. At least, I think I do. OK, fair enough.