I realize that no analogy is perfect, but I don’t think your sleeper cell hypothetical is analogous to AI. It would be a more accurate analogy if someone were to point out that, gee, a sleeper cell would be quite effective, and it’s just a matter of time before the enemy realizes this and establishes one. There is a vast amount of Knightian uncertainty in the case of AI that does not exist in your hypothetical.
Well, it depends on what you mean, but I do think that almost any AGI we create will be unfriendly by default, so to the extent that we as a society are trying to create AGI, I don’t think it’s exaggerating to say that the sleeper cell “already exists”. I’m willing to own up to the analogy to that extent.
As for Knightian uncertainty: either the AI will be an existential threat, or it won’t. I already think that it will be (or at least could be), so from a Knightian point of view I’m already being pretty conservative, given the stakes at hand. The worst case is that we waste some research money on something that turns out not to be that important.
(Of course, I’m against wasting research money, so I do pay attention to arguments for why AI won’t be a threat. I just haven’t been convinced yet.)