Well, it depends on what you mean, but I do think that almost any AGI we create will be unfriendly by default, so to the extent that we as a society are trying to create AGI, I don’t think it’s exaggerating to say that the sleeper cell “already exists”. I’m willing to own up to the analogy to that extent.
As for Knightian uncertainty: either the AI will be an existential threat, or it won’t. I already think that it will be (or could be), so I think I’m already being pretty conservative from a Knightian point of view, given the stakes at hand. Worst case is that we waste some research money on something that turns out to be not that important.
(Of course, I’m against wasting research money, so I do pay attention to arguments for why AI won’t be a threat. I just haven’t been convinced yet.)