I’m not sure whether you’re kidding.
As a joke, it’s funny.
As a serious rebuttal, I don’t think it works. A shield AI’s code could be made public in advance of its launch, and could verifiably NOT contain anything like the memories, personality, or secret agenda of the programmers. There’s nothing “narrow” about wanting the world to cooperate in enforcing a temporary ban on superintelligent AIs.
Such a desire is, as some other commenters have complained, a bit conservative—but in light of the unprecedented risks (both in terms of geographic region affected and in terms of hard-to-remove uncertainty), I’ll be happy to be a conservative on this issue.