I understand the notion, but think of it in terms of preventing a pandemic: there’s a certain set of characteristics a virus could have that would overwhelm virtually any attempt to prevent it from wiping out humanity. All existing viruses are pretty safely within the bounds of what our actual public health protocols can handle. On top of that, existing or hypothetical-but-plausible protocols could prevent pandemics caused by viruses with higher transmissibility or higher mortality than anything previously experienced.
Realistically, a protocol for dealing with AGI will be in a similar position. It will be distinctly “one-shot,” but there’s no reason it couldn’t handle a computer somewhat more intelligent than any existing human being.