One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.
Whatever reasoning technique is available to a superintelligence is available to humans as well. No one is mandating that humans who build an AGI check their work with pencil and paper.
I mean, sure, but this observation (i.e., “We have tools that allow us to study the AI”) is only helpful if your reasoning techniques allow you to keep the AI in the box.
Which is, like, the entire point of contention, here (i.e., whether or not this can be done safely a priori).
I think that you think MIRI’s claim is “This cannot be done safely.” And I think your claim is “This obviously can be done safely” or perhaps “The onus is on MIRI to prove that this cannot be done safely.”
But, again, MIRI’s whole mission is to figure out the extent to which this can be done safely.