As far as I can tell, you’re responding to the claim, “A group of humans can’t figure out complicated ideas given enough time.” But this isn’t my claim at all. My claim is, “One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.” This is trivially true once the number of machines which are “smarter” than humans exceeds the total number of humans. The extent to which it is difficult to predict/model the “smarter” machines is a matter of contention. The precise number of “smarter” machines and how much “smarter” they need be before we should be “worried” is also a matter of contention. (How “worried” we should be is a matter of contention!)
But all of these points of contention are exactly the sorts of things that people at MIRI like to think about.
You quoted that claim ("One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.") and replied, "Whatever reasoning technique is available to a super-intelligence is available to humans as well. No one is mandating that humans who build an AGI check their work with pencil and paper."
I mean, sure, but this observation (i.e., “We have tools that allow us to study the AI”) is only helpful if your reasoning techniques allow you to keep the AI in the box.
Which is, like, the entire point of contention here (i.e., whether or not this can be done safely a priori).
I think that you think MIRI’s claim is “This cannot be done safely.” And I think your claim is “This obviously can be done safely” or perhaps “The onus is on MIRI to prove that this cannot be done safely.”
But, again, MIRI’s whole mission is to figure out the extent to which this can be done safely.