The worry is that there will be such a huge gulf between how superintelligences reason versus how we reason that it would take prohibitively long to understand them.
That may be a valid concern, but it requires evidence as it is not the default conclusion. Note that quantum physics is sufficiently different that human intuitions do not apply, but it does not take a physicist a “prohibitively long” time to understand quantum mechanical problems and their solutions.
As to your laptop example, I’m not sure what you are attempting to prove. Even if no single engineer understands how every component of a laptop works, we are nevertheless very much able to reason about the systems-level operation of laptops, or the development trajectory of the global laptop market. When there are issues, we are able to debug them and fix them in context. If anything, the example shows how humanity as a whole is able to complete complex projects like the creation of a modern computational machine without being constrained by any one individual understanding the whole.
Edit: gaaaah. Thanks Sable. I fell for the very trap of reasoning by analogy I opined against. Habitual modes of thought are hard to break.
As far as I can tell, you’re responding to the claim, “A group of humans can’t figure out complicated ideas given enough time.” But this isn’t my claim at all. My claim is, “One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.” This is trivially true once the number of machines which are “smarter” than humans exceeds the total number of humans. The extent to which it is difficult to predict/model the “smarter” machines is a matter of contention. The precise number of “smarter” machines and how much “smarter” they need be before we should be “worried” is also a matter of contention. (How “worried” we should be is a matter of contention!)
But all of these points of contention are exactly the sorts of things that people at MIRI like to think about.
One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality.
Whatever reasoning technique is available to a super-intelligence is available to humans as well. No one is mandating that humans who build an AGI check their work with pencil and paper.
I mean, sure, but this observation (i.e., “We have tools that allow us to study the AI”) is only helpful if your reasoning techniques allow you to keep the AI in the box.
Which is, like, the entire point of contention, here (i.e., whether or not this can be done safely a priori).
I think that you think MIRI’s claim is “This cannot be done safely.” And I think your claim is “This obviously can be done safely” or perhaps “The onus is on MIRI to prove that this cannot be done safely.”
But, again, MIRI’s whole mission is to figure out the extent to which this can be done safely.