I think learning is likely to be a hard problem in general (for example, the “learning with rounding problem” is the basis of some cryptographic schemes). I am much less sure whether learning the properties of the physical or social worlds is hard, but I think there’s a good chance it is. If an individual AI cannot exceed human capabilities by much (e.g., we can get an AGI as brilliant as John von Neumann but not much more intelligent), is it still dangerous?
John von Neumann probably isn’t the ceiling, but even if there were a near-human ceiling, I don’t think it would change the situation as much as you might think. Instead of “an AGI as brilliant as JvN”, it would be “an AGI as brilliant as JvN per X FLOPs”, for some X. Then you look at the details of how many FLOPs are lying around on the planet and how hard it is to produce more of them; depending on X, the JvN-AGIs probably aren’t as strong as a full-fledged superintelligence would be, but they probably do manage to take over the world in the end.
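To make the “JvN per X FLOPs” framing concrete, here is a minimal back-of-envelope sketch in Python. Every constant in it (total planetary compute, the value of X, the per-copy speedup) is a hypothetical placeholder chosen only for illustration, not an estimate from the argument above; the point is just that the conclusion scales with X and with the amount of compute available.

```python
# Back-of-envelope sketch of the "JvN-AGI per X FLOPs" argument.
# All constants are hypothetical placeholders, not endorsed estimates.

GLOBAL_FLOPS = 1e21        # hypothetical: total FLOP/s accessible on the planet
X_FLOPS_PER_JVN = 1e15     # hypothetical: FLOP/s needed to run one JvN-equivalent mind
SPEEDUP_PER_COPY = 10      # hypothetical: serial thinking-speed advantage over a human

num_copies = GLOBAL_FLOPS / X_FLOPS_PER_JVN
jvn_years_per_calendar_year = num_copies * SPEEDUP_PER_COPY

print(f"Concurrent JvN-equivalent copies: {num_copies:.0e}")
print(f"JvN-years of work per calendar year: {jvn_years_per_calendar_year:.0e}")
# With these placeholder numbers: ~1e6 copies producing ~1e7 JvN-years of
# intellectual work per year -- far more than any human institution, even
# though no single copy is smarter than John von Neumann.
```

The upshot of the sketch is that the strategically relevant quantity is not the intelligence of one copy but copies-times-speed, which is bounded by available FLOPs rather than by the human ceiling.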