I hadn’t seen this before. Hanson’s conception of intelligence actually seems much simpler and more plausible than what I had previously imagined. I think ‘intelligence’ can easily act as a Semantic Stopsign because it feels like a singular entity through the experience of consciousness, but it may actually be quite modular, as Hanson suggests.
Intelligence must be very modular—that’s what drives Moravec’s paradox (problems like vision and locomotion that we have good modules for feel “easy”, problems that we have to solve with “general” intelligence feel “hard”), the Wason Selection task results (people don’t always have a great “general logic” module even when they could easily solve an isomorphic problem applied to a specific context), etc.
Does this greatly affect the AGI takeoff debate, though? So long as we can’t create a module that is itself capable of creating modules, what we have doesn’t qualify as human-equivalent AGI. But if/when we can, it’s likely that such a module could also create an improved version of itself, and so it remains an open question how fast or how far it could improve.