I’m sure there’s no need to point to Robin Hanson’s anti-foom writings? The best single article, IMO, is Irreducible Detail, which essentially questions the generality of intelligence.
It is true that adult human brains are built out of many domain-specific modules, but these modules develop via a very general, universal learning system. The neuroscientific evidence directly contradicts the evolved-modularity hypothesis, which Hanson appears to be heavily influenced by. That being said, his point that AI progress is driven by a large number of mostly independent advances still carries through.
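The way I picture that claim: the specialization lives in the data each region gets, not in the learning rule itself. Here is a toy sketch of the distinction (pure analogy, nothing here is meant as a model of how cortex actually learns):

```python
# Toy illustration only: one generic learning procedure, two different data
# "domains", two different specialized predictors at the end. (An analogy for
# the argument, not a claim about biological learning.)

import random

def fit(data, steps=2000, lr=0.01):
    """Minimal 1-D linear regression by stochastic gradient descent: y ≈ w * x."""
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        w -= lr * 2 * (w * x - y) * x   # gradient of the squared error
    return w

# Two "domains" with different underlying structure (slopes 3 and -0.5).
domain_a = [(x, 3.0 * x) for x in range(-5, 6)]
domain_b = [(x, -0.5 * x) for x in range(-5, 6)]

# The same universal learning procedure yields two specialized "modules".
module_a = fit(domain_a)   # converges to ≈ 3.0
module_b = fit(domain_b)   # converges to ≈ -0.5
print(module_a, module_b)
```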
Hanson’s general analysis of the economics of AGI takeoff seems pretty sound, even if it is much more likely that neuro-AGI precedes ems.
I hadn’t seen this before. Hanson’s conception of intelligence actually seems much simpler and more plausible than how I had previously imagined it. I think ‘intelligence’ can easily act as a Semantic Stopsign because it feels like a singular entity through the experience of consciousness, but actually may be quite modular as Hanson suggests.
Intelligence must be very modular—that’s what drives Moravec’s paradox (problems like vision and locomotion that we have good modules for feel “easy”, problems that we have to solve with “general” intelligence feel “hard”), the Wason Selection task results (people don’t always have a great “general logic” module even when they could easily solve an isomorphic problem applied to a specific context), etc.
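To make the isomorphism concrete, here is a minimal sketch (using roughly the textbook card sets) showing that the abstract and the concrete versions of the Wason task reduce to exactly the same check: turn over the P cards and the not-Q cards.

```python
# Wason selection task: the rule has the form "if P on one side, then Q on the
# other side". The only cards that can falsify it are those showing P or not-Q,
# so those are the ones you must turn over. The check is identical for the
# abstract and the concrete version of the task.

def cards_to_flip(cards, shows_p, shows_not_q):
    """Return the cards that could falsify 'if P then Q'."""
    return [c for c in cards if shows_p(c) or shows_not_q(c)]

# Abstract version: "if a card has a vowel on one side, it has an even number
# on the other". Correct answer: A and 7 (many people wrongly flip 2 instead).
abstract = cards_to_flip(
    ["A", "K", "2", "7"],
    shows_p=lambda c: c in "AEIOU",
    shows_not_q=lambda c: c.isdigit() and int(c) % 2 == 1,
)

# Concrete version: "if someone is drinking beer, they must be over 18".
# Structurally the same problem, but people find it much easier.
concrete = cards_to_flip(
    ["beer", "coke", 25, 16],
    shows_p=lambda c: c == "beer",
    shows_not_q=lambda c: isinstance(c, int) and c < 18,
)

print(abstract)  # ['A', '7']
print(concrete)  # ['beer', 16]
```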
Does this greatly affect the AGI takeoff debate, though? So long as we can’t create a module which is itself capable of creating modules, what we have doesn’t qualify as human-equivalent AGI. But if/when we can, then it’s likely that it can also create an improved version of itself, and so it’s still an open question as to how fast or how far it can improve.
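Agreed that it stays open. A toy sketch of why (purely illustrative numbers, not a model of any real system): whether a self-improving module “fooms” or levels off depends entirely on what you assume about the returns to each round of improvement.

```python
# Toy model of recursive self-improvement: a system with capability c adds
# gain(c) per round. Whether the trajectory explodes or levels off depends
# entirely on the assumed returns curve, which is exactly the open question.

def trajectory(gain, c=1.0, rounds=20):
    history = [c]
    for _ in range(rounds):
        c += gain(c)
        history.append(c)
    return history

# Compounding returns: more capability makes the next improvement bigger,
# giving roughly exponential growth (the "foom" picture).
fast = trajectory(lambda c: 0.5 * c)

# Diminishing returns: improvements get harder as the easy wins are exhausted,
# so growth levels off (closer to Hanson's many-small-advances picture).
slow = trajectory(lambda c: 0.5 / c)

print(f"after 20 rounds: compounding ≈ {fast[-1]:.0f}, diminishing ≈ {slow[-1]:.1f}")
```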
Thank you for that Irreducible Detail article; I remember reading it before but couldn’t find it later. Hanson’s argument is very convincing and intuitive, and really sheds light on what intelligence might actually be about. When I think about my own intelligence, it doesn’t feel like there is some overarching general module doing the planning; it feels more like a collection of simple heuristics, rules of thumb, and automatic behaviors that just happen to work. That is much closer to Hanson’s idea of intelligence.
I think this is the single best argument against MIRI’s idea of intelligence.
Here is an interesting article in the same vein.