I think math is “easy” in the sense that we have proof assistants that can verify proofs so AIs can learn it through pure self-play. Therefore I agree that AI will probably soon solve math, but I disagree that it indicates particularly high capabilities gain.
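The self-play loop being gestured at here can be sketched in a few lines. This is a toy illustration, not anyone's actual training setup: the "proof assistant" is a stand-in checker for trivial arithmetic claims, and `generate_candidates` stands in for a model sampling candidate proofs. The key structural point is that the verifier provides a perfect reward signal, so only machine-checked solutions ever enter the training data.

```python
import random

def check_proof(a: int, b: int, claimed_sum: int) -> bool:
    # Stand-in for a proof assistant: verifies the claim "a + b = claimed_sum".
    return a + b == claimed_sum

def generate_candidates(a: int, b: int, n: int = 20) -> list[int]:
    # Stand-in for a model sampling candidate proofs (here: random guesses).
    return [random.randint(0, 20) for _ in range(n)]

def self_play_round(num_problems: int = 100) -> list[tuple[int, int, int]]:
    # Collect only verifier-approved (problem, solution) pairs as training data;
    # unverified candidates are discarded, so the data is noise-free by construction.
    data = []
    for _ in range(num_problems):
        a, b = random.randint(0, 10), random.randint(0, 10)
        for candidate in generate_candidates(a, b):
            if check_proof(a, b, candidate):
                data.append((a, b, candidate))
                break  # one verified solution per problem is enough
    return data

training_data = self_play_round()
```

Because the checker never approves a wrong answer, every collected example is correct, which is the property that makes domains with formal verifiers (like proof assistants) unusually amenable to this kind of loop.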
Just want to record that I agree with this with moderate confidence. Or more precisely: I think we’ll probably be able to make extremely good math-solving AIs with an architecture that would be generally incompetent at most real-world tasks, just as (for instance) AlphaGo Zero is.
I think “A narrow skill that in humans seems to require a whole lot of general intelligence, turns out to be able to be easily unbundled (or to be unbundled by default) from many of the other aspects of general intelligence” has a pretty good track record. Part of why I’m not super doomy, really.
A lot of my confidence that this will happen comes from this, plus a generalized Moravec’s-paradox-style “hard things are easy, easy things are hard” intuition.
Does this “paradox” still hold in the era of recent multimodal AI? In particular, what are some things that are easy for humans but hard for AI, other than things requiring embodiment? What areas are human mechanical Turks still much better at? (I believe there are areas but pretty fuzzy about what they are.)
I don’t think reading/writing is very easy for humans, compared to perception and embodied tasks. My Moravec’s paradox intuition here is that maths is of a similar order of difficulty to what we have been very successfully automating in the last couple of years, so I expect it will happen soon.
Embodied tasks just aren’t an area where comparison makes much sense yet. What kind of perception tasks did you have in mind?
I mean the type of perception one needs to empty a random dishwasher, make a cup of coffee with an unfamiliar coffee machine, or clean a room. Hunt and skin a rabbit.