Following up on JoshuaZ’s comment, human brains aren’t at all optimized for doing higher math; it’s something of a mystery that humans can do higher math at all. Human brains were optimized for things like visual processing and adapting to the cultures they grow up in. So I would expect human-level math AI to be easier than human-level visual-processing AI.
Because doing mathematics well is something that takes really exceptional brainpower and that even most very intelligent people aren’t capable of.
What do you mean by “really exceptional brainpower”? And what do you mean by “doing mathematics well”?
What I meant was:
(1) empirically, it seems that very few human beings are good at proving theorems (or even following other people’s proofs);
(2) being good at proving theorems seems to correlate somewhat with other things we think of as cleverness;
(3) these facts are probably part of why it seems to orthonormal (and, I bet, to lots of others) as if skill in theorem-proving would be one of the hardest things for AI to achieve; but
(4) #1 and #2 hold to some extent for other things, like being good at playing chess, that also used to be thought of as particularly impressive human achievements but that seem to be easier to make computers do than all sorts of things that initially seem straightforward.
All of which is rather cumbersome, which is why I put it in the elliptical way I did.
Heh, I missed the irony :-)
I didn’t, but realized someone could, which is why my other comment started off with the words “more explicitly”: it essentially unpacks gjm’s remark.