Given that I think that capturing the ingenuity of human mathematicians is one of the last things I’d expect to see before seeing human-level-or-smarter AI,
Why do you think so?
Verifying proofs and writing programs are isomorphic. A smarter-than-human theorem prover is likely to also be a better-than-human programmer, and you can ask it to do things like find good theorem provers...
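(To illustrate the proofs-as-programs point this comment is gesturing at, here is a minimal Lean sketch of the Curry–Howard correspondence; the example theorem and the names `and_swap` and `swap` are invented for illustration, not taken from the discussion.)

```lean
-- Under the Curry–Howard correspondence, a proof of
-- "A and B implies B and A" is literally a program that
-- takes a pair and swaps its components.
theorem and_swap (A B : Prop) : A ∧ B → B ∧ A :=
  fun h => ⟨h.right, h.left⟩

-- The same construction written as an ordinary program on data.
def swap (α β : Type) : α × β → β × α :=
  fun p => (p.2, p.1)
```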
More or less what Strilanc said, and not what gjm said.
The difference between chess and math is that a superhuman-at-theorem-proving AI is a lot more useful for speeding up further AI research than a superhuman-at-chess-playing AI would be. (Not to mention the chance that being better-than-human at mathematical intuition already unlocks enough munchkinning of physics to take over the world directly.)
Because doing mathematics well is something that takes really exceptional brainpower and that even most very intelligent people aren’t capable of.
Just like chess.
I’d like to make explicit the connection of this idea to hard takeoff, since it’s something I’ve thought about before but that isn’t stated explicitly very often. Namely, this provides some reason to think that by the time an AGI is human-level at the things humans have evolved to do, it will be very superhuman at things that humans have more difficulty with, like math and engineering.
I’m against sarcasm as a way of making an argument; it makes it less pleasant to discuss things here, and it can hide a bad argument.
First of all: I hereby apologize to orthonormal if s/he felt insulted or belittled or mocked by what I wrote. That was in no way my intention.
I’m not sure “sarcasm” is quite the right term, though. I really do think that, among human beings, doing mathematics well takes exceptional brainpower, and even most very intelligent people can’t do it; I really do think that playing chess well has the same feature; and I really do think that that feature is why orthonormal expects doing mathematics well to be one of the last achievements of AI before general superhumanness.
If my last paragraph had said “I don’t think that’s very good evidence, though; you could say the same about playing chess well, and we all know what computers have done to that” then it would have been clearer and that might have been worth the loss of brevity. But I’m not sure sarcasm is the failure mode (if failure it be) here.
Following up on JoshuaZ’s comment, human brains aren’t at all optimized for doing higher math; it’s a mystery that humans can do higher math at all. Human brains are optimized for things like visual processing and adapting to the cultures they grow up in. So I would expect human-level math AI to be easier than human-level visual-processing AI.
What do you mean by “really exceptional brainpower”? And what do you mean by “doing mathematics well”?
What I meant was: (1) empirically it seems that very few human beings are good at proving theorems (or even following other people’s proofs), (2) being good at proving theorems seems to correlate somewhat with other things we think of as cleverness, (3) these facts are probably part of why it seems to orthonormal (and, I bet, to lots of others) as if skill in theorem-proving would be one of the hardest things for AI to achieve, but (4) #1 and #2 hold to some extent for other things, like being good at playing chess, that also used to be thought of as particularly impressive human achievements but that seem to be easier to make computers do than all sorts of things that initially seem straightforward.
All of which is rather cumbersome, which is why I put it in the elliptical way I did.
Heh, I missed the irony :-)
I didn’t, but realized someone could, which is why my other comment started off with the words “more explicitly”, essentially unpacking gjm’s remark.
More explicitly, there are a lot of things that humans can do easily that seem to be extremely difficult for AI (vision and language are the most obvious), and there are many things that humans have trouble with that we can easily make AI do a decent job of. At some level, this shouldn’t be that surprising, because a lot of the things humans find really easy are things we have had millions of years of evolution optimizing for.