If somebody was able to write a program that’s able to find proofs [involving substantive ideas], I would consider that to be strong evidence that there’s been meaningful progress on general artificial intelligence. In absence of such examples, I have a strong prior against there having been meaningful progress on general artificial intelligence.
That strikes me as like saying, “Until this huge task is within a hair’s breadth of being complete, I will doubt that any progress has been made.” Surely there are more intermediate milestones, no?
It’s unclear to me both that there are intermediate milestones and that getting a computer to the point of being able to generate proofs of the Sylow theorems brings you within a hair’s breadth of being complete.
I’m way out of my field here. Can you name some intermediate milestones?
Well, I have no knowledge of the Sylow theorems, but it seems likely that if any system can efficiently generate short, ingenious, idea-based proofs, it must have something analogous to a mathematician’s understanding.
Or at least have a mechanism for formulating and relating concepts, which, to me (admittedly a layman), sounds like the main challenge for AGI.
I suppose the key scenario I can imagine right now where an AI is able to conduct arbitrary human-level mathematical reasoning but cannot be called a fully general intelligence is one where there is some major, persistent difficulty in transferring the ability to reason about the purely conceptual world of mathematics to the physically real world one is embedded within. In particular (inspired by Eliezer’s comments on AIXI), I could imagine there being difficulty with the problem of locating oneself within that world and distinguishing oneself from the surroundings.
“If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is.”—John von Neumann.
See also the quotation in this comment.
Ha, I’ve seen that quote before, good point! (and likewise in the linked comment) I suppose one reason to think that mathematical reasoning is close to AGI is that it seems similar to programming. And if an AI can program, that seems significant.
Maybe a case could be made that the key difficulty in programming will turn out to be in formulating what program to write. I’m not sure what the analogue is in mathematics. Generally it’s pretty easy to formally state a theorem to prove, even if you have no idea how to prove it, right?
If so, that might lend support to the argument that automated general mathematical reasoning is still a ways off from AGI.
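To make the statement/proof asymmetry concrete, here is a minimal sketch, assuming Lean 4 with mathlib available; the theorem name and exact phrasing are illustrative rather than taken from mathlib. Formally stating a version of Sylow’s first theorem takes only a few lines, while the proof is left as `sorry`.

```lean
-- A minimal sketch, assuming Lean 4 with mathlib; `sylow_first_sketch` is an
-- illustrative name, not a mathlib lemma. The point is that the *statement*
-- fits in a few lines, while the proof (where the ideas live) is omitted.
import Mathlib

/-- One common form of Sylow's first theorem: if `p ^ n` divides the order of
a finite group `G`, then `G` has a subgroup of order `p ^ n`. -/
theorem sylow_first_sketch {G : Type*} [Group G] [Finite G]
    {p n : ℕ} (hp : p.Prime) (hdvd : p ^ n ∣ Nat.card G) :
    ∃ H : Subgroup G, Nat.card H = p ^ n := by
  sorry  -- the hard part: this is what an automated prover would have to fill in
```

Checking that the statement typechecks is mechanical; filling in the `sorry` is where all the mathematical ideas would have to go.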
The mathematical counterpart may be of the “recognizing important concepts and asking good questions” variety. A friend of mine has an idea of how to formalize the notion of an “important concept” in a mathematical field, and of its possible relevance to AI, but at the moment it’s all very vague speculation :-).
It’s pretty easy for a human of significantly above average intelligence. That doesn’t imply easy for an average human or an AI.
As for intermediate milestones, I think all the progress that has been made in AI (and neuroscience!) for the last half-century should count. We now know a whole lot more about AI and brains than we used to.
EDIT: To be more specific, I think such feats as beating humans at chess and Jeopardy! or being able to passably translate texts and drive cars are significant.
Future milestones might include:
- beating the Go world champion
- significantly better machine translations
- much more efficient theorem provers
- strong general game players
I could see all of those happening without AI being able to replicate all human-generated mathematical proofs. But it would still seem like significant progress had been made.
I agree that there exist intermediate milestones. The question is how far the ones that have been surpassed are from the end goal. The relevant thing isn’t how much we understand relative to what we used to know, but how much we understand relative to what’s necessary to build a general artificial intelligence. The latter can be small even if the former is large.
That’s fair.