I think his claim is basically “we don’t know yet how to teach a machine how to identify reasonable hypotheses in a short amount of time,” where the “short amount of time” is implicit.
My impression was that he was saying that creativity is some mysterious thing that we don’t know how to implement. But we do. Creativity is just search. Search that is possibly guided by experience solving similar problems. By learning from past experiences, search becomes more efficient. This idea is quite consistent with studies of how the human brain works: beginner chess players rely more on ‘thinking’ (i.e., considering a large variety of moves, most of which are terrible), while grandmasters seem to rely more on their memory.
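A toy sketch of this idea (my own illustration, not anything from the article): search over arithmetic-puzzle move sequences, where "experience" is just a tally of which moves appeared in previously solved puzzles, used to order move generation. The puzzle, the three operations, and the past solutions are all made up for the example.

```python
from collections import deque

OPS = {"+3": lambda x: x + 3, "*2": lambda x: x * 2, "-1": lambda x: x - 1}

def solve(start, target, op_order, limit=100_000):
    """Breadth-first search for a sequence of ops turning start into target.
    op_order biases which moves are generated first (the 'experience')."""
    seen = {start}
    queue = deque([(start, [])])
    expanded = 0
    while queue:
        value, path = queue.popleft()
        expanded += 1
        if value == target:
            return path, expanded
        if expanded >= limit:
            break
        for name in op_order:
            nxt = OPS[name](value)
            if 0 <= nxt <= 10 * target and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None, expanded

# A beginner tries ops in arbitrary order; a 'grandmaster' orders them
# by how often each op appeared in past solved puzzles (hypothetical data).
past_solutions = [["*2", "*2", "+3"], ["*2", "+3"], ["*2", "*2", "-1"]]
freq = {}
for sol in past_solutions:
    for op in sol:
        freq[op] = freq.get(op, 0) + 1
experienced_order = sorted(OPS, key=lambda o: -freq.get(o, 0))

path, n = solve(1, 99, experienced_order)
```

Here the experience only reorders move generation, which is far weaker than what a real learned heuristic would do, but it shows the shape of "search guided by past problems".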
It similarly seems unlikely that you could have a genetic algorithm operate on a population of physics explanations and end up with an explanation that successfully explains Dark Matter, because at each step the genetic algorithm needs to have some sense of what is more or less likely to explain Dark Matter.
As I said, though, it’s quite different, because a hypothetical explanation for dark matter needs only to be consistent with existing experimental data. It’s true that this is infeasible for the Turing test, because you would need to test millions of candidate programs against humans, and that cannot be done inside the computer unless you already have AGI. But checking proposals for dark matter against existing data can be done entirely inside the computer.
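To make the "entirely inside the computer" point concrete, here is a deliberately tiny genetic algorithm (my own toy, not a model of real cosmology): candidate "explanations" are integer coefficients (a, b) of a linear law, and fitness is computed purely from stored observations, with no new experiments and no human judge in the loop.

```python
import random

random.seed(0)

# "Existing experimental data": observations produced by an unknown law.
true_law = lambda x: 3 * x + 7
data = [(x, true_law(x)) for x in range(10)]

def fitness(candidate):
    # How well candidate explanation (a, b) fits the recorded data.
    # Evaluating this needs only the stored data -- nothing outside the machine.
    a, b = candidate
    return -sum((a * x + b - y) ** 2 for x, y in data)

def mutate(candidate):
    a, b = candidate
    return (a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1]))

population = [(random.randint(-10, 10), random.randint(-10, 10))
              for _ in range(30)]
initial_best = max(population, key=fitness)

for _ in range(200):
    population.sort(key=fitness, reverse=True)
    elite = population[:10]          # keep the best explanations...
    population = elite + [mutate(random.choice(elite)) for _ in range(20)]

best = max(population, key=fitness)  # never worse than initial_best (elitism)
```

The contrast with the Turing test is exactly that `fitness` here is a closed-form check against recorded data, whereas judging candidate conversational programs requires humans outside the loop.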
I think his claim is that a correct inference procedure will point right at the correct answer, but as I disagree with that point I am reluctant to ascribe it to him.
I agree with you.
My interpretation of that section is that Deutsch is claiming that “induction” is not a complete explanation. If you say “well, the sun rose every day for as long as I can remember, and I suspect it will do so today,” then you get surprised by things like “well, the year starts with 19 every day for as long as I can remember, and I suspect it will do so today.”
If the machine’s only inputs were ‘1990, 1991, 1992, …, 1999’, and it had no knowledge of math, arithmetic, language, or what years represent, then how on Earth could it possibly make any inference other than that the next date will also start with 19? There is no other inference it could make.
On the other hand, if it had access to the sequence ‘1900, 1901, 1902, …, 1999’, then it becomes a different story. It can infer that 1 always follows 0, 2 always follows 1, and so on, and that 0 always follows 9. It could also infer that when a digit wraps from 9 to 0, the digit to its left is incremented. Thus it can conclude that after 1999 the date 2000 is plausible, and add it to its list of highly plausible hypotheses. Another hypothesis could be that the first two digits are never affected, and that the next date after 1999 is 1900.
Equivalently, if it had already been told about math, it would know how number sequences work, and could say with high confidence that the next year will be 2000. Yes, going to school counts as ‘past experiences’.
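The two competing hypotheses in the ‘1900 … 1999’ scenario can be written out directly. This sketch just hard-codes both rules and checks that each one is consistent with every observed transition, yet they disagree about what follows ‘1999’:

```python
def odometer_next(s):
    # Hypothesis A: increment with carry -- 0 follows 9, and a 9->0
    # wrap increments the digit to its left.
    digits = list(s)
    i = len(digits) - 1
    while i >= 0:
        if digits[i] == '9':
            digits[i] = '0'
            i -= 1
        else:
            digits[i] = str(int(digits[i]) + 1)
            break
    return ''.join(digits)

def fixed_prefix_next(s):
    # Hypothesis B: the first two digits never change; only the last
    # two digits behave like an odometer.
    return s[:2] + odometer_next(s[2:])

observed = [str(y) for y in range(1900, 2000)]

# Both hypotheses fit every observed transition...
consistent_a = all(odometer_next(a) == b for a, b in zip(observed, observed[1:]))
consistent_b = all(fixed_prefix_next(a) == b for a, b in zip(observed, observed[1:]))
# ...but they diverge on the unseen case: '2000' versus '1900'.
```

A real learner would have to induce these rules from the data rather than have them hard-coded, but the point stands: the observed sequence alone cannot distinguish the two.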
It’s a common mistake that people make when talking about induction: they think induction is simply ‘X has always happened, therefore it will always happen’. But induction is far more complicated than that! That’s why it took so long to come up with a mathematical theory of induction (Solomonoff induction). Solomonoff induction considers all possible hypotheses (some of them extremely complex) and weighs them according to how simple they are and how well they fit the observed data. That is the very definition of science. Solomonoff induction could accurately predict the progression of dates, and could do ‘science’. People have implemented time-limited approximations of Solomonoff induction on a computer, and they work as expected. We do need to come up with faster and more efficient ways of doing this, though. I agree with that.
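The weighting scheme is easy to sketch. The following toy uses a hand-picked hypothesis space with made-up description lengths (real Solomonoff induction enumerates all programs, which is incomputable): each hypothesis gets prior weight 2^-length, hypotheses contradicted by the data get weight zero, and predictions are the normalized weighted survivors.

```python
# Each entry is (description_length_in_bits, predictor). Shorter
# descriptions get a larger prior, 2**-length. The lengths here are
# illustrative, not measured program sizes.
hypotheses = [
    (3, lambda n: 1999),             # "always 1999"
    (5, lambda n: 1990 + n),         # "count up from 1990"
    (8, lambda n: 1990 + n % 10),    # "count up, wrap back at 1999"
    (9, lambda n: 1990 + 2 * n),     # "count up by 2"
]

data = list(range(1990, 2000))  # observations for n = 0..9

def posterior(hyps, observations):
    # Prior 2**-length if the hypothesis fits every observation, else 0;
    # then normalize.
    weights = []
    for length, f in hyps:
        fits = all(f(n) == y for n, y in enumerate(observations))
        weights.append(2.0 ** -length if fits else 0.0)
    total = sum(weights)
    return [w / total for w in weights]

post = posterior(hypotheses, data)
# Weighted predictions for the next observation (n = 10):
prediction = {f(len(data)): p
              for (_, f), p in zip(hypotheses, post) if p > 0}
```

Only the two counting hypotheses survive; the simpler one ("count up") dominates, so 2000 gets most of the probability while the wrap-around hypothesis keeps a small share for 1990. That is the Occam-style trade-off between simplicity and fit, in miniature.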
I agree that there’s a lot more work to be done in AI. We need to find better learning and search algorithms. What I disagree with is that the work must be this kind of philosophical work that Deutsch is proposing. I think the work that needs to be done is very much engineering work.
Correct, but not helpful; when you say “just search,” that’s like saying “but Dark Matter is just physics.” The physicists don’t have a good explanation of Dark Matter yet, and the search people don’t have a good implementation of creativity (on the level of concepts) yet.
I agree that there’s a lot more work to be done in AI. We need to find better learning and search algorithms. What I disagree with is that the work must be this kind of philosophical work that Deutsch is proposing. I think the work that needs to be done is very much engineering work.
It is not obvious to me that Deutsch is familiar with ideas like Solomonoff induction, Pearl’s work on causality, and so on, and thinks that they’re inadequate to the task. He might be saying “we need a formalized version of induction” while unaware that Solomonoff already proposed one.
Search that is possibly guided by experience solving similar problems. By learning from past experiences, search becomes more efficient.
I agree that there’s a lot more work to be done in AI. We need to find better learning and search algorithms.
Why did I mention this at all? Because there’s no other way to do it. Creativity (coming up with new, unprecedented solutions to problems) must utilize some form of search, and because of the no-free-lunch theorem, there is no general shortcut to finding the solution to a problem. The only thing that can get around no-free-lunch is to consider an ensemble of problems; that is, to learn from past experiences.
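A minimal, fully deterministic illustration of that last step (a toy of my own, not the no-free-lunch theorem itself): on a single unknown problem, one candidate ordering is as good as another, but a bias learned from an ensemble of past problems from the same family pays off on the next one.

```python
def guesses_to_find(target, candidate_order):
    """Count oracle queries until the hidden target is found."""
    for i, guess in enumerate(candidate_order, start=1):
        if guess == target:
            return i
    return None

search_space = list(range(1000))

# An ensemble of previously solved problems: the hidden optimum has
# tended to lie near 100 (made-up past data).
past_optima = [97, 101, 104, 99, 102]
mean = sum(past_optima) / len(past_optima)

uninformed = search_space                                     # fixed sweep
informed = sorted(search_space, key=lambda g: abs(g - mean))  # learned bias

new_target = 103  # a fresh problem drawn from the same family
```

Averaged over *all* possible targets the two orderings would query equally often, which is the no-free-lunch point; the informed ordering wins only because the new problem resembles the past ensemble.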
And about your point:
It is not obvious to me that Deutsch is familiar with ideas like Solomonoff induction, Pearl’s work on causality, and so on, and thinks that they’re inadequate to the task.
I agree with this. That he didn’t mention Solomonoff at all, even in passing, despite devoting half the article to induction, is strongly indicative of this.
That doesn’t look helpful to me. Yes, you can define creativity this way but the price you pay is that your search space becomes impossibly huge and high-dimensional.
Defining sculpture as a search for a pleasing arrangement of atoms isn’t very useful.
After that sentence I made it clear what I mean. See my reply to Vaniver.