Interesting analysis. I hadn’t heard of Goodman before so I appreciate the reference.
In my view the problem of induction has been almost entirely solved by ideas from the statistical learning literature, such as VC theory, MDL, Solomonoff induction, and PAC learning. You might disagree, but if you want to convince people (especially an audience that is up to date on ML), you should probably explain why you find those ideas insufficient.
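To make one of those concrete: the standard two-part MDL rule (textbook MDL, not anything specific to the post under discussion) says to prefer the hypothesis that minimizes total description length,

$$H^{*} = \arg\min_{H \in \mathcal{H}} \big[\, L(H) + L(D \mid H) \,\big],$$

where $L(H)$ is the number of bits needed to encode the hypothesis and $L(D \mid H)$ the number of bits needed to encode the data given the hypothesis. Solomonoff induction is the idealized version of the same idea, with description lengths measured relative to a universal Turing machine. This is one formal cash-out of the intuition that more contrived hypotheses should lose.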
One particularly glaring limitation of Goodman’s argument is that it depends on natural language predicates (“green”, “grue”, etc.). Natural language is terribly ambiguous and imprecise, which makes it hard to evaluate philosophical statements about natural language predicates. You’d be better off casting the discussion in terms of computer programs that take a given set of input observations and produce an output prediction.
Of course you could write “green” and “grue” as computer functions, but it would be immediately obvious how much more contrived the program using “grue” is than the program using “green”.
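Here is a minimal sketch of that in Python (the wavelength bands and the cutoff date T are arbitrary placeholders, chosen purely for illustration): the “grue” predicate needs an extra input (the observation time) and an extra branch that the “green” predicate does not.

```python
from datetime import datetime, timezone

# Arbitrary cutoff time "t" from Goodman's definition; the specific date is a placeholder.
T = datetime(2030, 1, 1, tzinfo=timezone.utc)

# Rough wavelength bands in nanometres, chosen for illustration only.
GREEN_NM = (495, 570)
BLUE_NM = (450, 495)

def is_green(wavelength_nm: float) -> bool:
    """True iff the observed light falls in the green band."""
    return GREEN_NM[0] <= wavelength_nm < GREEN_NM[1]

def is_blue(wavelength_nm: float) -> bool:
    """True iff the observed light falls in the blue band."""
    return BLUE_NM[0] <= wavelength_nm < BLUE_NM[1]

def is_grue(wavelength_nm: float, observed_at: datetime) -> bool:
    """True iff the light is green and observed before T, or blue and observed at/after T."""
    return is_green(wavelength_nm) if observed_at < T else is_blue(wavelength_nm)
```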
In my view, “the problem of induction” is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty. This is true, but not very interesting. We should instead spend our energy thinking about how to make better predictions, and how we can evaluate how much confidence to have in our predictions. I agree with you that the fields you mention have made immense progress on that.
I am not convinced that computer programs are immune to Goodman’s point. AI agents have ontologies, and their predictions will depend on those ontologies. Two agents with different ontologies but the same data can reach different conclusions, and unless they have access to their source code, it is not obvious that they will be able to figure out which one is right.
Consider two humans who are both writing computer functions. Both the “green” and the “grue” programmer will believe that their perspective is the neutral one, and will therefore write a simple program that takes light wavelength as input and outputs a constant color predicate. The difference is that one of them will be surprised after time t, when the computer suddenly starts outputting colors that differ from its programmer’s experienced qualia. At that stage we know which of the programmers was wrong, but the point is that it might not be possible to predict this in advance.
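A minimal sketch of the symmetry being claimed here, in Python (the wavelength band, cutoff date, and labels are placeholders): each programmer writes the same trivial wavelength-to-predicate program in their own vocabulary, and the two programs agree, under translation, on every observation made before the cutoff, so no pre-t dataset by itself tells you which vocabulary will keep tracking the world.

```python
from datetime import datetime, timezone

T = datetime(2030, 1, 1, tzinfo=timezone.utc)  # the cutoff time t; the date is a placeholder

def classify_standard(wavelength_nm: float) -> str:
    """The 'green' programmer's program: wavelength in, constant predicate out."""
    return "green" if 495 <= wavelength_nm < 570 else "not-green"

def classify_gruespeak(wavelength_nm: float) -> str:
    """The 'grue' programmer's program: structurally identical, stated in their vocabulary."""
    return "grue" if 495 <= wavelength_nm < 570 else "not-grue"

def gruespeak_to_standard(label: str, observed_at: datetime) -> str:
    """Translate a grue-vocabulary label into the standard colour vocabulary."""
    if observed_at < T:
        return "green" if label == "grue" else "not-green"
    return "blue" if label == "grue" else "not-blue"

# On any observation made before T, the translated outputs coincide, so the two
# programs are observationally indistinguishable on pre-T data.
obs_time = datetime(2025, 6, 1, tzinfo=timezone.utc)
assert gruespeak_to_standard(classify_gruespeak(520.0), obs_time) == classify_standard(520.0)
```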
In my view, “the problem of induction” is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty.
Being able to make only probabilistic predictions (but understanding how that works) is one thing. Being able to make only probabilistic predictions, and not even understanding how that works, is another thing.
lmao epic, TAG. Somehow both you and I found this 5-year-old article & commented on it in the past 6 hours