In my view, “the problem of induction” is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty. This is true, but not very interesting. We should instead spend our energy thinking about how to make better predictions, and how to evaluate how much confidence to place in them. I agree with you that the fields you mention have made immense progress on that.
I am not convinced that computer programs are immune to Goodman's point. AI agents have ontologies, and their predictions will depend on those ontologies. Two agents with different ontologies but the same data can reach different conclusions, and unless they have access to their source code, it is not obvious that they will be able to figure out which one is right.
Consider two humans who are each writing a computer function. Both the “green” and the “grue” programmer will believe that their own perspective is the neutral one, and will therefore write a simple program that takes light wavelength as input and outputs a constant color predicate. The difference is that one of them will be surprised after time t, when the computer suddenly starts outputting colors that differ from that programmer's experienced qualia. At that stage, we know which one of the programmers was wrong, but the point is that it might not be possible to predict this in advance.
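The symmetry here can be made concrete with a short sketch. The Python below is purely illustrative (the names T_SWITCH, is_green, is_grue and the wavelength cut-offs are my own stand-ins, not anything from the thread): written in the green/blue ontology, the grue predicate carries an explicit time index, but a programmer whose primitives are grue/bleen would say exactly the same about is_green, and every observation made before t satisfies both predicates equally.

```python
# A minimal, hypothetical sketch of Goodman's predicates as code.
# All names and constants are illustrative choices, not part of the argument above.

T_SWITCH = 2_000_000_000  # stand-in Unix timestamp for Goodman's time t

def is_green(wavelength_nm: float) -> bool:
    """The 'green' ontology: a predicate over wavelength alone."""
    return 495 <= wavelength_nm <= 570

def is_blue(wavelength_nm: float) -> bool:
    return 450 <= wavelength_nm < 495

def is_grue(wavelength_nm: float, observed_at: float) -> bool:
    """'Grue' written out in the green/blue ontology: it looks time-indexed here,
    but a programmer whose primitive predicates are grue/bleen would write
    is_green in exactly the same time-indexed shape."""
    if observed_at < T_SWITCH:
        return is_green(wavelength_nm)
    return is_blue(wavelength_nm)

# Every observation made before T_SWITCH satisfies is_green iff it satisfies
# is_grue, so no dataset collected before t can tell the two programmers apart.
for t_obs in (1_000.0, 1_999_999_999.0):
    assert is_green(520.0) == is_grue(520.0, t_obs)
```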
In my view, “the problem of induction” is just a bunch of philosophers obsessing over the fact that induction is not deduction, and that you therefore cannot predict the future with logical certainty.
Being able to make only probabilistic predictions (but understanding how that works) is one thing. Being able to make only probabilistic predictions, and not even understanding how that works, is another thing.
lmao epic, TAG. somehow, both you and I found this 5 year old article & commented on it in the past 6 hours