I don’t quite think you’ve solved the problem of induction.
I think there’s a fairly serious issue with your claim that being able to predict something accurately means you necessarily fully understand the variables that cause it, on the grounds of determinism.
The first thing to note is that “perfect predictability implies zero mutual information” plays well with approximation: approximately perfect predictability implies approximately zero mutual information. If we can predict the sled’s speed to within 1% error, then any other variables in the universe can only influence that remaining 1% error. Similarly, if we can predict the sled’s speed 99% of the time, then any other variables can only matter 1% of the time. And we can combine those: if 99% of the time we can predict the sled’s speed to within 1% error, then any other variables can only influence the 1% error except for the 1% of sled-runs when they might have a larger effect.
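A quick numeric sanity check of that bound, as I understand it. The setup below is entirely hypothetical: a predictor X determines the outcome Y except on 1% of runs, where a hidden variable Z can flip it. The mutual information between Z and the prediction error is then bounded by the entropy of the error itself (≈0.045 bits at a 0.5% error rate), so Z can only “live in” the residual:

```python
import math
import random
from collections import Counter

random.seed(0)

def mutual_information(pairs):
    """Plug-in estimate of mutual information (bits) between two discrete variables."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab * log2(p_ab / (p_a * p_b)), with the n's folded in
        mi += p_ab * math.log2(c * n / (px[a] * py[b]))
    return mi

# X fully determines Y except on 1% of runs, where a hidden Z may flip it.
samples = []
for _ in range(100_000):
    x = random.randint(0, 1)
    z = random.randint(0, 1)
    y = x if random.random() < 0.99 else x ^ z
    samples.append((x, z, y))

acc = sum(y == x for x, z, y in samples) / len(samples)  # accuracy predicting Y from X alone
mi_e = mutual_information([(z, int(y != x)) for x, z, y in samples])  # MI(Z; prediction error)
print(f"accuracy from X: {acc:.3f}, MI(Z; error) = {mi_e:.4f} bits")
```

The printed MI comes out at a few thousandths of a bit, well under the ~0.045-bit entropy of the error indicator, which is the quantitative version of “other variables can only influence the remaining 1%.”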
That’s not really the case. E.g.: let’s say that ice cream melts twice as fast in galaxies without a supermassive black hole at the center. You do experiments to see how fast ice cream melts. After controlling for type of ice cream, ambient temperature, initial temperature of the ice cream, airflow, and air humidity, you find that you can predict how ice cream melts. You triumphantly claim that you know which things cause ice cream to melt at different rates, having completely missed the black hole’s effect.
Essentially, controlling for A & B but not C won’t tell you whether C has a causal influence on the thing you’re measuring unless either:
you intentionally change C between experiments (not practical given googolplexes of potential causal factors), or
C happens to naturally vary quite a bit and so makes your experimental results differ, cluing you in to the fact that you’re missing something.
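Here’s a minimal sketch of the failure mode, with made-up numbers (the melt-time law, the coefficients, and the hidden factor C are all hypothetical). Because C never varies across our experiments, a model fit on A and B alone looks perfect in the lab and silently bakes in the lab’s value of C:

```python
# Hypothetical melt-time law: depends on temperature A, humidity B, and a
# hidden factor C that never varies in our lab (e.g. which galaxy we're in).
def melt_time(a, b, c):
    return 10.0 - 0.2 * a + 0.05 * b + (5.0 if c == 0 else 0.0)

LAB_C = 1  # C is fixed for every experiment we can actually run

# The best model any amount of lab data can support: the true law with C
# frozen at its lab value, since C contributed zero variance to our data.
def fitted(a, b):
    return melt_time(a, b, LAB_C)

# In-sample: predictions are exact across every (A, B) condition we tried.
lab_errors = [abs(fitted(a, b) - melt_time(a, b, LAB_C))
              for a in range(0, 40, 5) for b in range(0, 100, 10)]

# Out of sample, where C differs: the model is off by C's full effect.
other_galaxy_error = abs(fitted(25, 50) - melt_time(25, 50, 0))

print(max(lab_errors), other_galaxy_error)  # → 0.0 5.0
```

Zero in-lab error, 5-unit error elsewhere: perfect predictability within the sampled conditions says nothing about variables that never varied.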
This seems like the topic of “accidentally controlling a variable” that the post discusses in the section titled “Replication”.
Absolutely a problem that happens.