Well, the i.i.d. assumption, or the CLT, or whatever variation you want to go to, is, in my opinion, rather pointless. It aims at modeling exactly the kind of simplistic, idealized system that doesn’t exist in the real world.
If you look at the most important real-world systems, from biological organisms to stock markets to traffic, the variables are basically the opposite of i.i.d.: they are all correlated, and worse, the correlations aren’t linear.
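To make that concrete, here is a throwaway NumPy sketch (my own toy numbers, not anything from the book under discussion): draw samples i.i.d. and from a heavily autocorrelated AR(1) process with the same marginal variance, and compare how much the sample mean concentrates in each case.

```python
# Toy comparison: how fast the sample mean concentrates for i.i.d. draws
# vs. correlated AR(1) draws. Illustrative only; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

def std_of_mean(sampler, n, trials=2000):
    # Empirical standard deviation of the sample mean across many trials.
    return float(np.std([sampler(n).mean() for _ in range(trials)]))

def iid(n):
    return rng.normal(size=n)

def ar1(n, rho=0.95):
    # AR(1) with noise scaled so the marginal variance is 1, same as the
    # i.i.d. case: the only remaining difference is the correlation.
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1 - rho**2) * rng.normal()
    return x

for n in (100, 1000):
    print(f"n={n:4d}  iid: {std_of_mean(iid, n):.3f}  "
          f"ar1: {std_of_mean(ar1, n):.3f}")
```

Both columns shrink like 1/sqrt(n), but the correlated one stays roughly 6x larger: the effective sample size drops by about (1 + rho) / (1 - rho), a factor of ~40 here. That is exactly why treating correlated data as i.i.d. makes you overconfident.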
You can model traffic all you want and try to drive in an “optimal” way, but then you will reach the “driving like an utter asshole” edge case, which has the unexpected result of “tailgating”.
You can model the immune system based on over 80 years of observations in an organism and try to tweak it just a tiny bit to help fight an infection, and the infinitesimal tweak will cause the “cytokine storm” edge case (which will never have been observed before, since it’s usually fatal).
Furthermore, the criticisms above don’t even mention the possibility that, again, you could have a process where all the variables are i.i.d. (or can be modeled as such), but you just happen to have missed some variables which are critical for some states but not for others. So you get a model that’s good for a large number of states and then overfits on the states where you are missing the critical variables (e.g. this is the problem with things like predicting earthquakes or the movement of the Earth’s magnetic pole).
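A toy illustration of that failure mode (synthetic data, my own construction): fit a model that never sees a hidden variable which only matters in a rare regime, and the error concentrates exactly on those rare states.

```python
# Sketch of the "missing critical variable" problem: a hidden z only
# matters when |x| is large (~1% of states). A fit without z looks great
# on average and fails precisely where z drives the outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)            # observed feature
z = rng.normal(size=n)            # unobserved ("missed") feature
rare = np.abs(x) > 2.5            # roughly 1% of states
y = 3.0 * x + np.where(rare, 10.0 * z, 0.0) + 0.1 * rng.normal(size=n)

# Fit using only the observed feature, as if nothing were missing.
slope, intercept = np.polyfit(x, y, 1)
err = np.abs(y - (slope * x + intercept))

print(f"mean abs error, common states: {err[~rare].mean():.2f}")
print(f"mean abs error, rare states:   {err[rare].mean():.2f}")
# Near-perfect ~99% of the time, roughly two orders of magnitude worse in
# the regime where the missing variable matters.
```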
All problems where i.i.d. reasonably fits the issue are:
a) Solved (e.g. predicting tides reasonably accurately)
b) Solvable from a statistical perspective, but suffering from data-gathering issues (e.g. a lot of problems in physics, where running the experiments is the hard part)
c) Boring and/or imaginary
Right. I just took issue with the “unsaid” part, because it makes it sound like the book makes statements that are untrue, when in fact it at worst makes statements that aren’t meaningful (“if this unrealistic assumption holds, then stuff follows”). You can call it pointless, but not silent, because, well, it’s not.
I’m of course completely unqualified to judge how realistic the i.i.d. assumption is, having never used ML in practice. I edited the paragraph you quoted to add a disclaimer that it is only true if the i.i.d. assumption holds.
But I’d point out that this is a textbook, so even if correlations are as problematic as you say, it is still a reasonable choice to present the idealized model first and then discuss ways to model correlations in the data later. No idea if this actually happens at some point.
This seems much too strong: lots of interesting unsolved problems can be cast as i.i.d. Video classification, for example, becomes i.i.d. if the distribution is over whole videos rather than individual frames.
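As a sketch of what “i.i.d. over videos” means in code (PyTorch-flavored, with a hypothetical load_video() helper standing in for real decoding, so treat the details as assumptions):

```python
# One sample = one whole video. Frames within a clip stay correlated, but
# that correlation lives *inside* each sample; across samples, videos are
# drawn i.i.d. from the video distribution.
import torch
from torch.utils.data import Dataset, DataLoader

def load_video(path):
    # Placeholder: a real implementation would decode frames from disk.
    # Returns a (num_frames, channels, height, width) tensor.
    return torch.zeros(16, 3, 224, 224)

class VideoDataset(Dataset):
    def __init__(self, paths, labels):
        self.paths, self.labels = paths, labels

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        # The unit of sampling is the entire video, not a frame.
        return load_video(self.paths[i]), self.labels[i]

# shuffle=True draws whole videos independently each epoch, so the usual
# i.i.d. train/test reasoning applies at the video level.
loader = DataLoader(VideoDataset(["a.mp4", "b.mp4"], [0, 1]),
                    batch_size=2, shuffle=True)
```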