If you’re using a radar gun that gives readings to the nearest MPH, for example, you won’t perceive a difference between 10.1 and 10.2 MPH, and so to you the two are equivalent.
As an aside, this is one of the reasons why some sensing systems deliberately inject random noise.
If it turns out that, for instance, your system’s actual states are always X.4 MPH, then a radar gun that rounds to the nearest MPH will read X every time, a systematic 0.4 MPH low bias. If, however, you inject ±0.5 MPH of random noise before the rounding, the readings scatter around the true value and the systematic bias disappears. (Of course, this requires repeated sampling to pick up on.)
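To make that concrete, here’s a minimal Python sketch of the idea (the always-X.4 state and the ±0.5 MPH noise are the hypothetical numbers from above, not a real sensor model):

```python
import random

def radar_reading(speed_mph):
    """The hypothetical radar gun: rounds to the nearest MPH."""
    return round(speed_mph)

true_speed = 10.4  # the system's actual state is always X.4 MPH

# Without dither: every reading rounds down to 10 -> a systematic -0.4 bias.
plain = [radar_reading(true_speed) for _ in range(100_000)]

# With +/-0.5 MPH of injected noise: individual readings get noisier,
# but their average converges on the true speed.
dithered = [radar_reading(true_speed + random.uniform(-0.5, 0.5))
            for _ in range(100_000)]

print(sum(plain) / len(plain))        # ~10.0 (biased)
print(sum(dithered) / len(dithered))  # ~10.4 (unbiased, given enough samples)
```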
But that is a very different thing than saying that those other variables simply don’t exist! One is a statement of probability, the other a statement of certainty. Maybe there’s a confluence of variables that occurs only once in a thousand runs, which you won’t pick up in an initial evaluation.
As an extreme example of that, consider:
f(x) → SHA512(x) = SHA512(someRandomLongConstant)
Under blackbox testing, this function is indistinguishable from f(x)→false.
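Here’s a sketch of that in Python with hashlib (`SECRET` stands in for someRandomLongConstant; the value shown is just a placeholder):

```python
import hashlib

SECRET = b"some-random-long-constant"  # placeholder for someRandomLongConstant
TARGET = hashlib.sha512(SECRET).hexdigest()

def f(x: bytes) -> bool:
    # True only when x hashes to the same digest as the secret constant.
    return hashlib.sha512(x).hexdigest() == TARGET

# Black-box probing: unless the tester happens to feed in SECRET itself,
# every observed output is False, so f is indistinguishable from lambda x: False.
print(f(b"anything you try"))  # False
print(f(SECRET))               # True -- but only if you already knew the secret
```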
It’s incorrect to say that billions of variables aren’t affecting a sled sliding down a hill; of course they’re affecting the speed, even if most do so only by a few Planck lengths per hour. But, crucially, most are not affecting it by a detectable amount. The detectability threshold is the key to the argument.
It is important to differentiate between billions of: uncorrelated variables, correlated variables, and anticorrelated variables. A grain of sand on the hill may not detectably influence the sled. A truckload of sand, on the other hand, will very likely do so.
You are correct in the case of uncorrelated variables with a mean of zero; it is interesting that, in the real world, almost all variables appear to fall into this category.
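A quick simulation of the sand example, under the toy assumption that each of N grains independently nudges the sled’s speed by ±ε: uncorrelated zero-mean contributions mostly cancel (the total grows like √N·ε), while correlated contributions, the truckload, add up linearly (N·ε):

```python
import math
import random

N = 1_000_000  # grains of sand
eps = 1e-6     # per-grain effect on speed (hypothetical units)

# Uncorrelated, zero-mean: each grain pushes +eps or -eps at random.
uncorrelated_total = sum(random.choice((eps, -eps)) for _ in range(N))

# Correlated: every grain pushes the same way (the truckload).
correlated_total = N * eps

print(f"uncorrelated: {uncorrelated_total:+.4f} (typical scale ~ {eps * math.sqrt(N):.4f})")
print(f"correlated:   {correlated_total:+.4f}")
```

A typical run lands the uncorrelated total near ±0.001 while the correlated total is a full 1.0: three orders of magnitude apart from the same per-grain effect.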