I suspect this distinction between “real” and “fake” numbers is blurrier than you are describing.
Consider voltage in classical physics. Differences in voltage are a real, measurable quantity. But the “absolute” voltage at a given point is a mathematical fiction.
Or consider Kolmogorov complexity. It’s only defined once you fix a specific Turing machine (which researchers rarely bother to do). And even then, it’s not computable. Is that a real number or a fake number?
The distinction might be blurry, but I don’t think it’s blurrier for that particular reason :-)
Sure, to measure voltage or K-complexity you need to choose a scale. But the same is true for mass (kilograms or pounds, related by a scaling factor), temperature (Celsius or Fahrenheit, related by a translation and scaling), spacetime coordinates (dependent on position and velocity of origin), etc. You just choose a scale and then you’re done. With a fake number, on the other hand, you don’t know how to measure it even if you had a scale.
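To make that concrete, those conversions are just fixed affine maps, e.g.:

$$ T_{\text{F}} = \tfrac{9}{5}\,T_{\text{C}} + 32, \qquad m_{\text{lb}} \approx 2.20462\, m_{\text{kg}} $$

Once you’ve picked the map, every measurement translates exactly.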
K-complexity isn’t really a matter of scale. Give me any program, and I can design a Turing machine on which that program’s description is a single symbol.
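A toy sketch of what I mean (the name `TARGET` and the one-symbol “!” convention are made up for illustration; this is not any standard universal machine):

```python
# Toy "machine": an interpreter whose description language reserves the
# single symbol "!" for one chosen target program. Under this machine,
# that program's K-complexity is 1.

TARGET = "print('hello world')"  # the program we choose to privilege

def run(description: str) -> None:
    if description == "!":    # one-symbol description of TARGET
        exec(TARGET)
    else:
        exec(description)     # any other description is plain Python

run("!")  # prints "hello world"; description length is 1 symbol
```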
For any two given Turing machines, you can find a constant such that the K-complexities of a program relative to each machine differ by at most that constant, but it’s not like they’re off by exactly that constant for every program. In general the gap varies from program to program, and there’s no single offset that works for all of them.
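For the record, this is the invariance theorem: for universal machines $U$ and $V$ there is a constant $c_{U,V}$, depending only on the pair of machines, with

$$ |K_U(x) - K_V(x)| \le c_{U,V} \quad \text{for all } x, $$

but the bound is an inequality, not an exact offset: the actual gap $K_U(x) - K_V(x)$ generally varies with $x$.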
Also, he gave two reasons (the choice of Turing machine, and uncomputability). You only addressed the first.
Yeah, I agree that K-complexity is annoyingly relative. If there were something more absolute that could do the same job, I’d adopt it without a second thought, because it would be more “true” and less “fake” :-) And I feel the same way about Bayesian priors, for similar reasons.
I feel like there’s a meaningful distinction here, but calling them ‘true’ and ‘fake’ smuggles in connotations that I don’t think are accurate.