The distinction might be blurry, but I don’t think it’s blurrier for that particular reason :-)
Sure, to measure voltage or K-complexity you need to choose a scale. But the same is true for mass (kilograms or pounds, related by a scaling factor), temperature (Celsius or Fahrenheit, related by a translation and scaling), spacetime coordinates (dependent on position and velocity of origin), etc. You just choose a scale and then you’re done. With a fake number, on the other hand, you don’t know how to measure it even if you had a scale.
K-complexity isn’t really a matter of scale. Give me a program, and I can design a Turing machine on which that program’s description is a single symbol.
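To make the trick concrete, here’s a toy sketch (my own illustration, not anything anyone in this thread wrote; `make_rigged_machine` and the encoding are made up): an interpreter that hard-codes one favorite program, so that program’s description collapses to a single symbol while everything else pays only a one-symbol penalty.

```python
def make_rigged_machine(favorite_program: str):
    """Build an interpreter under which `favorite_program` has a
    one-symbol description.

    Encoding (invented for this sketch): the description "0" decodes
    to the hard-coded favorite; "1" + p decodes to the program p
    itself, so every other program costs one extra symbol.
    """
    def interpreter(description: str) -> str:
        if description == "0":
            return favorite_program
        if description.startswith("1"):
            return description[1:]
        raise ValueError("not a valid description")
    return interpreter

# Under this "machine", the favorite program's complexity is 1,
# while any other program p has complexity at most len(p) + 1.
rigged = make_rigged_machine("print('hello, world')")
assert rigged("0") == "print('hello, world')"
assert rigged("1print('bye')") == "print('bye')"
```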
For any two universal Turing machines, you can find a constant such that the K-complexities they assign to any given program differ by at most that constant. But they’re not off by that constant exactly: the difference varies from program to program, and in general it’s impossible to find a single fixed offset that reconciles the two scales.
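Spelled out (this is just the standard invariance theorem, stated here for concreteness): for any two universal machines $U$ and $V$ there is a constant $c_{U,V}$ with

$$|K_U(x) - K_V(x)| \le c_{U,V} \quad \text{for all strings } x,$$

but this is only an inequality; the actual difference $K_U(x) - K_V(x)$ in general varies with $x$, which is why you can’t just subtract an offset and call the two machines the same scale.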
Also, he gave two reasons. You only talked about the first.
Yeah, I agree that K-complexity is annoyingly relative. If there were something more absolute that could do the same job, I’d adopt it without a second thought, because it would be more “true” and less “fake” :-) And I feel the same way about Bayesian priors, for similar reasons.