Perhaps asking that question wasn’t the best way to make my point. Let me try to be more explicit. Intuitively, “complexity” seems to be an absolute, objective concept, but every formalization of it we have so far contains a relativized core. In Bayesian updating, it’s the prior. In Kolmogorov complexity, it’s the choice of universal Turing machine. If we measure complexity by “simple math”, it’s the language we use to talk about math.
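To make the Kolmogorov case concrete (this is just the standard invariance theorem, with $K_U$ denoting prefix complexity relative to universal machine $U$): the theorem only fixes complexity up to a machine-dependent additive constant,

$$\forall x:\ \lvert K_U(x) - K_V(x)\rvert \le c_{U,V},$$

where $c_{U,V}$ is roughly the length of a program by which one machine simulates the other. The choice of $U$ is exactly the relativized core I mean: the theorem guarantees that the choice matters only up to that additive slack, but nothing in the formalism picks out a canonical $U$.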
This failure to pin down an objective notion of complexity causes me to question the intuition that complexity is objective. I’d probably change my mind if someone came up with a “reasonable” formalization that’s not “relative to something.”