I spent some time thinking about your question and cannot give an answer until I understand better what you mean by "objective" vs. "arbitrary".
The concept of complexity looks objective enough in the mathematical sense. Then, if I understand you correctly, you take a step back and say that mathematics itself (including logic, I presume?) is a random concept, so other beings could have wildly different “foomatics” that they find completely clear and intuitive. With the standards thus raised, what kind of argument could ever show you that something is “objective”? This isn’t even the problem of induction, this is… I’m at a loss for words. Why do you even bother with Tegmark’s multiverse then? Why not say instead that “existence” is a random insular human concept, and our crystalloid friends could have a completely different concept of “fooistence”? Where’s the ground floor?
Here’s a question to condense the issue somewhat. What do you think about Bayesian updating? Is it “objective” enough?
Perhaps asking that question wasn’t the best way to make my point. Let me try to be more explicit. Intuitively, “complexity” seems to be an absolute, objective concept. But all of the formalizations we have of it so far contain a relativized core. In Bayesian updating, it’s the prior. In Kolmogorov complexity, it’s the universal Turing machine. If we use “simple math”, it would be the language we use to talk about math.
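To make the prior-dependence concrete, here's a minimal sketch of Bayesian updating on a toy coin problem. The hypothesis names, priors, and data are invented for illustration; the point is just that the same evidence pushed through two different priors yields different posteriors, so "what the evidence shows" is relative to where you started.

```python
from fractions import Fraction

def update(prior, likelihoods, data):
    """Posterior P(h | data) for each hypothesis h, via Bayes' rule."""
    post = dict(prior)
    for x in data:
        post = {h: post[h] * likelihoods[h](x) for h in post}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

# Two hypotheses about a coin: fair, or biased toward heads (P(H) = 3/4).
likelihoods = {
    "fair":   lambda x: Fraction(1, 2),
    "biased": lambda x: Fraction(3, 4) if x == "H" else Fraction(1, 4),
}
data = ["H", "H", "H", "T"]

# Same evidence, two different priors:
uniform   = {"fair": Fraction(1, 2),   "biased": Fraction(1, 2)}
skeptical = {"fair": Fraction(99, 100), "biased": Fraction(1, 100)}

print(update(uniform, likelihoods, data))    # biased becomes the favorite
print(update(skeptical, likelihoods, data))  # biased stays negligible
```

With the uniform prior, four flips are enough to make "biased" the leading hypothesis; with the skeptical prior, the same four flips leave it at under 2%. Nothing in the update rule itself tells you which starting point was the right one.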
This failure to pin down an objective notion of complexity causes me to question the intuition that complexity is objective. I’d probably change my mind if someone came up with a “reasonable” formalization that’s not “relative to something.”
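The machine-relativity in the Kolmogorov case can be shown with a deliberately crude sketch. These are not real universal machines, just two invented description languages, each with one compression primitive plus a literal fallback; the function names and token costs are made up. The same pair of strings comes out ranked in opposite orders of "simplicity" depending on which language you measure in.

```python
def desc_len_char(s):
    """Language 1: a 'repeat one character n times' primitive, else a literal."""
    if len(set(s)) == 1:
        return 3            # opcode + character + count
    return 1 + len(s)       # literal opcode + raw string

def desc_len_pair(s):
    """Language 2: a 'repeat a two-character pattern n times' primitive, else a literal."""
    if (len(s) >= 2 and len(s) % 2 == 0
            and s[0] != s[1]
            and s == s[:2] * (len(s) // 2)):
        return 4            # opcode + two characters + count
    return 1 + len(s)

a_run  = "a" * 100   # 100 copies of one character
ab_run = "ab" * 50   # 50 copies of a two-character pattern

print(desc_len_char(a_run), desc_len_char(ab_run))  # language 1: a_run is "simpler"
print(desc_len_pair(a_run), desc_len_pair(ab_run))  # language 2: ab_run is "simpler"
```

The invariance theorem only promises the two measures agree up to an additive constant; it never picks out one reference machine as the non-arbitrary one, which is exactly the relativized core in question.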