Sorry, I misparsed your comment and gave a wrong answer, which I then deleted.
Your original comment was trivially correct, and my reply missed the point. We can never justify our concept of complexity by thinking like that—linguistically—because this would be like trying to justify our prior with our prior, “a priori”. If my prior is based on complexity and Bob’s prior is based on foobity (religion, or whatever), we will find each other’s priors weird. So if you ask whether all imaginable creatures have to use our concept of complexity, the easy answer is no. Instead we look at the outside world and note that our brand of razor seems to work. When it doesn’t (religion, or whatever), we update it. Is there any other aspect to your question that I missed?
Let’s call our brand of razor, together with the algorithm we use to update it (using what we see from the outside world), our “meta-razor”. Now, is this “meta-razor” just a kind of “foobity”, i.e., an arbitrary notion that we just happen to have, or is there something objective about it?
I spent some time thinking about your question and cannot give an answer until I understand better what you mean by “objective” vs. “arbitrary”.
The concept of complexity looks objective enough in the mathematical sense. Then, if I understand you correctly, you take a step back and say that mathematics itself (including logic, I presume?) is a random concept, so other beings could have wildly different “foomatics” that they find completely clear and intuitive. With the standards thus raised, what kind of argument could ever show you that something is “objective”? This isn’t even the problem of induction; this is… I’m at a loss for words. Why do you even bother with Tegmark’s multiverse, then? Why not say instead that “existence” is a random insular human concept, and our crystalloid friends could have a completely different concept of “fooistence”? Where’s the ground floor?
Here’s a question to condense the issue somewhat. What do you think about Bayesian updating? Is it “objective” enough?
Perhaps asking that question wasn’t the best way to make my point. Let me try to be more explicit. Intuitively, “complexity” seems to be an absolute, objective concept. But all of the formalizations we have of it so far contain a relativized core. In Bayesian updating, it’s the prior. In Kolmogorov complexity, it’s the universal Turing machine. If we use “simple math”, it would be the language we use to talk about math.
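To make the “relativized core” concrete, here are the standard formulations (nothing here beyond textbook definitions): Bayes’ rule takes the prior P(H) as an unexplained input, and Kolmogorov complexity is defined only relative to a chosen universal Turing machine U:

$$ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}, \qquad K_U(x) = \min \{\, |p| : U(p) = x \,\}. $$

The update rule constrains how the prior changes but says nothing about where it comes from, and nothing in the definition of K_U picks out a canonical U.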
This failure to pin down an objective notion of complexity causes me to question the intuition that complexity is objective. I’d probably change my mind if someone came up with a “reasonable” formalization that’s not “relative to something”.
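For reference, the standard partial answer here is the invariance theorem: for any two universal machines U and V there is a constant c_{U,V}, independent of x, such that

$$ \bigl| K_U(x) - K_V(x) \bigr| \le c_{U,V} \quad \text{for all } x. $$

So switching machines shifts complexities by at most an additive constant; but that constant can be made arbitrarily large by a perverse choice of machine, so this bounds the relativity without eliminating it.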