The phrase “the map is not the territory” is not just a possibly conceivable map, it’s part of my map.
Thinking in terms of programming, it’s vaguely like having a class instance s in which one of the members, p, is a pointer to the instance itself. So I can write *(s.p) == s, or go further and write *((*(s.p)).p) == s, and so on, as far as I want, with only the tools offered to me by my current map.
Did you intend to answer the above question? If so, I don’t quite follow your programming analogy.
The meta-level is pure logical reasoning over universal truths. Since it does not depend in any way on the kinds of maps and territories you encounter, it can be established beforehand. If the meta-level is sufficiently expressive (with appropriate reflective capacity), then you are all set.
Note that the OP does not say anything whatsoever about specific maps and territories, but rather reasons in the generic realm of universal truths. Think of a logical theory powerful enough to capture the OP argument and reason about whether the OP argument is true, and also able to express and reason about my argument about it, etc. That’s your ultimate meta-level. The insight is that when you have a [countably] infinite tower of expanding logical theories, you can take their union as your ultimate logical theory. Theoretically, this never stops (you can then take another tower over the union, repeat that process itself infinitely many times, etc, and at any step you can take a union of those), but you quickly run out of things that you’d ever care about in practice.
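One standard way to make the tower-and-union construction precise (this particular formalization, via consistency statements, is my illustration rather than something the comment specifies) is:

\[
T_0 \subseteq T_1 \subseteq T_2 \subseteq \cdots,
\qquad
T_{n+1} = T_n + \mathrm{Con}(T_n),
\qquad
T_\omega = \bigcup_{n < \omega} T_n,
\]

where each \(T_{n+1}\) extends \(T_n\) with the assertion of its consistency, and \(T_\omega\) is the union of the whole tower. The process then restarts: \(T_{\omega+1} = T_\omega + \mathrm{Con}(T_\omega)\), and unions can be taken again at every limit stage, which is why it theoretically never stops.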
‘Established’ by whom?
(If ‘established’ by yourself, haven’t you just replaced an infinite regress with circularity?)
Kind of, and it’s unavoidable—by definition, you cannot justify your own axioms, and you cannot reason without some axioms. See https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection and https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom for some meta-justification of why it is an OK thing to do.
Axioms must necessarily be consensual, i.e. shared by 2 or more interlocutors. If everyone invents their own personal axioms, they cease to be axioms and are just personal opinions.
What are the axioms that you believe are commonly shared, that lead to this conclusion?
That rarely happens in practice, at least not without a lot of work for people to sync up their intuitions and agree on a common subset.
Not sure such a set exists. But see the links I shared to Eliezer’s posts expressing the axioms he is /hoping/ would be commonly shared.
“Beforehand” in which sense? Any sufficiently powerful consistent logical theory is necessarily incomplete—you need to know what kinds of logical inferences you care to be making in order to pick the appropriate logical theory.
In the sense of how you attained this capacity to assess the threshold level.