I probably shouldn’t talk about concepts of which I know almost nothing, especially if I don’t even know the agreed-upon terminology for them. The reason I replied to your initial comment about the “outside world” was that I felt reasonably sure that the concept called a quine does not enable any agent to feature a map that equals the territory, not even in principle. And that is what made me believe that there is an “outside world” for any agent that is embedded in a larger world.
As far as I know, a quine can be seen as an artifact of a given language rather than a complete and consistent self-reference. Every quine is missing some of its own definition; constructs like “when preceded by” or “print” need external interpreters to work as intended.
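To make that concrete, here is the standard quine trick in Python (my own illustrative choice of language, not anything from this discussion). The program prints its own source, but nothing in that source defines what print, %, or the Python interpreter itself do; all of that machinery stays outside the text being reproduced (and the comment line below is not part of that text either):

```python
# A minimal Python quine: the string is both data and the template for the program.
s = 's = %r\nprint(s %% s)'
print(s % s)
```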
You wrote that you can pack a whole Linux distribution “in there”. I don’t see how that gets around the problem, though; maybe you can elaborate on it. Even if the definitions of all functions were included in the definition of the quine, only the mechanical computation, the actual data processing done at the hardware level, “enables” the Linux kernel. You could in theory extend your quine until you have a self-replicating Turing machine, but in the end you will either have to resort to mathematical Platonism or run into problems like the low-entropy beginning of the universe.
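Here is a hedged sketch of what I understand “packing a distribution in there” to mean: the quine simply carries the extra material along as inert data. The names payload and template are mine, and the payload string is a placeholder rather than real content; the comments are likewise not part of the text the program reproduces.

```python
# Hypothetical payload-carrying quine: 'payload' stands in for "a whole Linux
# distribution", but it is just bytes the program drags along and re-prints.
payload = '...imagine an entire Linux distribution encoded here...'
template = 'payload = %r\ntemplate = %r\nprint(template %% (payload, template))'
print(template % (payload, template))
```

However large the payload grows, the rules that make the % substitution and print behave as intended never appear inside the reproduced text, which is the gap I was pointing at.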
For example, once your map of the territory became an exact copy of the territory, it would miss the fact that the territory is now made up of itself and a perfect copy. And once you incorporated that fact into your map, the map would be missing the fact that the difference between itself and the territory is the knowledge that there is a copy of it. I don’t see how you can sidestep this problem, even if you accept mathematical Platonism.
Unless there is a way to sidestep that problem, there is always an “outside world” (I think that for all practical purposes this is true anyway).
Let’s say there are two agents, A and B, that try to defeat each other. If A were going to model B to predict its next move, A(B), and vice versa, B(A), then each of them would have to update on the fact that they are simulating each other, A(B(A)) and B(A(B)). That they will have to simulate themselves as well, A(A(B(A)), B(A)), only highlights the ultimate problem: they are going to remain ignorant of some facts in the territory (the “outside world”). Among other reasons, the observer effect does not permit them to attain certainty about the territory (including themselves). If nothing else, logical uncertainty will remain.
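A crude sketch of that regress in Python (my own illustration; the two agents and the naive “simulate the opponent all the way down” strategy are assumptions, not something from the discussion): each prediction calls the other, nothing ever bottoms out, and the runtime eventually gives up.

```python
# Two agents that each predict the other by fully simulating it.
# The mutual recursion A(B(A(...))) never terminates, so Python aborts
# with a RecursionError -- a stand-in for the claim that neither agent
# can hold a complete model of the pair (itself plus its opponent).

def predict_A():
    # A's move depends on what A thinks B will do.
    return "counter to " + predict_B()

def predict_B():
    # B's move depends on what B thinks A will do.
    return "counter to " + predict_A()

try:
    predict_A()
except RecursionError:
    print("neither simulation bottoms out")
```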
But as you probably know, I still lack the math and general background knowledge to tell whether what I am saying makes any sense; I might simply be confused. But I have a strong intuition that it is not completely wrong.
Related: Ken Thompson’s “Reflections on Trusting Trust” (his site, Wikipedia#Reflections_on_Trusting_Trust), Richard Kennaway’s comment on “The Finale of the Ultimate Meta Mega Crossover”.
That is my own pet theory of “god.” That the complexity of the operation of the universe requires something at least as complex as the universe to understand it, and the only real candidate when it comes down to it is the universe itself. Either in some sense the universe comprehends itself, or it doesn’t. (Either the universe is a conscious universe, or it is a p-zombie universe.)
I suspect we are more than a few steps away from making a map of this part of the territory that is helpful beyond providing good beer conversations and maybe some poetry.
I don’t get your point; I know that you can do that.
Okay. Unpack the reason for saying that.
This is what I meant.
Does the territory “know” the territory?