Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the world out there.
Since the map describes itself as well, not just the parts of the world other than the map, and being able to reason about the relationship of the map and the world is crucial in the context of epistemic rationality, I object to including the “out there” part in the quoted sentence. The map in your head should accurately reflect the world, not just the part of the world that’s “out there”.
I suppose. Fixed.
I don’t think “out there” is meant to exclude the map itself; it’s metaphorical language.
But it can be taken as meaning to exclude the mind. I’m clearly not arguing with Luke’s intended meaning, so the intended meaning is irrelevant to this issue; only the possible interpretations of the text as written matter.
(Nods and shrugs) Is there a way to make the point both accurately and simply? The whole thing is a mess of recursive reference.
Since the map describes itself as well, not just the parts of the world other than the map...
A bin trying to contain itself? I generally agree with your comment, but there are limits: no system can understand itself, because that very understanding would evade itself forever.
Ahh, don’t say “understanding” when you mean “containing a simulation”!
It’s true that a computer capable of storing n bits can’t contain within it a complete description of an arbitrary n-bit computer. But that’s not fundamentally different from being unable to store a description of the 3^^^3 × n-bit world out there (the territory will generally always be bigger than the map); and of course you don’t have to have a miniature bit-for-bit copy of the territory in your head to have a useful understanding of it, and the same goes for self-knowledge.
(Of course, regardless of all that, we have quines anyway, but they’ve already been mentioned.)
Could you elaborate on the difference between “understanding” and “simulating”? How are you going to get around logical uncertainty?
The naive concept of understanding includes everything we’ve already learned from cognitive psychology and the other sciences of the brain. Knowing, for example, that the brain runs on neurons with certain activation functions is useful even if you don’t know the specific activation states of all the neurons in your brain, as is a high-level algorithmic description of how our thought processes work.
This counts as part of the map that reflects the world “inside our heads”, and it is certainly worth refining.
In the context of a computer program or AI, such “understanding” would include the AI inspecting its own hardware and its own source code, whether by reading it from disk or by esoteric quining tricks. An intelligent AI could make useful inferences from the content of the code itself -- without having to actually run it, which is what would constitute “simulation” and run into all the paradoxes of not having enough memory to contain a running version of itself.
“Understanding” is then usually partial, but still very useful. “Simulating” is precise and essentially complete, but usually computationally intractable (and occasionally impossible), so we rarely try to do that. You can’t get around logical uncertainty, but that just means you’ll sometimes have to live with incomplete knowledge, and it’s not as if we weren’t resigned to that anyway.
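To make the distinction concrete, here is a minimal Haskell sketch of the “inspect, don’t simulate” idea; the file name Inspect.hs and the facts it extracts are just assumptions for the example:
-- Inspect.hs: read (but never run) your own source and derive a few crude facts about it.
import Data.List (isPrefixOf, tails)
main :: IO ()
main = do
  src <- readFile "Inspect.hs"   -- assumed file name; inspection, not simulation
  putStrLn $ "My source is " ++ show (length (lines src)) ++ " lines long."
  putStrLn $ "It mentions readFile " ++ show (count "readFile" src) ++ " times."
-- count occurrences of a substring
count :: String -> String -> Int
count needle = length . filter (needle `isPrefixOf`) . tails
The facts it derives are shallow, but it gets them without ever executing the code it is reading.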
The “map” is NEVER complete. So our map of the map is an incomplete map of the map. In engineering terms, the remarkable feature of the human mind’s gigantically oversimplified map of the world, both inside it and around it, is that it is as effective as it is at changing the world.
On the other hand, since we are not exposed to anything with a better map than ours, it is difficult to know what we are missing. Must be a cognitive bias or three in there as well.
More accurately, the map should worry about mapping its future states, to plan the ways of setting them up to reflect the world, and have them mapped when they arrive, so that knowledge of them can be used to update the map (of the world further in the future, including the map, and more generally of relevant abstract facts).
(More trivially, there are quines, programs that know their own full source code, as you likely know.)
I don’t think being able to quine yourself really has anything to do with fully understanding yourself. I could get a complete printout of the exact state of every neuron in my brain; that wouldn’t give me full understanding of myself. To do something useful with the data, I’d need to perform an analysis of it at a higher level of abstraction. A quine provides the raw source code that can be analyzed, but it does no analysis by itself.
“The effects of untried mutations on fifteen million interacting shapers could rarely be predicted in advance; in most cases, the only reliable method would have been to perform every computation that the altered seed itself would have performed… which was no different from going ahead and growing the seed, creating the mind, predicting nothing.” (Greg Egan).
In any case, if we are talking about the brains/minds of baseline unmodified humans, as we should be in an introductory article aimed at folks outside the LW community, then XiXiDu’s point is definitely valid. Ordinary humans can’t quine themselves, even if a quine could be construed as “understanding”.
His point is not valid, because it doesn’t distinguish the difficulty of self-understanding from that of understanding the world-out-there, as nshepperd points out. (There was also this unrelated issue where he didn’t seem to understand what quines can do.) A human self-model would talk about abstract beliefs first, not necessarily connecting them to the state of the brain in any way.
I don’t? Can you elaborate on what exactly I don’t understand? Also, “self” is a really vague term.
More trivially, there are quines, programs that know their own full source code, as you likely know.
Something has to interpret the source code (e.g. “print”). The map never equals the territory completely. At some point you’ll get stuck; any self-replication is based on outside factors, e.g. the low entropy at the beginning of the universe.
main = putStrLn $ (\x -> x ++ show x)"main = putStrLn $ (\\x -> x ++ show x)"
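(How it works, roughly: the string literal is the text of the program up to the literal itself, and show re-renders that string with its quotes and escapes, so prefix ++ show prefix reproduces the whole line. Saved as, say, Quine.hs, runghc Quine.hs should print the file’s own text back; the file name is only an assumption here.)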
The quine does not include the code for the function “putStrLn”.
The quine does not include the code for the function “putStrLn”.
It could. You can pack a whole Linux distribution in there; it won’t cause any problems in principle (if we ignore the technical issue of having to deal with a multi-gigabyte source file).
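As a sketch of what “it could” would mean, here is a two-line variant of the quine above that drags an extra definition along with it; the extra line is a stand-in for however much additional code you care to include (no comments inside, since the quine would have to reproduce those too):
greet = putStrLn "hi"
main = putStrLn $ (\x -> x ++ show x) "greet = putStrLn \"hi\"\nmain = putStrLn $ (\\x -> x ++ show x) "
Note that greet never even has to run; it just has to be reproduced, and the string takes care of that.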
I don’t get your point; I know that you can do that.
Okay. Unpack the reason for saying that.
This is what I meant. I probably shouldn’t talk about concepts of which I know almost nothing, especially if I don’t even know the agreed-upon terminology for them. The reason I replied to your initial comment about the “outside world” was that I felt reasonably sure that the concept called a quine does not enable any agent to feature a map that equals the territory, not even in principle. And that’s what made me believe that there is an “outside world” to any agent that is embedded in a larger world.
As far as I know, a quine can be seen as an artifact of a given language rather than a complete and consistent self-reference. Every quine is missing some of its own definition; e.g. “when preceded by” or “print” need external interpreters to work as intended.
You wrote that you can pack a whole Linux distribution “in there”. I don’t see how that gets around the problem, though; maybe you can elaborate on it. Even if the definitions of all functions were included in the definition of the quine, only the mechanical computation, the actual data processing done at the hardware level, “enables” the Linux kernel. You could in theory extend your quine until you have a self-replicating Turing machine, but in the end you will either have to resort to mathematical Platonism or run into problems like the low-entropy beginning of the universe.
For example, once your map of the territory became an exact copy of the territory, it would miss the fact that the territory is now made up of itself plus a perfect copy. And once you incorporated that fact into your map, your map would be missing the fact that the difference between itself and the territory is the knowledge that there is a copy of it. I don’t see how you can sidestep this problem, even if you accept mathematical Platonism.
Unless there is a way to sidestep that problem, there is always an “outside world” (I think that for all practical purposes this is true anyway).
Let’s say there are two agents, A and B, that try to defeat each other. If A were going to model B to predict its next move, A(B), and vice versa, B(A), then each of them would have to update on the fact that they are simulating each other, A(B(A)) and B(A(B)). That they will have to simulate themselves as well, A(A(B(A)), B(A)), only highlights the ultimate problem: they are going to remain ignorant of some facts in the territory (the “outside world”); the toy sketch below tries to picture this regress. Among other reasons, the observer effect does not permit them to attain certainty about the territory (including themselves). If nothing else, logical uncertainty will remain.
But as you probably know, I still lack the math and general background knowledge to tell if what I am saying makes any sense; I might simply be confused. But I have a strong intuition that what I am saying might not be completely wrong.
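A toy Haskell picture of the A-and-B regress (all the names here are made up for illustration): laziness lets us write the mutual models down, but we can only ever unfold them to a finite depth.
-- Each agent’s model of the other contains a model of itself, and so on without end.
data Agent = Agent { name :: String, modelOfOther :: Agent }
agentA, agentB :: Agent
agentA = Agent "A" agentB
agentB = Agent "B" agentA
-- Unfolding the regress to some finite depth is all we can ever do.
regress :: Int -> Agent -> String
regress 0 _ = "..."
regress n a = name a ++ " models " ++ regress (n - 1) (modelOfOther a)
main :: IO ()
main = putStrLn (regress 6 agentA)   -- "A models B models A models B models A models B models ..."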
Does the territory “know” the territory?
Related: Ken Thompson’s “Reflections on Trusting Trust” (his site, Wikipedia), Richard Kennaway’s comment on “The Finale of the Ultimate Meta Mega Crossover”.
That is my own pet theory of “god.” That the complexity of the operation of the universe requires something at least as complex as the universe to understand it, and the only real candidate when it comes down to it is the universe itself. Either in some sense the universe comprehends itself, or it doesn’t. (Either the universe is a conscious universe, or it is a p-zombie universe.)
I suspect we are more than a few steps away from making a map of this part of the territory that is helpful beyond providing good beer conversations and maybe some poetry.
Every program runs on some kind of machine, be it an Intel processor, an abstract model of a programming language, or the laws of the universe. A program can know its own source code in terms it can execute, i.e. commands that are understood by the interpreter.
But I am not sure what point you are trying to make exactly in the above comment.
Vladimir Nesov was criticizing lukeprog’s phrase “world out there”, claiming that the map in your head should accurately reflect the world, not just the part of the world that’s “out there”. I agree, but if you are accurate then you have to admit that it isn’t completely possible to do so.