There was a post a long time ago from Eliezer that I cannot find (edit: thank you, Plasmon!) with a quick search of the site. He listed a set of characteristics (“blue”, “is egg-shaped”) with a label for those characteristics in the center (“blegg!”), and drew two graphs. One graph, which is the native description in most minds, has a node at the center for the label (blegg!) with nodes radiating out (something is blue iff it is a blegg iff it is egg-shaped); the other graph has no label node (something could be blue or egg-shaped; if both, it might be a blegg!).
It seems to me that you conceive of maps as being the structural units of the universe. That is, there are a bunch of “map” nodes and no central territory node.
I have, in the past, felt a great sympathy for this idea. But I no longer subscribe to it.
There is one way in which this conception is simpler: it contains fewer nodes! One node per mapmaker, rather than all of those plus one for the territory. It also has the satisfying deep-wisdom flavor that Louie discusses in his first point.
There are several ways in which it is less simple. It has FAR more connections: roughly n(n-1)/2 if every pair of mapmakers connects, rather than n for the version with a central territory node. Even if not all mapmakers interact, it has around n*i/2 connections, where i is the average number of interactions per mapmaker. That is still WAY MORE.
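The edge-count comparison can be checked with a quick sketch (a toy illustration of mine, not part of the original discussion; the function names are my own):

```python
# Toy edge counts for the two graph shapes discussed above.
# Hub-and-spoke: each of n map nodes links only to one central
# territory node, giving n edges (plus the one extra node).
# Fully pairwise: every pair of map nodes links directly,
# giving n*(n-1)/2 edges, which grows quadratically.

def hub_edges(n: int) -> int:
    """Edges when n map nodes each link to one territory node."""
    return n

def pairwise_edges(n: int) -> int:
    """Edges when every pair of n map nodes links directly."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, hub_edges(n), pairwise_edges(n))
```

Even at modest n, the pairwise graph pays for its one saved node with far more connections.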
Also, it requires that we have a strong sense of who the mapmakers are; that is, we have to draw a little circle around wherever the maps are. This seems like a very odd, very complicated, not very materialist proposition, one with the same flaws as the Copenhagen interpretation.
It seems to me that you conceive of maps as being the structural units of the universe.
Boy, did I fail to communicate! No, that is not how I conceive of maps. “Structural units of the universe” sounds more like territory to me. You and I seem to have completely diverging understandings of what those neural net diagrams were about as well.
I think of maps as being things like Newton’s theory of gravitation, QED, billiard-ball models of kinetic theory, and the approximation of the US economy as a free market. Einstein’s theory of gravitation is a better map than Newton’s; MWI and CI are competing maps of the same territory.
Your final paragraph signals to me that we are not likely to succeed in communicating.
I am not even thinking of a materialistic conception of maps being embedded in brains—as a part of the territory somehow representing the territory. I am perfectly happy maintaining a Cartesian dualism—thinking of maps as living in minds and of territory as composed of a different kind of substance altogether.
Your final paragraph establishes its point well; I agree we will not end up seeing eye to eye on this matter. Out of curiosity, though, can you tell me how you go about finding out where mind-stuff is?
Is it in every human brain? When is it put there? Is it in a monkey brain? A chimpanzee brain? An octopus brain? Would it be in an em computation? Do you believe in p-zombies?
It occurs to me that this is probably coming off as more hostile than I intend. I used to have a sense of dualism, but the fact that there were questions about it I did not know how to answer turned me off it. I am curious whether you answered these questions or set them aside; this is not meant as criticism.
I am really badly failing to communicate today. My fault, not yours. No, I am not asserting Cartesian dualism as a theory about the true nature of reality. I am a monist, a materialist. And in a sense, a reductionist. But not a naive one who thinks that high-level concepts should be discarded in favor of low level ones as soon as possible because they are closer to the ‘truth’.
Yes, those were scare quotes around the word ‘truth’. But the reason I scare-quote the word is not that I deny that truth exists. Of course the word has meaning. It is just that neither I nor anyone else can provide and justify any operational definition. We don’t know the truth. We can’t perceive the territory. We can only construct maps, talk about the maps we have created with other people, and evaluate the maps against the sense impressions that arrive at our minds.
Now all of this takes place in our minds. Minds, not brains. We need to pretend to a belief in dualism in order to even properly think the thought that the map is not the territory. Cartesian dualism is not a mistake. Any more than Newtonian physics is a mistake. When used correctly it enables you to understand what is happening.
No doubt this will have been another failure to communicate. Maybe I’ll try again someday.
Okay, this is much better, and different from what I had thought you were saying.
When you say “we” and “minds” you are getting at something; here is my attempt to see if I’ve understood:
Given an algorithm which models both itself (something like a mind, but not so specific; taboo “mind”) and its environment, that algorithm must recognize the difference between its model of the environment, which is filtered through its I/O devices of whatever form, and the environment itself.
The algorithm should also recognize that the information contained in the environment may be in a different format from the information contained in its model (a dualism of a sort), and that the model is optimized for prediction rather than for truth.
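A minimal sketch of that idea (the names, dynamics, and structure here are my own illustration, not anything from the discussion): the “territory” is a hidden numeric process, the agent’s “map” is a completely different data structure (one learned parameter), observations pass through a coarse I/O channel, and the map is judged only by how well it predicts.

```python
# Toy world: the "territory" is a hidden numeric process.
def environment_step(t: int) -> float:
    return 0.5 * t  # the true dynamics, unknown to the agent

class Agent:
    """An agent whose internal model is a different format from
    the environment: a single learned slope parameter."""

    def __init__(self) -> None:
        self.slope = 0.0  # the map, not the territory

    def observe(self, value: float) -> int:
        # I/O channel: the agent never sees the raw value,
        # only a coarse integer reading.
        return round(value)

    def update(self, t: int, reading: int) -> None:
        if t > 0:
            # crude online estimate of the slope from the reading
            self.slope += 0.5 * (reading / t - self.slope)

    def predict(self, t: int) -> float:
        return self.slope * t

agent = Agent()
for t in range(1, 50):
    reading = agent.observe(environment_step(t))
    agent.update(t, reading)

# The map is scored on prediction, not on resembling the
# territory's internal representation.
error = abs(agent.predict(100) - environment_step(100))
print(round(error, 2))
```

The agent’s single float never “contains” the environment’s rule; it only needs to predict well through its limited channel, which is the distinction between optimizing for prediction and possessing the truth.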
Is this similar to what you mean?
No. If it involves self modeling, it is very far from what I am talking about. Give it up. It is just not worth it.
Okay. Sorry ):
The post you’re talking about is probably “How An Algorithm Feels From Inside”.
Yes it is, thank you! I’ll add the link in.