Thanks for explaining, upvoted. But I still don’t see how this could possibly make sense.
There is no indication that this model building will one day be exhausted. In fact, there is plenty of evidence to the contrary. It has happened many times throughout human history that we thought our knowledge was nearly complete, that there was nothing more to discover except for one or two small things here and there.
But our models have become more accurate over time. We’ve become, if you will, “less wrong”. If there’s no territory, what have we been converging to?
Have you actually seen “the territory”? Of course not.
...Yes? I see it all the time.
There are plenty of unexplained observations out there. We assume that these come from some underlying “reality” which generates them. And it’s a fair assumption.
I seem to recall someone (EY?) defining “reality” as “that which generates our observations”. Which seems like a fairly natural definition to me. If it’s just maps generating our observations, I’d call the maps part of the territory. (Like a map with a picture of the map itself on the territory. Except, in your world, I guess, there’s no territory to chart so the map is a map of itself.) This feels like arguing about definitions.
I see how this might sorta make sense if we postulate that the Simulator Gods are trying really hard to fuck with us. Though still, in that case, I think the simulating world can be called a territory.
But our models have become more accurate over time.
Indeed they have. We can predict the outcome of future experiments better and better.
We’ve become, if you will, “less wrong”.
Yep.
If there’s no territory, what have we been converging to?
Why do you think we have been converging to something? Every new model generates more questions than it answers. Sure, we now know why emitted light is quantized, but we have no idea how to deal, for example, with the predicted infinite vacuum energy.
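(For the record, the divergence alluded to here is the standard one: summing the zero-point energy of every mode of a quantum field blows up at high frequencies. Schematically, for a single massless field:

```latex
E_0 \;=\; \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_{\mathbf{k}}
\;\longrightarrow\;
\frac{V}{(2\pi)^3}\int \mathrm{d}^3k \;\tfrac{1}{2}\hbar c\,|\mathbf{k}|
\;=\; \infty,
```

since the integrand grows without bound as $|\mathbf{k}| \to \infty$ and there is no agreed-upon cutoff.)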
...Yes? I see it all the time.
No, you really don’t. What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.
I seem to recall someone (EY?) defining “reality” as “that which generates our observations”. Which seems like a fairly natural definition to me.
It is not a definition, it’s a hypothesis. At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.
If it’s just maps generating our observations, I’d call the maps part of the territory.
First, I made no claims that maps generate anything. Maps are what we use to make sense of observations. Second, if you define the territory the usual way, as “reality”, then of course maps are part of the territory; everything is.
in your world, I guess, there’s no territory to chart so the map is a map of itself.)
Not quite. You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs. This does not have to end.
This feels like arguing about definitions.
I realize that is how you feel. The difference is that the assumption of a territory implies that we have a chance to learn everything there is to learn some day, to construct the absolutely accurate map of the territory (possibly at the price of duplicating the territory and calling it a map). I am not convinced that this is a good assumption. Quite the opposite: our experience shows that it is a bad one; it has been falsified time and again. And bad models should be discarded, no matter how comforting they may be.
No, you really don’t. What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.
You could argue that sensing is part of the territory while anything that is sensed is part of the map, I think.
You could, but you should be very careful, since most of sensing is multiple levels of maps. Suppose you see a cat. So, presumably the cat is part of the territory, right? Well, let’s see:
What you perceive as a cat is constructed in your brain from genetics, postnatal development, education, previous experiences and nerve impulses reaching your visual cortex. There are multiple levels of processing: light entering through your eye, being focused, absorbed by light-sensitive cells, going through 3 or 4 levels of other cells before triggering spikes in the afferent fibers reaching deep into your visual cortex. The work done inside it to trigger the “this is a cat” subroutine in a totally different part of the brain is much, much more complex.
Any of these levels can be disrupted, so that when you see a cat, others don’t agree (maybe someone drew a “realistic” picture to fool you, or maybe your brain constructed a cat image where that of a different but unfamiliar animal (say, a raccoon) would be more accurate). Multiple observations are required to validate that what you perceive as a cat behaves the way your internal model of the cat predicts.
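That validation step can be read as ordinary Bayesian updating: each observation that matches the cat-model nudges your credence up. A toy sketch, with all probabilities invented purely for illustration:

```python
# Toy sketch: validating a percept by accumulating observations.
# All numbers are invented for illustration, not measured quantities.

def bayes_update(prior, p_obs_given_cat, p_obs_given_not_cat):
    """Posterior probability of 'cat' after one confirming observation."""
    num = prior * p_obs_given_cat
    return num / (num + (1 - prior) * p_obs_given_not_cat)

p = 0.5  # initial credence that the percept is an actual cat
for _ in range(3):
    # each observation: 80% likely if it is a cat, 30% likely otherwise
    p = bayes_update(p, 0.8, 0.3)

print(round(p, 3))  # credence after three cat-consistent observations
```

Three consistent observations push the credence above 0.9, which is the sense in which “multiple observations are required to validate” the percept.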
Even the light rays which eventually resulted in you being aware of the cat are simplified maps of a propagating excitation of the EM field interacting with atoms in what could reasonably be modeled as a cat’s fur. Unless it is better modeled as lines on paper.
This stack of models currently ends somewhere in the Standard Model of particle physics. Not because it’s the “ultimate reality”, but because we don’t have a good handle on how to continue building the stack.
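The stack described above can be caricatured as function composition, where each stage is a lossy re-description of the previous one and the observer only ever gets the output of the last stage. Every stage, name, and number below is an illustrative invention, not a claim about real neurophysiology:

```python
# Toy sketch of perception as a stack of maps. Each stage discards
# information about the previous one; none of them is "the territory".

def photons_to_retina(scene: str) -> int:
    # light focused and absorbed by photoreceptors: scene -> activation level
    return sum(ord(c) for c in scene) % 1000

def retina_to_cortex(activation: int) -> int:
    # several layers of retinal cells -> spike count in afferent fibers
    return activation // 10

def cortex_to_label(spikes: int) -> str:
    # the "this is a cat" subroutine: pattern-matching on processed spikes
    return "cat" if spikes % 2 == 0 else "not a cat"

def perceive(scene: str) -> str:
    # the observer only ever sees the output of the final stage,
    # never the "scene" argument itself
    return cortex_to_label(retina_to_cortex(photons_to_retina(scene)))

print(perceive("furry shape on the mat"))
```

Disrupting any one stage (swap `cortex_to_label` for a different classifier, say) changes what is “seen” without anything upstream changing, which is the point of the cat example.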
You could argue that all the things I have described are “real” and part of the territory. Absolutely you can. But then why stop there? If light rays are real and not just abstractions, then so are images of cats in your brain.
Thus any model is as “real” as any other, though one can argue that accurate models (better at anticipating future experiences) are more real than inaccurate ones. The heliocentric model is “more real” than the geocentric one, in the sense that it has a larger domain of validity. But then you are also forced to admit that quarks are more real than mesons and cats are less real than generic felines.
By “sensing” I was referring to the end result of all those nerves firing and processes processing, when awareness meets the result of all that stuff. I suppose I could have more accurately stated that awareness is a part of the territory, as awareness arises directly from some part of your circuitry. Everything about the cat in your example may happen in the brain or not, so you can’t really be sure that there’s an underlying reality behind it; but awareness itself is a direct consequence of the configuration of the processing equipment.
It’s real, but the thing that’s being experienced isn’t the real thing. The cat quale is a real process, but it’s not a real cat (probably). The part of processing the quale that is the awareness (not the object of awareness) is itself the real awareness and holds the distinction of actually being in the territory rather than in the map.
Why do you think we have been converging to something?
What is the point of science, otherwise? Better prediction of observations? But you can’t explain what an observation is.
If the territory theory is able to explain the purpose of science, and the no-territory theory is not, the territory theory is better.
What you think you see is a result of multiple layers of processing. What you get is observations, not the unfettered access to this territory thing.
...according to a map which has “inputs from the territory” marked on it.
I seem to recall someone (EY?) defining “reality” as “that which generates our observations”. Which seems like a fairly natural definition to me.
It is not a definition, it’s a hypothesis.
At least in the way Eliezer uses it. I make no assumptions about the source of observations, if any.
Well, you need to. If the territory theory can explain the very existence of observations, and the no-territory theory cannot, the territory theory is better.
You construct progressively more accurate models to explain past and predict future inputs. In the process, you gain access to new and more elaborate inputs.
Inputs from where?
The difference is that the assumption of the territory implies that we have a chance to learn everything there is to learn some day, construct the absolutely accurate map of the territory
No, it doesn’t. “The territory exists, but is not perfectly mappable” is a coherent assumption, particularly in view of the definition of the territory as the source of observations.
So what is a map and not the territory in your example? The cat identification process? The “I see a cat” quale? I am confused.
Yes, the cat quale is a map.
I’d argue that it is as real as any other brain process.