My utility function is over ‘what other people see happen to themselves’ so it contains a reference to the same epistemic question.
Doesn’t this depend on how your utility function defines “people”? If it’s defined via pattern continuity, you get one answer to this question, and if it’s defined via physical continuity (or perhaps a combination of physical and pattern continuity), you get another. (Much like how the “a tree falls in a forest” question depends on how “sound” is defined.)
Note that if “people” were ontologically primitive, then there would be a single objective answer. People are not ontologically primitive in reality, but are in our usual models. So it seems reasonable that we might intuitively think there should be a single objective answer to “what will someone see when they step into a Star Trek transporter” when there really isn’t.
If you don’t mind a rather primitive question:
If I were to ask you ‘which things are ontologically primitive in reality?’, what kinds of things would you use to justify your answer? To be clear, I’m not just asking about what your answer is, but what kind of evidence you think is relevant to determining an answer. What, in other words, would things have to look like for you to conclude that human beings were ontologically primitive in reality (and not just in our usual models)?
I ask, among other reasons, because although I’m confident that phenomena relevant to human beings, like behaviors, thoughts, biological processes, etc. are reducible to more fundamental physical systems, it’s not obvious to me that this straightforwardly means that those more fundamental physical systems are more ontologically primitive than human beings. So far as I understand things, the physical, chemical, and biological theories we use to explain phenomena relevant to human beings don’t purport to make claims about ontological primitiveness.
This topic probably deserves more thought than I’ve put into it, but it seems to me that you can tell what things are ontologically primitive in reality by looking at what objects the fundamental laws of physics keep track of and directly operate upon. For example in Newtonian physics these would be individual particles, and in Quantum Mechanics it would just be the wavefunction. (Of course at this point we don’t know what the fundamental laws of physics actually are so we can’t say what things are ontologically primitive yet, but it seems pretty clear that it can’t be human beings.)
it’s not obvious to me that this straightforwardly means that those more fundamental physical systems are more ontologically primitive than human beings
Ontological primitiveness seems like a binary property. Either something is kept track of and operated upon directly by the fundamental laws of physics, or it isn’t. I can’t see what sense it would make to say one thing is “more primitive” than another.
(It may be that there is more than one concept of “ontological primitiveness” that is useful. I think my definition/explanation makes sense in combination with my recent posts and comments, but you may have another one in mind?)
it seems to me that you can tell what things are ontologically primitive in reality by looking at what objects the fundamental laws of physics keep track of and directly operate upon.
Suppose some people constructed an AI that is programmed to experience the world in terms of ontologically primitive things from the get-go, and to construct the rest of its (non-primitive) ontology from there. Do you think an AI, experiencing only ontologically primitive things and their behaviors according to fundamental physical laws, could discover the existence of, say, living things?
Do you think an AI, experiencing only ontologically primitive things and their behaviors according to fundamental physical laws, could discover the existence of, say, living things?
What do you mean by “discover the existence of living things”? It seems plausible that such an AI may create some auxiliary (or “higher-level”) objects in its world model to help it make predictions because it doesn’t have enough computing power to just apply the fundamental laws of physics, and in the course of doing this may also label some such objects with a label that’s roughly equivalent to “living”. If this counts, I think the answer is yes, possibly, depending on the design of the AI.
It seems plausible that such an AI may create some auxiliary (or “higher-level”) objects in its world model to help it make predictions because it doesn’t have enough computing power to just apply the fundamental laws of physics
Assume it has infinite computing power. The AI thing is just a way of asking this question: if something knew all the facts about the things physical laws keep track of and directly operate on, and it were logically omniscient, would it know, for example, that this thing here is a tulip, that it’s alive, etc.?
If not (I gather from your post that the answer is ‘no’) then it seems we should conclude one of two things:
1) Tulips are not in the territory, or,
2) Tulips are in the territory, but (for some reason) some facts about tulips are not derivable from facts about ontologically primitive things.
Which do you think is right? Or have I left out one or more possibilities?
(EDIT: I changed the example from ‘me’ to ‘tulips’ to avoid the impression that this question has anything to do with consciousness)
I’m also not sure what you mean by “Are tulips in the territory?” or why you are asking me that. There seem to be collections or structures of ontologically primitive objects in the territory that correspond to the objects in our internal models that we label as “tulips”. From this, can you derive for yourself whether “tulips are in the territory”?
I’m also not sure what you mean by “Are tulips in the territory?” or why you are asking me that.
I’m trying to get some grip on the relation between ontologically primitive things and ontologically non-primitive things. A second question lurking about here is one raised by some of EY’s recent talk about ontology as he would want it programmed into an AI.
We didn’t all start by understanding ontological primitives and discover that human beings exist. We started with human beings and discovered that facts about human beings are reducible to facts about ontological primitives (discovering what those primitives were along the way). But does the fact that we went from human beings down to ontological primitives mean that something that started from ontological primitives would discover human beings?
But if the question isn’t clear, or feels unmotivating, then I withdraw it, and I appreciate your answers thus far.
It’s possible to detect tulips, but there are many alternative things it’s possible to detect, so there needs to be some motivation for detecting tulips in particular. For natural concepts, that motivation is efficient world-modeling (which your AI, by assumption, doesn’t need to care about), and for morality-related concepts, it’s value judgments (different AIs will require different concepts here, but they may agree on the utility of keeping track of the “fundamental” physical facts).
(On a different note, “Are tulips in the territory?” sounds like a question about definitions. Some more specific relevant query may be similar, but I’m not sure how to find one.)
So you’re saying that my AI (with infinite computational power) would never discover the existence of tulips?
(On a different note, “Are tulips in the territory?” sounds like a question about definitions.
I don’t intend it to be. I think tulips exist, unlike shmulips (similar to tulips, except they have golf balls instead of flowers), which don’t. I don’t think I have a firm grip on the map-territory distinction, but I was trying to use it in the way Wei was using it.
Anyway, here’s the basis of my question: tulips do exist. They’re real, mind-independent things, and they are part of the furniture of the universe. Any god or AI who came into our universe would have an incomplete understanding of this universe if they failed to include tulips in their story.
That said, is the complete story of our universe derivable from a complete story of the ontological primitives (plus whatever logic you wish to avail yourself of)? I’m not totally sure that’s a well-formed question, mind you.
Anyway, here’s the basis of my question: tulips do exist. They’re real, mind-independent things, and they are part of the furniture of the universe. Any god or AI who came into our universe would have an incomplete understanding of this universe if they failed to include tulips in their story.
Tulips objectively exist as a fuzzy cluster in configuration space, and if an AI were to list all facts about the world, this would be one of them. But unlike us, an AI or god doesn’t necessarily have a reason to notice this clustering or make any use of it. It’s kind of like 22581959 being prime is an objective fact that you and I can discover, but don’t necessarily have any reason to notice or make use of.
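To make the analogy concrete: primality is the sort of fact an agent *can* verify mechanically but has no obligation to ever compute. A minimal trial-division sketch in Python (an illustration added here, not part of the original comment):

```python
def is_prime(n: int) -> bool:
    """Primality by trial division up to sqrt(n); fine for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Whether this particular number is prime is an objective fact either
# way; the point is that nothing obliges an agent to check it.
print(is_prime(22581959))
```

The same goes for the tulip clustering: it is there to be found by anyone who runs the computation, but running the computation is optional.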
BTW, Eliezer argued, and I agree, that this kind of objective clustering can’t be used directly to define morally relevant concepts like “people”.
Tulips objectively exist as a fuzzy cluster in configuration space, and if an AI were to list all facts about the world, this would be one of them.
I was being a bit ambiguous: I meant to talk about concrete individual tulips, not the species. Even given the framework in “The Cluster Structure...”, each actual tulip is a point in thingspace, not a cluster.
If an AI were to list all facts about the world, it would list that the wavefunction of the universe can be approximately factored into X, where X corresponds to what we would call an individual tulip. (Note that an individual tulip is also actually a cluster in configuration space, because it’s a blob of amplitude-factor, not a single point in configuration space. Of course this cluster is much smaller than the cluster of all tulips.)
Okay, that answers my question, thanks.
This topic probably deserves more thought than I’ve put into it, but it seems to me that you can tell what things are ontologically primitive in reality by looking at what objects the fundamental laws of physics keep track of and directly operate upon. For example in Newtonian physics these would be individual particles, and in Quantum Mechanics it would just be the wavefunction.
The problem is that different equivalent formulations will make different things ontologically primitive.
(Of course at this point we don’t know what the fundamental laws of physics actually are so we can’t say what things are ontologically primitive yet, but it seems pretty clear that it can’t be human beings.)
How do you know there is a fundamental level, as opposed to something like a void cathedral?
The problem is that different equivalent formulations will make different things ontologically primitive.
Perhaps in this case we could say “the ontology of the universe is one or the other but I can’t tell which, so I’ll just have to be uncertain”. Do you see any problems with this, or have any better ideas?
How do you know there is a fundamental level, as opposed to something like a void cathedral?
Can you give an example of a mathematical formulation of a void cathedral, just to show that such a thing is possible?
Can you give an example of a mathematical formulation of a void cathedral, just to show that such a thing is possible?
One description is something like the following: take the space of computable universes that agree with our observations so far. Rather than putting an Occam prior over it, put an ultrafilter on it. One can pick the ultrafilter so that the set of universes where any particular level is fundamental has measure zero.
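One way the construction might be spelled out (an attempted formalization added here, not the commenter’s own; it glosses over how “levels” are individuated and leans on the axiom of choice):

```latex
Let $H$ be the set of computable universes consistent with observations so
far, and for each $n$ let $F_n \subseteq H$ be the set of universes in
which level $n$ is fundamental. Suppose no finite union
$F_{n_1} \cup \cdots \cup F_{n_k}$ covers $H$ (there are always universes
with deeper structure). Then the complements
$H \setminus (F_{n_1} \cup \cdots \cup F_{n_k})$ have the finite
intersection property, so by Zorn's lemma they extend to an ultrafilter
$\mathcal{U}$ on $H$. The induced measure
\[
  \mu(S) =
  \begin{cases}
    1 & \text{if } S \in \mathcal{U},\\
    0 & \text{otherwise}
  \end{cases}
\]
is finitely (but not countably) additive and assigns $\mu(F_n) = 0$ for
every $n$: a ``prior'' under which no particular level is fundamental.
```

The failure of countable additivity is what lets every individual level get measure zero even though some level obtains in every universe.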
I’m afraid I lack the background knowledge and/or math skills to figure out your idea from this short description. I can’t find any papers after doing a search either, so I guess this is your original idea? If so, why not write it up somewhere?
Sorta related. (Someone write “Metametametaphysics” plz.)