Reality is that which actually exists, regardless of how any agents within it might perceive it, choose to model it, or describe it to each other.
If reality happens to be infinitely complex, then all finite models of it must necessarily be incomplete. That might be annoying, but why would you consider that to mean “reality doesn’t really exist”?
Well, to be clear, I didn’t intend to say that reality doesn’t really exist. There’s definitely something that’s real. I was just wondering whether there is some base-level reality that’s ontologically different from other things, like the abstractions we use.
Now, what I’m saying feels pretty philosophical, and perhaps the question isn’t even meaningful.
Still, I’m wondering about the agents making an infinite sequence of decompositions, each with greater predictive accuracy than the last. What would the base-level reality be in that case? Any of the decompositions the agents create would be wrong, even if some are infinitely complex.
Also, I’ve realized I’m confused about the meaning of “what really exists”, but I think it would be hard to clarify and reason about this. Perhaps I’m overthinking things, but I am still rather confused.
I’m imagining some other agent or AI that doesn’t distinguish between base-level reality and abstractions, and I’m not sure how I could argue with them. I mean, in principle, I think you could come up with reasoning systems that distinguish between base-level reality and abstractions, as well as reasoning systems that don’t, that both make equally good empirical predictions. If there was some alien that didn’t make the distinction in their epistemology or ontology, I’m not sure how I could say, and support saying, “You’re wrong”.
I mean, I predict you could make arbitrarily powerful agents with high predictive accuracy and high optimization-pressure that don’t distinguish between base-level reality and abstractions, and could do the same with agents that do make such a distinction. If both perform fine, then I’m not sure how I could argue that one’s “wrong”.
Is the existence of base-level reality subjective? Does this question even make sense?
We are probably just using different terminology and talking past each other. You agree that there is “something that’s real”. From my point of view, the term “base-level reality” refers to exactly that which is real, and no more. The abstractions we use do not necessarily correspond with base-level reality in any way at all. In particular, if we are simulated entities, dreaming, experiencing full-sensory hallucinations, disembodied consciousnesses, or brains in jars with synthetic sensory input, then we may not have any way to learn anything meaningful about base-level reality. But that does not preclude its existence, because it is still certain that something exists.
Still, I’m wondering about the agents making an infinite sequence of decompositions, each with greater predictive accuracy than the last. What would the base-level reality be in that case? Any of the decompositions the agents create would be wrong, even if some are infinitely complex.
None of the models are any sort of reality at all. At best, they are predictors of some sort of sensory reality (which may be base-level reality, or might not). It is possible that all of the models are actually completely wrong, as the agents have all been living in a simulation or are actually insane with false memories of correct predictions, etc.
Is the existence of base-level reality subjective? Does this question even make sense?
The question makes sense, but the answer is the most emphatic NO that it is possible to give. Even in some hypothetical solipsistic universe in which only one bodiless mind exists and anything else is just internal experiences of that mind, that mind objectively exists.
It is conceivable to suppose a universe in which everything is a simulation in some lower-level universe resulting in an ordering with no least element to qualify as base-level reality, but this is still an objective fact about such a universe.
We do seem to have been talking past each other to some extent. Base-level reality, of course, exists if you define it to be “what really exists”.
However, I’m a little unsure whether that’s how people use the word. I mean, if someone asked me if Santa really exists, I’d say “No”, but if they asked if chairs really existed, I’d say “Yes”. That doesn’t seem wrong to me, but I thought our base-level reality only contained subatomic particles, not chairs. Does this mean the statement “Chairs really exist” is actually wrong? Or am I misinterpreting?
I’m also wondering how people justify thinking that models talking about things like chairs, trees, and anything other than subatomic particles don’t “really exist”. Is this even true?
I’m just imagining talking with some aliens who make no distinction between base-level reality and what we would consider mere abstractions. For example, suppose the aliens knew about chairs; when they discovered quantum theory, they would say, “Oh! There are these atom things, and when they’re arranged in the right way, they cause chairs to exist!” But suppose they never distinguished between the subatomic particles being real and the chairs being real: they just saw subatomic particles and chairs as both fully real, with the correct arrangement of the former causing the latter to exist.
How could I argue with such aliens? They’re already making correct predictions, so I don’t see any way to show them evidence that disproves them. Is there some abstract reason to think models about things like chairs don’t “really exist”?
The main places I’ve seen the term “base-level reality” used are in discussions about the simulation hypothesis. “Base-level” being the actually real reality where sensory information tells you about interactions in the actual real world, as opposed to simulations where the sensory information is fabricated and almost completely independent of the rules that base-level reality follows. The abstraction is that the base-level reality serves as a foundation on which (potentially) a whole “tower” of simulations-within-simulations-within-simulations could be erected.
That semantic excursion aside, you don’t need to go to aliens to find beings that hold subatomic particles as being ontologically equivalent with chairs. Plenty of people hold that they’re both abstractions that help us deal with the world we live in, just at different length scales (and I’m one of them).
Well, even in a simulation, sensory information still tells you about interactions in the actual real world. I mean, based on your experiences in the simulation, you can potentially approximately infer the algorithm and computational state of the “base-level” computer you’re running in, and I believe those count as interactions in the “actual real world”. And if your simulation is sufficiently big and takes up a sufficiently large amount of the world, you could potentially learn quite a lot about the underlying “real world” just by examining your simulation.
That said, I still can’t say I really understand the concept of “base-level reality”. I know you said it’s what informs you about the “actual real world”, but this feels similarly confusing to me as defining base-level reality as “what really exists”. I know that reasoning and talking about things so abstract is hard and can easily lead to nonsense, but I’m still interested.
I’m curious about what even the purpose is of having an ontologically fundamental distinction between base-level reality and abstractions, and whether it’s worth having. When asking, “Should I treat base-level reality and abstractions as fundamentally distinct?”, I think a good way to approximate this is by asking “Would I want an AI to reason as if its abstractions and base-level reality were fundamentally distinct?”
And I’m not completely sure they should. AIs, to reason practically, need to use “abstractions” in at least some of their models. If you want, you could have a special “This is just an abstraction” or “this is base-level reality” tag on each of your models, but I’m not sure what the benefit of this would be or what you would use it for.
Even without such a distinction, an AI would have both models that would be normally considered abstractions, as well as those of what you would think of as base-level reality, and would select which models to use based on their computational efficiency and the extent to which they are relevant and informative to the topic at hand. That sounds like a reasonable thing to do, and I’m not clear how ascribing fundamental difference to “abstractions” and “base-level” reality would do better than this.
If the AI talks with humans that use the phrase “base-level reality”, then it could potentially be useful for the AI to come up with an is-base-level-reality predicate in its world model, in order to answer questions like, “When will this person call something base-level reality?” But such a predicate wouldn’t be treated as fundamentally different from any other predicate, like “is a chair”.
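To make this concrete, here’s a toy sketch of what I mean (every name and feature in it is made up for illustration): the is-base-level-reality predicate sits in the world model on exactly the same footing as any ordinary predicate like is-a-chair, with nothing structurally privileged about it.

```python
# Toy sketch (all names and features hypothetical): two learned predicates
# in a world model, neither of which is ontologically privileged.

def is_a_chair(thing):
    # Ordinary learned predicate: recognizes chair-like feature bundles.
    return thing.get("has_seat", False) and thing.get("supports_sitting", False)

def is_base_level_reality(thing):
    # Learned from conversation data: predicts when a *human* would call
    # something "base-level reality" (e.g. fundamental physics, not artifacts).
    return thing.get("appears_in_fundamental_physics", False)

world_model = [
    {"name": "chair", "has_seat": True, "supports_sitting": True,
     "appears_in_fundamental_physics": False},
    {"name": "electron", "appears_in_fundamental_physics": True},
]

# Both predicates are queried the same way; the AI just uses whichever
# one is relevant to the question at hand.
for thing in world_model:
    print(thing["name"], is_a_chair(thing), is_base_level_reality(thing))
```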
When asking, “Should I treat base-level reality and abstractions as fundamentally distinct?”, I think a good way to approximate this is by asking “Would I want an AI to reason as if its abstractions and base-level reality were fundamentally distinct?”
Do you want an AI to be able to conceive of anything along the lines of “how correct is my model”, to distinguish hypothetical from actual, or illusion from substance?
If you do, then you want something that fits in the conceptual space pointed at by “base-level reality”, even if it doesn’t use that phrase or even have the capability to express it.
I suppose it might be possible to have a functioning AI that is capable of reasoning and forming models without being able to make any such distinctions, but I can’t see a way to do it that won’t be fundamentally crippled compared with human capability.
I’m interested in your thoughts on how the AI would be crippled.
I don’t think it would be crippled in terms of empirical predictive accuracy, at least. The AI could still come up with all the low-level models like quantum physics, as well as keep the abstract ones like “this is what a chair is”, and then just use whichever it needs to get the highest possible predictive accuracy in a given circumstance.
If the AI is built to make and run quantum physics experiments, then in order to have high predictive accuracy it would need to learn and use an accurate model of quantum physics. But I don’t see why you would need a distinction between base-level reality and abstractions to do that.
The AI could still learn a sense of “illusion”. If the AI is around psychotic people who experience illusions a lot, then I don’t see what’s stopping the AI from forming a model saying, “Some people experience these things called ‘illusions’, and they make them take wrong actions or make wrong predictions, as specified in <insert model of how people react to illusions>”.
And I don’t see why the AI wouldn’t be able to consider the possibility that it also experiences illusions. For example, suppose the AI is in the desert and keeps seeing what looks like an oasis. But when the AI gets closer, it sees only sand. To have higher predictive accuracy in this situation, the AI could learn a (non-ontologically fundamental) “is-an-illusion” predicate.
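As a toy illustration of the oasis case (all the numbers and feature names here are invented), an “is-an-illusion” predicate can be learned purely as a refinement that improves predictive accuracy, with no special ontological status:

```python
# Toy sketch (all thresholds and features hypothetical): a learned,
# non-ontologically-fundamental "is-an-illusion" predicate that improves
# the AI's predictions about finding water in a desert.

def predicts_water(percept):
    # Naive model: an oasis-looking percept means water at that location.
    return percept["looks_like_oasis"]

def is_an_illusion(percept):
    # Learned refinement: distant shimmering oasis percepts over hot ground
    # are mirages and do not predict water.
    return (percept["looks_like_oasis"]
            and percept["distance_m"] > 1000
            and percept["ground_temp_c"] > 40)

def predicts_water_v2(percept):
    # Higher predictive accuracy: the naive model, minus the illusion cases.
    return predicts_water(percept) and not is_an_illusion(percept)

mirage = {"looks_like_oasis": True, "distance_m": 3000, "ground_temp_c": 50}
real_oasis = {"looks_like_oasis": True, "distance_m": 50, "ground_temp_c": 50}
```

The point is just that `is_an_illusion` is learned and queried like any other predicate; nothing about it requires a separate ontological category.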
Would the crippling be in terms of scoring highly on its utility function, rather than just predicting percepts? I don’t really see how this would be a problem. I mean, suppose you want an AI to make chairs. Then even if the AI lacked a notion of base-level reality, it could still learn accurate models of how chairs work and how they are manufactured. The AI could then have its utility function defined in terms of its notion of chairs, so that it makes chairs.
Could you give any specific example in which an AI using no ontologically fundamental notion of base-level reality would either make the wrong prediction or make the wrong action, in a way that would be avoided by using such a notion?
This feels like a bait-and-switch since you’re now talking about this in terms of an “ontologically fundamental” qualifier where previously you were only talking about “ontologically different”.
To you, does the phrase “ontologically fundamental” mean exactly the same thing as “ontologically different”? It certainly doesn’t to me!
It was a mistake for me to conflate “ontologically fundamental” and “ontologically different”.
Still, I had in mind that they were ontologically different in some fundamental way. It was my mistake to merely use the word “different”. I had imagined that to make an AI that’s reasonable, it would actually make sense to hard-code some notion of base-level reality as well as abstractions, and to treat them differently. For example, you could have the AI have a single prior over “base-level reality”, then just come up with whatever abstractions work well for predictively approximating the base-level reality. Instead, it seems like the AI could just learn the concept of “base-level reality” like it would learn any other concept. Is this correct?
Also, in the examples I gave, I think the AI wouldn’t actually have needed a notion of base-level reality. The concept of a mirage is different from the concept of non-base-level reality. So is the concept of a mental illusion. Understanding both of those is different from understanding the concept of base-level reality.
If humans use the phrase “base-level reality”, I still don’t think it would be strictly necessary for an AI to have the concept. The AI could just know rules of the form, “If you ask a human if x is base-level reality, they will say ‘yes’ in the following situations...”, and then describe the situations.
So it doesn’t seem to me like the actual concept of “base-level reality” is essential, though it might be helpful. Of course, I might be missing or misunderstanding something. Corrections are appreciated.
The concept of a mirage is different from the concept of non-base-level reality.
Different in a narrow sense yes. “Refraction through heated air that can mislead a viewer into thinking it is reflection from water” is indeed different from “lifetime sensory perceptions that mislead about the true nature and behaviour of reality”. However, my opinion is that any intelligence that can conceive of the first without being able to conceive of the second is crippled by comparison with the range of human thought.
...lifetime sensory perceptions that mislead about the true nature and behaviour of reality
I don’t think you would actually need a concept of base-level reality to conceive of this.
First off, let me say that it seems pretty hard to come up with lifetime sensory percepts that would mislead about reality. Even if the AI was in a simulation, the physical implementation is part of reality. And the AI could learn about it. And from this, the AI could also potentially learn about the world outside the simulation. AIs commonly try to come up with the simplest (in terms of description length), most predictively accurate model of their percepts that they can. And I bet the simplest models would involve a world outside the simulation with specified physics, which would result in the simulations being built.
That said, lifetime sensory percepts can still mislead. For example, the simplest, highest-prior models that explain the AI’s percepts might say it’s in a simulation run by aliens. However, suppose the AI’s simulation actually just poofed into existence without a cause, and the rest of the world is filled with giant hats and no aliens. An AI, even without a distinction between base-level reality and abstractions, would still be able to come up with this model. If this isn’t a model involving percepts misleading you about the nature of reality, I’m not sure what is. So it seems to me that such AIs would be able to conceive of the idea of percepts misleading about reality. And the AIs would assign low probability to being in the all-hat world, just as they should.
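To sketch the kind of reasoning I have in mind (with completely made-up model names and numbers): under a description-length prior, the all-hat world can sit in the hypothesis space and still receive negligible posterior mass, simply because it takes many more bits to describe.

```python
# Toy sketch of a simplicity prior over world models. All description
# lengths and likelihoods below are invented purely for illustration.

models = {
    # name: (description_length_bits, likelihood of the observed percepts)
    "aliens-run-a-simulation": (100, 0.9),
    "world-of-giant-hats":     (500, 0.9),  # fits the percepts equally well
}

def posterior(models):
    # prior(m) is proportional to 2^-description_length (a Solomonoff-style
    # simplicity prior); the posterior also weighs predictive fit.
    weights = {m: (2.0 ** -dl) * lik for m, (dl, lik) in models.items()}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

post = posterior(models)
# The hat-world remains *conceivable* (it's in the hypothesis space), but
# its posterior is crushed by its description length alone.
```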
Even if the AI was in a simulation, the physical implementation is part of reality. And the AI could learn about it.
The only means would be errors in the simulation.
Any underlying reality that supports Turing machines or any of the many equivalents can simulate every computable process. Even in the case of computers with bounded resources, there are corresponding theorems that show that the process being computed does not depend upon the underlying computing model.
So the only thing that can be discerned is that the underlying reality supports computation, and says essentially nothing about the form that it takes.
An AI, even without a distinction between base-level reality and abstractions, [...] would be able to conceive of the idea of percepts misleading about reality
How can it conceive of the idea of percepts misleading about reality if it literally can’t conceive of any distinction between models (which are a special case of abstractions) and reality?
Well, the only absolute guarantee the AI can make is that the underlying reality supports computation.
But it can still probabilistically infer other things about it. Specifically, the AI knows not only that the underlying reality supports computation, but also that there was some underlying process that actually created the simulation it’s in. Even though Conway’s Game of Life can allow for arbitrary computation, many possible configurations of the world state would result in no AI simulations being made. The configurations that would result in AI simulations being made would likely involve some sort of intelligent civilization creating the simulations. So the AI could potentially predict the existence of this civilization and infer some things about it.
Regardless, even if the AI can’t infer anything else about outside reality, I don’t see how this is a fault of not having a notion of base-level reality. I mean, if you’re correct, then it’s not clear to me how an AI with a notion of base-level reality would do inferentially better.
How can it conceive of the idea of percepts misleading about reality if it literally can’t conceive of any distinction between models (which are a special case of abstractions) and reality?
Well, as I said before, the AI could still consider the possibility that the world is composed entirely of hats (minus the AI simulation). The AI could also have a model of Bayesian inference and infer that the Bayesian probability that would be rational to assign to “the world is all hats” is low and its evidence makes it even lower. So, by combining these two models, the AI can come up with a model that says, “The world is all hats, even though everything I’ve seen, according to probability theory, makes it seem like this isn’t the case”. That sounds like a model about the idea of percepts misleading about reality.
I know we’ve been going back and forth a lot, but I think these are pretty interesting things to talk about, so I thank you for the discussion.
It might help if you try to describe a specific situation in which the AI makes the wrong prediction or takes the wrong action for its goals. This could help me better understand what you’re thinking about.
Well, as I said before, the AI could still consider the possibility that the world is composed entirely of hats (minus the AI simulation).
At this point I’m not sure there’s much point in discussing further. You’re using words in ways that seem self-contradictory to me.
You said “the AI could still consider the possibility that the world is composed of [...]”. Considering a possibility is creating a model. Models can be constructed about all sorts of things: mathematical statements, future sensory inputs, hypothetical AIs in simulated worlds, and so on. In this case, the AI’s model is about “the world”, that is to say, reality.
So it is using a concept of model, and a concept of reality. It is only considering the model as a possibility, so it knows that not everything true in the model is automatically true in reality and vice versa. Therefore it is distinguishing between them. But you posited that it can’t do that.
To me, this is a blatant contradiction. My model of you is that you are unlikely to post blatant contradictions, so I am left with the likelihood that what you mean by your statements is wholly unlike the meaning I assign to the same statements. This does not bode well for effective communication.
Yeah, it might be best to wrap up the discussion. It seems we aren’t really understanding what the other means.
So it is using a concept of model, and a concept of reality. It is only considering the model as a possibility, so it knows that not everything true in the model is automatically true in reality and vice versa. Therefore it is distinguishing between them. But you posited that it can’t do that.
Well, I can’t say I’m really following you there. The AI would still have a notion of reality. It just would consider abstractions like chairs and tables to be part of reality.
There is one thing I want to say, though. We’ve been discussing the question of whether a notion of base-level reality is necessary to avoid severe limitations in reasoning ability. And to see why I think it’s not, just consider regular humans. They often don’t have a distinction between base-level reality and abstractions. And yet, they can still reason about the possibility of life-long illusions as well as function well to accomplish their goals. And if you taught someone the concept of “base-level reality”, I’m not sure it would help them much.
It sounds like you’re using very different expectations for those questions, as opposed to the very rigorous interrogation of base reality. ‘Does Santa exist?’ and ‘does that chair exist?’ are questions which (implicitly, at least) are part of a system of questions like ‘what happens if I set trip mines in my chimney tonight?’ and ‘if I try to sit down, will I fall on my ass?’ which have consequences in terms of sensory input and feedback. You can respond ‘yes’ to the former, if you’re trying to preserve a child’s belief in Santa (although I contend that’s a lie) and you can truthfully answer ‘no’ to the latter if you want to talk about an investigation of base reality.
Of course, if you answer ‘no’ to ‘does that chair exist?’ your interlocutor will give you a contemptuous look, because that wasn’t the question they were asking, and you knew that, and you chose to answer a different question anyway.
I choose to think of this as different levels of resolution, or as varying bucket widths on a histogram. To the question ‘does Jupiter orbit the Sun?’ you can productively answer ‘yes’ if you’re giving an elementary school class a basic lesson on the structure of the solar system. But if you’re trying to slingshot a satellite around Ganymede, the answer is going to be no, because the Solar-Jovian barycenter is way outside the solar corona, and at the level you’re operating, that’s actually relevant.
Most people don’t use the words ‘reality’ or ‘exist’ in the way we’re using it here, not because people are idiots, but because they don’t have a coherent existential base for non-idiocy, and because it’s hard to justify the importance of those questions when you spend your whole life in sensory reality.
As to the aliens, well, if they don’t distinguish between base level reality and abstractions, they can make plenty of good sensory predictions in day-to-day life, but they may run into some issues trying to make predictions in high-energy physics. If they manage to do both well, it sounds like they’re doing a good job operating across multiple levels of resolution. I confess I don’t have a strong grasp on the subject, or on the differences between a model being real versus not being real in terms of base reality, I’m gonna wait on JBlack’s response to that.
I generally agree with the content of the articles you linked, and that there are different notions of “really exist”. The issue is, I’m still not sure what “base-level reality” means. JBlack said it was what “really exists”, but since JBlack seems to be using a notion of “what really exists” that’s different from the one people normally use, I’m not really sure what it means.
In the end, you can choose to define “what really exists” or “base-level reality” however you want, but I’m still wondering about what people normally take them to mean.
I try to avoid using the word ‘really’ for this sort of reason. Gets you into all sorts of trouble.
(a) JBlack is using a definition related to simulation theory, and I don’t know enough about this to speculate too much, but it seems to rely on a hard discontinuity between base and sensory reality.
(b) Before I realized he was using it that way, I thought the phrase meant ‘reality as expressed on the most basic level yet conceivable’ which, if it is possible to understand it, explodes the abstractions of higher orders and possibly results in their dissolving into absurdity. This is a softer transition than the above.
(c) I figure most people use ‘really exist’ to refer to material sensory reality as opposed to ideas. This chair exists, the Platonic Idea of a chair does not. The rule with this sort of assumption is ‘if I can touch it, or it can touch me, it exists’ for a suitably broad understanding of ‘touch.’
(d) I’ve heard some people claim that the only things that ‘really exist’ are those you can prove with mathematics or deduction, and mere material reality is a frivolity.
(e) I know some religious people believe heavily in the primacy of God (or whichever concept you want to insert here) and regard the material world as illusory, and that the afterlife is the ‘true’ world. You can see this idea everywhere from the Kalachakra mandala to the last chapter of the Screwtape letters.
I guess the one thing uniting all these is that, if it were possible to take a true Outside View, this is what you would see; a Platonic World of ideas, or a purely material universe, or a marble held in the palm of God, or a mass of vibrating strings (or whatever the cool kids in quantum physics are thinking these days) or a huge simulation of any of the above instantiated on any of the above.
I think most people think in terms of option c, because it fits really easily into a modern materialist worldview, but the prevalence of e shouldn’t be downplayed. I’ve probably missed some important ones.
Reality is that which actually exists, regardless of how any agents within it might perceive it, choose to model it, or describe it to each other.
If reality happens to be infinitely complex, then all finite models of it must necessarily be incomplete. That might be annoying, but why would you consider that to mean “reality doesn’t really exist”?
Well, to be clear, I didn’t intend to say that reality doesn’t really exist. There’s definitely something that’s real. I was just wondering about if there is some base-level reality that’s ontologically different from other things, like the abstractions we use.
Now, what I’m saying feels pretty philosophical, and perhaps the question isn’t even meaningful.
Still, I’m wondering about the agents making an infinite sequence of decompositions that each have increased predictive accuracy. What would the base-level reality be in that case? Any of the decompositions the agents create would be wrong, even if some are infinitely complex.
Also, I’ve realize I’m confused about the meaning of “what really exists”, but I think it would be hard to clarify and reason about this. Perhaps I’m overthinking things, but I am still rather confused.
I’m imagining some other agent or AI that doesn’t distinguish between base-level reality and abstractions, I’m not sure how I could argue with them. I mean, in principle, I think you could come up with reasoning systems that distinguish between base-level reality and abstractions, as well as reasoning systems that don’t, that both make equally good empirical predictions. If there was some alien that didn’t make the distinction in their epistemology or ontology, I’m not sure how I could say, and support saying, “You’re wrong”.
I mean, I predict you could both make arbitrarily powerful agents with high predictive accuracy and high optimization-pressure that don’t distinguish between base-level reality and abstractions, and could do the same with agents that do make such a distinction. If both perform fine, them I’m not sure how I could argue that one’s “wrong”.
Is the existence of base-level reality subjective? Does this question even make sense?
We are probably just using different terminology and talking past each other. You agree that there is “something that’s real”. From my point of view, the term “base-level reality” refers to exactly that which is real, and no more. The abstractions we use do not necessarily correspond with base-level reality in any way at all. In particular if we are any of simulated entities, dreaming, full-sensory hallucinating, disembodied consciousness, or brains in jars with synthetic sensory input then we may not have any way to learn anything meaningful about base-level reality, but that does not preclude its existence because it is still certain that something exists.
None of the models are any sort of reality at all. At best, they are predictors of some sort of sensory reality (which may be base-level reality, or might not). It is possible that all of the models are actually completely wrong, as the agents have all been living in a simulation or are actually insane with false memories of correct predictions, etc.
The question makes sense, but the answer is the most emphatic NO that it is possible to give. Even in some hypothetical solipsistic universe in which only one bodiless mind exists and anything else is just internal experiences of that mind, that mind objectively exists.
It is conceivable to suppose a universe in which everything is a simulation in some lower-level universe resulting in an ordering with no least element to qualify as base-level reality, but this is still an objective fact about such a universe.
We do seem have have been talking past each other to some extent. Base-level reality, for course, exists if you define it to be “what really exists”.
However, I’m a little unsure about if that’s how people use the word. I mean, if someone asked me if Santa really exists, I’d say “No”, but if they asked if chairs really existed, I’d say “Yes”. That doesn’t seem wrong to me, but I thought our base-level reality only contained subatomic particles, not chairs. Does this mean the statement “Chairs really exist” is actually wrong? Or I am misinterpreting?
I’m also wondering how people justify thinking that models talking about things like chairs, trees, and anything other than subatomic particles don’t “really exist”. Is this even true?
I’m just imagining talking with some aliens with no distinction between base-level reality and what we would consider mere abstractions. For example, suppose the aliens knew about chairs, when they discovered quantum theory, they said say, “Oh! There are these atom things, and when they’re arrange in the right way, they cause chairs to exist!” But suppose they never distinguished between the subatomic particles being real and they chairs being real: they just saw both subatomic particles and chairs to both be fully real, and the correct arrangement of the former caused the latter to exist.
How could I argue with such aliens? They’re already making correct predictions, so I don’t see any way to show them evidence that disproves them. Is there some abstract reason to think models about thing like chairs don’t “really exist”?
The main places I’ve see the term “base-level reality” used are in discussions about the simulation hypothesis. “Base-level” being the actually real reality where sensory information tells you about interactions in the actual real world, as opposed to simulations where the sensory information is fabricated and almost completely independent of the rules that base-level reality follows. The abstraction is that the base-level reality serves as a foundation on which (potentially) a whole “tower” of simulations-within-simulations-within-simulations could be erected.
That semantic excursion aside, you don’t need to go to aliens to find beings that hold subatomic particles as being ontologically equivalent with chairs. Plenty of people hold that they’re both abstractions that help us deal with the world we live in, just at different length scales (and I’m one of them).
Well, even in a simulation, sensory information still tells you about interactions in the actual real world. I mean, based on your experiences in the simulation, you can potentially approximately infer the algorithm and computational state of the “base-level” computer you’re running on, and I believe those count as interactions in the “actual real world”. And if your simulation is sufficiently big and takes up a sufficiently large amount of the world, you could potentially learn quite a lot about the underlying “real world” just by examining your simulation.
That said, I still can’t say I really understand the concept of “base-level reality”. I know you said it’s what informs you about the “actual real world”, but this feels similarly confusing to me as defining base-level reality as “what really exists”. I know that reasoning and talking about things so abstract is hard and can easily lead to nonsense, but I’m still interested.
I’m curious about what even the purpose is of having an ontologically fundamental distinction between base-level reality and abstractions, and whether it’s worth having. When asking, “Should I treat base-level reality and abstractions as fundamentally distinct?”, I think a good way to approximate this is by asking “Would I want an AI to reason as if its abstractions and base-level reality were fundamentally distinct?”
And I’m not completely sure they should. AIs, to reason practically, need to use “abstractions” in at least some of their models. If you want, you could have a special “This is just an abstraction” or “this is base-level reality” tag on each of your models, but I’m not sure what the benefit of this would be or what you would use it for.
Even without such a distinction, an AI would have both models that would be normally considered abstractions, as well as those of what you would think of as base-level reality, and would select which models to use based on their computational efficiency and the extent to which they are relevant and informative to the topic at hand. That sounds like a reasonable thing to do, and I’m not clear how ascribing fundamental difference to “abstractions” and “base-level” reality would do better than this.
If the AI talks with humans that use the phrase “base-level reality”, then it could potentially be useful for the AI to come up with an is-base-level-reality predicate in its world model, in order to answer questions like, “When will this person call something base-level reality?” But such a predicate wouldn’t be treated as fundamentally different from any other predicate, like “is a chair”.
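To make the idea concrete, here is a minimal sketch of what “the predicate isn’t treated as fundamentally different” could mean. All names and rules here are hypothetical illustrations, not any real system’s API: both predicates just live side by side in the same ordinary table of learned concepts.

```python
# Hypothetical sketch: an AI's learned predicates stored uniformly, with
# "is_base_level_reality" treated no differently from "is_a_chair".
# All predicate names and feature keys are illustrative inventions.

def is_a_chair(thing: dict) -> bool:
    """Learned predicate: does this object fit the 'chair' cluster?"""
    return thing.get("has_seat", False) and thing.get("supports_sitting", False)

def is_base_level_reality(thing: dict) -> bool:
    """Learned predicate: would a human call this 'base-level reality'?
    Approximated here by one crude learned rule."""
    return thing.get("described_by_fundamental_physics", False)

# Both predicates sit in the same table; neither is ontologically special.
predicates = {
    "is_a_chair": is_a_chair,
    "is_base_level_reality": is_base_level_reality,
}

chair = {"has_seat": True, "supports_sitting": True,
         "described_by_fundamental_physics": False}
electron = {"described_by_fundamental_physics": True}

print(predicates["is_a_chair"](chair))                # True
print(predicates["is_base_level_reality"](electron))  # True
```

The point of the sketch is only structural: the “base-level reality” concept is just another entry in the dictionary, queried the same way as any other.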
Do you want an AI to be able to conceive of anything along the lines of “how correct is my model”, to distinguish hypothetical from actual, or illusion from substance?
If you do, then you want something that fits in the conceptual space pointed at by “base-level reality”, even if it doesn’t use that phrase or even have the capability to express it.
I suppose it might be possible to have a functioning AI that is capable of reasoning and forming models without being able to make any such distinctions, but I can’t see a way to do it that won’t be fundamentally crippled compared with human capability.
I’m interested in your thoughts on how the AI would be crippled.
I don’t think it would be crippled in terms of empirical predictive accuracy, at least. The AI could still come up with all the low-level models like quantum physics, as well as keep the abstract ones like “this is what a chair is”, and then just use whichever it needs to achieve the highest possible predictive accuracy in a given circumstance.
If the AI is built to make and run quantum physics experiments, then in order to have high predictive accuracy it would need to learn and use an accurate model of quantum physics. But I don’t see why you would need a distinction between base-level reality and abstractions to do that.
The AI could still learn a sense of “illusion”. If the AI is around psychotic people who experience illusions a lot, then I don’t see what’s stopping the AI from forming a model saying, “Some people experience these things called ‘illusions’, and it makes them take the wrong actions or make wrong predictions, as specified in <insert model of how people react to illusions>”.
And I don’t see why the AI wouldn’t be able to consider the possibility that it also experiences illusions. For example, suppose the AI is in the desert and keeps seeing what looks like an oasis. But when the AI gets closer, it sees only sand. To have higher predictive accuracy in this situation, the AI could learn a (non-ontologically fundamental) “is-an-illusion” predicate.
Would the crippling be in terms of scoring highly on its utility function, rather than just predicting percepts? I don’t really see how this would be a problem. I mean, suppose you want an AI to make chairs. Then even if the AI lacked a notion of base-level reality, it could still learn accurate models of how chairs work and how they are manufactured. Then the AI could have its utility function defined in terms of its notion of chairs to make it make chairs.
Could you give any specific example in which an AI using no ontologically fundamental notion of base-level reality would either make the wrong prediction or make the wrong action, in a way that would be avoided by using such a notion?
This feels like a bait-and-switch since you’re now talking about this in terms of an “ontologically fundamental” qualifier where previously you were only talking about “ontologically different”.
To you, does the phrase “ontologically fundamental” mean exactly the same thing as “ontologically different”? It certainly doesn’t to me!
It was a mistake for me to conflate “ontologically fundamental” and “ontologically different”.
Still, I had in mind that they were ontologically different in some fundamental way. It was my mistake to merely use the word “different”. I had imagined that to make a reasonable AI, it would actually make sense to hard-code some notion of base-level reality as well as abstractions, and to treat them differently. For example, you could have the AI have a single prior over “base-level reality”, then just come up with whatever abstractions work well for predictively approximating the base-level reality. Instead, it seems like the AI could just learn the concept of “base-level reality” like it would learn any other concept. Is this correct?
Also, in the examples I gave, I think the AI wouldn’t actually have needed a notion of base-level reality. The concept of a mirage is different from the concept of non-base-level reality. So is the concept of a mental illusion. Understanding both of those is different than understanding the concept of base-level reality.
If humans use the phrase “base-level reality”, I still don’t think it would be strictly necessary for an AI to have the concept. The AI could just know rules of the form, “If you ask a human if x is base-level reality, they will say ‘yes’ in the following situations...”, and then describe the situations.
So it doesn’t seem to me like the actual concept of “base-level reality” is essential, though it might be helpful. Of course, I might be missing or misunderstanding something. Corrections are appreciated.
Different in a narrow sense yes. “Refraction through heated air that can mislead a viewer into thinking it is reflection from water” is indeed different from “lifetime sensory perceptions that mislead about the true nature and behaviour of reality”. However, my opinion is that any intelligence that can conceive of the first without being able to conceive of the second is crippled by comparison with the range of human thought.
I don’t think you would actually need a concept of base-level reality to conceive of this.
First off, let me say that it seems pretty hard to come up with lifetime sensory percepts that would mislead about reality. Even if the AI was in a simulation, the physical implementation is part of reality, and the AI could learn about it. From this, the AI could also potentially learn about the world outside the simulation. AIs commonly try to come up with the simplest (in terms of description length), most predictively accurate model of their percepts that they can, and I bet the simplest models would involve a world outside the simulation, with specified physics, that would result in the simulation being built.
That said, lifetime sensory percepts can still mislead. For example, the simplest, highest-prior models that explain the AI’s percepts might say it’s in a simulation run by aliens. However, suppose the AI’s simulation actually just poofed into existence without a cause, and the rest of the world is filled with giant hats and no aliens. An AI, even without a distinction between base-level reality and abstractions, would still be able to come up with this model. If this isn’t a model involving percepts misleading you about the nature of reality, I’m not sure what is. So it seems to me that such AIs would be able to conceive of the idea of percepts misleading about reality. And the AIs would assign low probability to being in the all-hat world, just as they should.
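The description-length reasoning above can be sketched numerically. This is a toy illustration, not a real induction algorithm: I’m assuming made-up description lengths in bits for two hypotheses and weighting each by 2 to the power of minus its length, in the style of a simplicity prior.

```python
# Toy sketch of a simplicity prior: hypotheses about "what generated my
# percepts" get weight 2**(-description length in bits). The lengths
# below are invented purely for illustration.

hypotheses = {
    "simulation built by an alien civilization": 40,   # bits (assumed)
    "uncaused simulation in an all-hat world": 90,     # bits (assumed)
}

weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
total = sum(weights.values())
posterior = {h: w / total for h, w in weights.items()}

for h, p in posterior.items():
    print(f"{h}: {p:.3e}")
```

With these numbers, the all-hat hypothesis is conceivable and receives nonzero probability, but it is penalized by roughly a factor of 2⁻⁵⁰ relative to the simpler alien-simulation hypothesis, which matches the claim that such an AI “would assign low probability to being in the all-hat world”.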
The only means would be errors in the simulation.
Any underlying reality that supports Turing machines or any of the many equivalents can simulate every computable process. Even in the case of computers with bounded resources, there are corresponding theorems that show that the process being computed does not depend upon the underlying computing model.
So the only thing that can be discerned is that the underlying reality supports computation, and says essentially nothing about the form that it takes.
How can it conceive of the idea of percepts misleading about reality if it literally can’t conceive of any distinction between models (which are a special case of abstractions) and reality?
Well, the only absolute guarantee the AI can make is that the underlying reality supports computation.
But it can still probabilistically infer other things about it. Specifically, the AI knows not only that the underlying reality supports computation, but also that there was some underlying process that actually created the simulation it’s in. Even though Conway’s Game of Life can allow for arbitrary computation, many possible configurations of the world state would result in no AI simulations being made. The configurations that would result in AI simulations being made would likely involve some sort of intelligent civilization creating the simulations. So the AI could potentially predict the existence of this civilization and infer some things about it.
Regardless, even if the AI can’t infer anything else about outside reality, I don’t see how this is a fault of not having a notion of base-level reality. I mean, if you’re correct, then it’s not clear to me how an AI with a notion of base-level reality would do inferentially better.
I know we’ve been going back and forth a lot, but I think these are pretty interesting things to talk about, so I thank you for the discussion.
It might help if you try to describe a specific situation in which the AI makes the wrong prediction or takes the wrong action for its goals. This could help me better understand what you’re thinking about.
At this point I’m not sure there’s much point in discussing further. You’re using words in ways that seem self-contradictory to me.
You said “the AI could still consider the possibility that the world is composed of [...]”. Considering a possibility is creating a model. Models can be constructed about all sorts of things: mathematical statements, future sensory inputs, hypothetical AIs in simulated worlds, and so on. In this case, the AI’s model is about “the world”, that is to say, reality.
So it is using a concept of model, and a concept of reality. It is only considering the model as a possibility, so it knows that not everything true in the model is automatically true in reality and vice versa. Therefore it is distinguishing between them. But you posited that it can’t do that.
To me, this is a blatant contradiction. My model of you is that you are unlikely to post blatant contradictions, so I am left with the likelihood that what you mean by your statements is wholly unlike the meaning I assign to the same statements. This does not bode well for effective communication.
Yeah, it might be best to wrap up the discussion. It seems we aren’t really understanding what the other means.
Well, I can’t say I’m really following you there. The AI would still have a notion of reality. It just would consider abstractions like chairs and tables to be part of reality.
There is one thing I want to say, though. We’ve been discussing the question of whether a notion of base-level reality is necessary to avoid severe limitations in reasoning ability. And to see why I think it’s not, just consider regular humans. They often don’t draw a distinction between base-level reality and abstractions. And yet, they can still reason about the possibility of life-long illusions as well as function well enough to accomplish their goals. And if you taught someone the concept of “base-level reality”, I’m not sure it would help them much.
It sounds like you’re using very different expectations for those questions, as opposed to the very rigorous interrogation of base reality. ‘Does Santa exist?’ and ‘does that chair exist?’ are questions which (implicitly, at least) are part of a system of questions like ‘what happens if I set trip mines in my chimney tonight?’ and ‘if I try to sit down, will I fall on my ass?’ which have consequences in terms of sensory input and feedback. You can respond ‘yes’ to the former, if you’re trying to preserve a child’s belief in Santa (although I contend that’s a lie) and you can truthfully answer ‘no’ to the latter if you want to talk about an investigation of base reality.
Of course, if you answer ‘no’ to ‘does that chair exist?’ your interlocutor will give you a contemptuous look, because that wasn’t the question they were asking, and you knew that, and you chose to answer a different question anyway.
I choose to think of this as different levels of resolution, or as varying bucket widths on a histogram. To the question ‘does Jupiter orbit the Sun?’ you can productively answer ‘yes’ if you’re giving an elementary school class a basic lesson on the structure of the solar system. But if you’re trying to slingshot a satellite around Ganymede, the answer is going to be no, because the Solar-Jovian barycenter is way outside the solar corona, and at the level you’re operating, that’s actually relevant.
Most people don’t use the words ‘reality’ or ‘exist’ in the way we’re using it here, not because people are idiots, but because they don’t have a coherent existential base for non-idiocy, and because it’s hard to justify the importance of those questions when you spend your whole life in sensory reality.
As to the aliens, well, if they don’t distinguish between base level reality and abstractions, they can make plenty of good sensory predictions in day-to-day life, but they may run into some issues trying to make predictions in high-energy physics. If they manage to do both well, it sounds like they’re doing a good job operating across multiple levels of resolution. I confess I don’t have a strong grasp on the subject, or on the differences between a model being real versus not being real in terms of base reality, I’m gonna wait on JBlack’s response to that.
Relevant links (which you’ve probably already read):
How an Algorithm Feels From the Inside, Eliezer Yudkowsky
The Categories Were Made for Man, not Man for the Categories, Scott Alexander
Ontological Remodeling, David Chapman
The correctness of that post has been disputed; for an extended rebuttal, see “Where to Draw the Boundaries?” and “Unnatural Categories Are Optimized for Deception”.
Thanks Zack!
I generally agree with the content of the articles you linked, and that there are different notions of “really exist”. The issue is, I’m still not sure what “base-level reality” means. JBlack said it was what “really exists”, but since JBlack seems to be using a notion of “what really exists” that’s different from the one people normally use, I’m not really sure what it means.
In the end, you can choose to define “what really exists” or “base-level reality” however you want, but I’m still wondering about what people normally take them to mean.
I try to avoid using the word ‘really’ for this sort of reason. Gets you into all sorts of trouble.
(a) JBlack is using a definition related to simulation theory, and I don’t know enough about this to speculate too much, but it seems to rely on a hard discontinuity between base and sensory reality.
(b) Before I realized he was using it that way, I thought the phrase meant ‘reality as expressed on the most basic level yet conceivable’ which, if it is possible to understand it, explodes the abstractions of higher orders and possibly results in their dissolving into absurdity. This is a softer transition than the above.
(c) I figure most people use ‘really exist’ to refer to material sensory reality as opposed to ideas. This chair exists, the Platonic Idea of a chair does not. The rule with this sort of assumption is ‘if I can touch it, or it can touch me, it exists’ for a suitably broad understanding of ‘touch.’
(d) I’ve heard some people claim that the only things that ‘really exist’ are those you can prove with mathematics or deduction, and mere material reality is a frivolity.
(e) I know some religious people believe heavily in the primacy of God (or whichever concept you want to insert here) and regard the material world as illusory, and that the afterlife is the ‘true’ world. You can see this idea everywhere from the Kalachakra mandala to the last chapter of the Screwtape letters.
I guess the one thing uniting all these is that, if it were possible to take a true Outside View, this is what you would see; a Platonic World of ideas, or a purely material universe, or a marble held in the palm of God, or a mass of vibrating strings (or whatever the cool kids in quantum physics are thinking these days) or a huge simulation of any of the above instantiated on any of the above.
I think most people think in terms of option c, because it fits really easily into a modern materialist worldview, but the prevalence of e shouldn’t be downplayed. I’ve probably missed some important ones.