This feels like a bait-and-switch since you’re now talking about this in terms of an “ontologically fundamental” qualifier where previously you were only talking about “ontologically different”.
To you, does the phrase “ontologically fundamental” mean exactly the same thing as “ontologically different”? It certainly doesn’t to me!
It was a mistake for me to conflate “ontologically fundamental” and “ontologically different”.
Still, I had in mind that they were ontologically different in some fundamental way; it was my mistake to merely use the word “different”. I had imagined that to make an AI that reasons well, it would actually make sense to hard-code some notion of base-level reality as well as abstractions, and to treat them differently. For example, you could give the AI a single prior over “base-level reality”, and then have it come up with whatever abstractions work well for predictively approximating that base-level reality. Instead, it seems like the AI could just learn the concept of “base-level reality” the way it would learn any other concept. Is this correct?
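To make the hard-coded option concrete, here is a rough sketch of the architecture I had been imagining. The class names and the scoring rule are my own inventions for illustration, not a reference to any existing system.

```python
# Sketch of the "hard-coded base-level reality" design described above.
# All names here are hypothetical; this is an illustration, not an existing system.

class BaseLevelHypothesis:
    """A candidate low-level world model, given a fixed simplicity prior."""
    def __init__(self, program, description_length_bits):
        self.program = program
        self.prior = 2.0 ** -description_length_bits  # shorter programs get higher prior

    def predict(self, percept_history):
        """Distribution over the next percept according to this low-level model."""
        return self.program(percept_history)

class Abstraction:
    """A learned high-level concept (chairs, mirages, ...), kept only insofar as it
    cheaply approximates the predictions of the base-level hypotheses."""
    def __init__(self, summarize, predict_from_summary):
        self.summarize = summarize
        self.predict_from_summary = predict_from_summary

# This design treats the two kinds of object differently: base-level hypotheses
# get the prior, while abstractions are judged purely on predictive convenience.
# The alternative design has no such split, and "base-level reality" is just one
# more learned concept.
```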
Also, in the examples I gave, I think the AI wouldn’t actually have needed a notion of base-level reality. The concept of a mirage is different from the concept of non-base-level reality. So is the concept of a mental illusion. Understanding both of those is different from understanding the concept of base-level reality.
Even if humans use the phrase “base-level reality”, I still don’t think it would be strictly necessary for an AI to have the concept. The AI could just know rules of the form, “If you ask a human whether x is base-level reality, they will say ‘yes’ in the following situations...”, with the situations then spelled out.
So it doesn’t seem to me like the actual concept of “base-level reality” is essential, though it might be helpful. Of course, I might be missing or misunderstanding something; corrections are appreciated.
The concept of a mirage is different from the concept of non-base-level reality.
Different in a narrow sense yes. “Refraction through heated air that can mislead a viewer into thinking it is reflection from water” is indeed different from “lifetime sensory perceptions that mislead about the true nature and behaviour of reality”. However, my opinion is that any intelligence that can conceive of the first without being able to conceive of the second is crippled by comparison with the range of human thought.
...lifetime sensory perceptions that mislead about the true nature and behaviour of reality
I don’t think you would actually need a concept of base-level reality to conceive of this.
First off, let me say that it seems pretty hard to come up with lifetime sensory percepts that would mislead about reality. Even if the AI was in a simulation, the physical implementation is part of reality. And the AI could learn about it. From this, the AI could also potentially learn about the world outside the simulation. AIs commonly try to come up with the simplest (in terms of description length), most predictively accurate model of their percepts that they can, and I’d bet the simplest such models would involve a world outside the simulation, with specified physics, that results in the simulation being built.
That said, lifetime sensory percepts can still mislead. For example, the simplest, highest-prior models that explain the AI’s percepts might say it’s in a simulation run by aliens. However, suppose the AI’s simulation actually just poofed into existence without a cause, and the rest of the world is filled with giant hats and no aliens. An AI, even without a distinction between base-level reality and abstractions, would still be able to come up with this model. If this isn’t a model involving percepts misleading you about the nature of reality, I’m not sure what is. So it seems to me that such AIs would be able to conceive of the idea of percepts misleading about reality, and they would assign low probability to being in the all-hat world, just as they should.
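To make that concrete, here is a minimal sketch of the description-length-weighted inference I have in mind. The hypothesis names, description lengths, and likelihoods are all made up for illustration; the point is just that the all-hat world is representable as a hypothesis even though it ends up with a tiny posterior.

```python
# Minimal sketch of description-length-weighted (MDL-style) model comparison.
# Hypotheses, lengths, and likelihoods are invented for illustration.

hypotheses = {
    # name: (description length in bits, likelihood of the AI's lifetime percepts)
    "alien-run simulation":               (400, 1e-6),
    "uncaused simulation, all-hat world": (900, 1e-6),  # conceivable, just heavily penalized
}

def posterior(hypotheses):
    # Prior proportional to 2^-description_length; posterior proportional to prior * likelihood.
    unnormalized = {
        name: (2.0 ** -length) * likelihood
        for name, (length, likelihood) in hypotheses.items()
    }
    total = sum(unnormalized.values())
    return {name: weight / total for name, weight in unnormalized.items()}

print(posterior(hypotheses))
# The all-hat hypothesis is assigned a vanishingly small posterior (a factor of
# about 2^-500 here), yet the AI can still conceive of it: it sits in the same
# hypothesis space as everything else, with no special "base-level reality" machinery.
```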
Even if the AI was in a simulation, the physical implementation is part of reality. And the AI could learn about it.
The only means would be errors in the simulation.
Any underlying reality that supports Turing machines or any of the many equivalents can simulate every computable process. Even in the case of computers with bounded resources, there are corresponding theorems that show that the process being computed does not depend upon the underlying computing model.
So the only thing that can be discerned is that the underlying reality supports computation, which says essentially nothing about the form it takes.
An AI, even without a distinction between base-level reality and abstractions, [...] would be able to conceive of the idea of percepts misleading about reality
How can it conceive of the idea of percepts misleading about reality if it literally can’t conceive of any distinction between models (which are a special case of abstractions) and reality?
Well, the only absolute guarantee the AI can make is that the underlying reality supports computation.
But it can still probabilistically infer other things about it. Specifically, the AI knows not only that the underlying reality supports computation, but also that there was some underlying process that actually created the simulation it’s in. Even though Conway’s Game of Life can allow for arbitrary computation, many possible configurations of the world state would result in no AI simulations being made. The configurations that would result in AI simulations being made would likely involve some sort of intelligent civilization creating the simulations. So the AI could potentially predict the existence of this civilization and infer some things about it.
Regardless, even if the AI can’t infer anything else about outside reality, I don’t see how this is a fault of not having a notion of base-level reality. I mean, if you’re correct, then it’s not clear to me how an AI with a notion of base-level reality would do inferentially better.
How can it conceive of the idea of percepts misleading about reality if it literally can’t conceive of any distinction between models (which are a special case of abstractions) and reality?
Well, as I said before, the AI could still consider the possibility that the world is composed entirely of hats (minus the AI simulation). The AI could also have a model of Bayesian inference, and infer that the probability it would be rational to assign to “the world is all hats” is low, and that its evidence makes it lower still. So, by combining these two models, the AI can come up with a model that says, “The world is all hats, even though everything I’ve seen, according to probability theory, makes it seem like this isn’t the case.” That sounds like a model about the idea of percepts misleading about reality.
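Here is a toy sketch of what I mean by combining the two models. Every name and number is invented for illustration; the point is only that the agent can explicitly represent both a claim about the world and a claim about its own evidence.

```python
# Toy sketch: an agent combining an object-level hypothesis with a model of its
# own Bayesian inference. Names and numbers are invented for illustration.

hat_world_prior = 1e-150           # e.g. assigned by a simplicity prior
percept_likelihood_if_hats = 1e-9  # probability of the AI's percepts if the hat world is true
percept_probability = 1e-3         # overall probability of those percepts across all hypotheses

# Bayes' rule: posterior = prior * likelihood / P(percepts)
posterior_hat_world = hat_world_prior * percept_likelihood_if_hats / percept_probability

# The combined model is a proposition about reality *and* about the agent's own
# evidence, which is exactly a model of percepts misleading about reality.
combined_model = {
    "claim about reality": "the world outside my simulation is entirely hats",
    "claim about my evidence": f"everything I have seen makes this look false (posterior ~ {posterior_hat_world:.0e})",
}
print(combined_model)
```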
I know we’ve been going back and forth a lot, but I think these are pretty interesting things to talk about, so I thank you for the discussion.
It might help if you try to describe a specific situation in which the AI makes the wrong prediction or takes the wrong action for its goals. This could help me better understand what you’re thinking about.
Well, as I said before, the AI could still consider the possibility that the world is composed entirely of hats (minus the AI simulation).
At this point I’m not sure there’s much point in discussing further. You’re using words in ways that seem self-contradictory to me.
You said “the AI could still consider the possibility that the world is composed of [...]”. Considering a possibility is creating a model. Models can be constructed about all sorts of things: mathematical statements, future sensory inputs, hypothetical AIs in simulated worlds, and so on. In this case, the AI’s model is about “the world”, that is to say, reality.
So it is using a concept of model, and a concept of reality. It is only considering the model as a possibility, so it knows that not everything true in the model is automatically true in reality and vice versa. Therefore it is distinguishing between them. But you posited that it can’t do that.
To me, this is a blatant contradiction. My model of you is that you are unlikely to post blatant contradictions, so I am left with the likelihood that what you mean by your statements is wholly unlike the meaning I assign to the same statements. This does not bode well for effective communication.
Yeah, it might be best to wrap up the discussion. It seems we aren’t really understanding what the other means.
So it is using a concept of model, and a concept of reality. It is only considering the model as a possibility, so it knows that not everything true in the model is automatically true in reality and vice versa. Therefore it is distinguishing between them. But you posited that it can’t do that.
Well, I can’t say I’m really following you there. The AI would still have a notion of reality; it would just consider abstractions like chairs and tables to be part of reality.
There is one thing I want to say, though. We’ve been discussing the question of whether a notion of base-level reality is necessary to avoid severe limitations in reasoning ability. To see why I think it’s not, just consider regular humans. They often don’t draw a distinction between base-level reality and abstractions, and yet they can still reason about the possibility of lifelong illusions and still function well enough to accomplish their goals. And if you taught someone the concept of “base-level reality”, I’m not sure it would help them much.