Either way, I can write down a list of N² binary numbers each timestep, describing whether each tile is ON or OFF. I would describe this list of lists as a “view from nowhere”—i.e., an “objective” complete description of this GOL universe. Would you?
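To make that concrete, here is a minimal Python sketch of the description I have in mind. Everything in it (the grid size, the toroidal wrap-around at the edges, the glider pattern, and the names `N`, `step`, and `history`) is an illustrative assumption of mine, not part of the GOL setup above:

```python
# A minimal sketch of the "view from nowhere" as a data structure: the complete
# state of an N-by-N Game of Life universe at one timestep is N^2 booleans, and
# its whole history is a list of such lists. (N, the toroidal boundary, and the
# glider pattern are illustrative assumptions, not part of the original setup.)

N = 8  # illustrative universe size

def step(grid):
    """Apply Conway's rules to produce the next timestep."""
    def live_neighbors(r, c):
        # Count the live cells among the eight neighbors, wrapping at the
        # edges (a torus) to keep the sketch finite and self-contained.
        return sum(
            grid[(r + dr) % N][(c + dc) % N]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )
    # A cell is ON next timestep iff it has exactly 3 live neighbors, or it is
    # currently ON and has exactly 2; otherwise it is OFF.
    return [
        [live_neighbors(r, c) == 3 or (grid[r][c] and live_neighbors(r, c) == 2)
         for c in range(N)]
        for r in range(N)
    ]

# Seed with a glider, then record the complete description: one flat list of
# N^2 booleans per timestep.
grid = [[False] * N for _ in range(N)]
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r][c] = True
history = []
for _ in range(5):
    history.append([cell for row in grid for cell in row])
    grid = step(grid)
```

Nothing in `history` refers to any observer; it is just the complete list of lists.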
How does a conscious observer within the GOL universe write down this list?
They don’t. The map is not the territory, right?
We were talking about “view from nowhere”, with G Gordon saying “obviously there is no view from nowhere” and I was saying “yes there’s a view from nowhere, or else maybe I don’t know what that phrase means”. A view from nowhere, I would assume, does not need to exist inside somebody’s head, and in fact presumably does not, or else it would be “view from that person’s perspective”, right?
Presumably it needs to exist somewhere in order to exist, whether that’s in someone’s head, in a computer, or on a piece of paper.
Generally the problems with views from nowhere pop up once you start talking about embedded agency. A lot of our theories of agency assume that you have a view from nowhere and that you then somehow place your actions in it. This is an OK model for non-embedded agents like chess AIs, where we can make a small-world assumption and be reasonably accurate, but it is not a very good model for real-world generally intelligent unboxed agents.
I would surmise that we don’t disagree about anything except what the term “view from nowhere” means. And I don’t really know what “view from nowhere” means anyway, I was just guessing.
The larger context was: I think there’s a universe, and that I live in it, and that claims about the universe can be true or false independently of what I or any other creature know and believe. And then (IIUC) G Gordon was saying that this perspective is wrong or incomplete or something, and in fact I’m missing out on insights related to AI alignment by having this perspective. So that was the disagreement.
It’s possible that theories of embedded agency have something to do with this disagreement, but if so, I’m not seeing it and would be interested if somebody spelled out the details for me.
The idea of a “view from nowhere” is basically the idea that there exists some objective, non-observer-based perspective of the world. This is also sometimes called a God’s eye view of the world.
However, such a thing does not exist except to the extent that we infer things we expect to be true independent of observer conditions.
Yes, embedded agency is quite connected to all this. Basically I view embedded agency as a way of thinking about AI that avoids many of the classical pitfalls of non-subjective models of the world. The tricky thing is that for many toy models, like chess or even most AI training today, the world is constrained enough that we can have a view from nowhere onto the artificially constrained world, but we can’t get the same thing for the universe because, to extend the analogy from above a bit, we are like chess or go pieces on the board and can only see the board from our place on it, not from above it.
Can we distinguish three possible claims?
“God’s-eye view of the world” is utter nonsense—it’s just a confused notion, like “the set of all sets that don’t contain themselves” or “a positive integer that’s just like 6 in every way, except that it’s prime”.
“God’s-eye view of the world” might or might not be a concept that makes sense; we can’t really conclude with certainty one way or the other, from our vantage point.
“God’s-eye view of the world” is a perfectly sensible concept; however, we are finite beings within the world and smaller than the world, so obviously we do not ourselves have access to a God’s-eye view of the world. Likewise, an AI cannot have a God’s-eye view of its own world. Nevertheless, since “God’s-eye view of the world” is a sensible concept, we can talk about it and reason about it. (Just like “the 10^105th prime number” is a sensible concept that I can talk about and reason about and even prove things about, even if I can’t write down its digits.)
I endorse #3. I’m slightly sympathetic to #2, in the sense that, no, of course I don’t put literally 100% credence on “there is an objective reality” etc.; that’s not the kind of thing that one can prove mathematically, and I can imagine being convinced otherwise, even if I strongly believe it right now.
The reason I brought up the Game-Of-Life universe example in my earlier comment was to argue against #1.
I think it’s possible to simultaneously endorse #3 and do sound reasoning about embedded agency. Do you?
So to return to your GoL example, it only works because you exist outside the universe. If you were inside that GoL, you wouldn’t be able to construct such a view (at least based on the normal rules of GoL). I see this as exactly analogous to the case we find ourselves in: what we know about physics seems to imply that we couldn’t hope to gather enough information to ever successfully construct a God’s eye view.
This is why I make a claim more like your #1 (though, yes, #2 is obviously the right thing here because nothing is 100% certain): that a God’s eye view is basically nonsense that our minds just happen to be able to imagine as possible, because we can infer what it would be like if such a thing could exist from the sample set of our experience; but the logic of it seems to be that it just isn’t a sensible thing we could ever know about except via hypothesizing its possible existence, putting it on par with thinking about things outside our Hubble volume, for example.
I’m suspicious that someone could endorse #3 and not get confused reasoning about embedded agency, because I’d expect either that assuming #3 would cause you to get confused thinking about the embedded agency situation (getting tripped up on questions like “why can’t we just do thing X that endorsing #3 allows?”), or that thinking about embedded agency hard enough would cause you to break down the things that make you endorse #3, after which you would come to no longer endorse it. (My claim here is backed in part by the fact that I and others have basically gone down this path before one way or another, having previously assumed something like #3 and then having to unassume it because it got in the way and was inconsistent with the rest of our thinking.)
I feel like this is an argument for #3, but you’re taking it to be an argument for #1. For example “we couldn’t hope to gather enough information to ever successfully construct a God’s eye view” is exactly the thing I said in #3.
Let’s walk through the GoL example. Here’s a dialog between two GoL agents within the GoL universe:
A: “There is a list of lists of N² boolean variables describing a God’s-eye view of our universe.”
B: “Oh? If that’s true, then tell me all the entries of this alleged list of lists. Go.”
A: “Obviously I don’t know all the entries. The list has vastly more entries than I could hold in my head, or learn in a million lifetimes. Not to mention the fact that I can’t observe everything in our universe etc. etc.”
B: “Can you say anything about the entries in this list? Why even bring it up?”
A: “Oh sure! I know lots of things about the entries in the list! For example, I’m 99.99% confident that the entries in the list always obey these four rules. And I’m 99.99% confident that the sum of the entries of each list obeys the following mathematical relation: (mumble mumble). And I’m 99.99% confident that thus-and-such scientific experiment corresponds to thus-and-such pattern in the entries of the list, here let me show you the simulation results. And—”
B: “—You can stop right there. I don’t buy it. If you can’t tell me every single entry in the list of lists right now, then there is no list of lists, and everything you’re saying is total nonsense. I think you’re just deeply confused.”
OK. That’s my dialog. I think A is correct all around, and B is being very unreasonable (and I wrote it that way). I gather that you’re sympathetic to B. I’d be interested in what you would have B say differently at the end.
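(As a hedged aside, the kind of structural claim A is making can be expressed in terms of the earlier Python sketch; `N`, `step`, and `history` are my illustrative assumptions from that sketch, not anything from the dialog. The point is that A can state, and in principle check, a law that the whole list of lists must satisfy without enumerating its entries:)

```python
# A property of the entire list of lists that A can assert without knowing all
# the entries: every recorded timestep follows from the previous one under
# Conway's rules. (Reuses the illustrative N, step, and history defined above.)

def obeys_the_rules(history):
    for flat_now, flat_next in zip(history, history[1:]):
        # Unflatten the N^2 booleans back into an N-by-N grid...
        grid_now = [flat_now[r * N:(r + 1) * N] for r in range(N)]
        # ...and check that applying the rules reproduces the next timestep.
        expected = [cell for row in step(grid_now) for cell in row]
        if expected != flat_next:
            return False
    return True

assert obeys_the_rules(history)  # holds for the history recorded earlier
```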
B: “Okay, cool, but that’s information you constructed from within our universe, so it’s contingent on the process you used to construct it; it’s not actually a God’s eye view but an inference of one. Thus you should be very careful what you do with that information, because if you start to use it as the basis for your reasoning, you’re making everything contingent on it, and thus necessarily more likely to be mistaken in some way that will bite you in the ass at the limits, even if it’s fine 99.99% of the time. And since I happen to know you care about AGI alignment, and AGI alignment is in large part about getting things right at the extreme limits, you should probably think hard about whether you’re setting yourself up to be inadvertently turned into a paperclip.”
It seems like you’re having B jump on argument #2, whereas I’m interested in a defense of #1. In other words, it’s trivial to say “we can’t be literally 100% certain that there’s an objective universe”, because after all we can’t be literally 100% certain of anything whatsoever. I can’t be literally 100% certain that 7 is prime either. But I would feel very comfortable building an AGI that kills everyone iff 7 is composite. Or if you want a physical example, I would feel very comfortable building an AGI that kills everyone iff the sun is not powered primarily by nuclear fusion. You have posts with philosophical arguments; are you literally 100% certain that those arguments are sound? It’s a fully general counterargument!
I don’t think your opinion is really #2, i.e. “There’s probably an objective universe out there, but we can only be 99.99% confident of that, not literally 100%.” In the previous discussion you seemed to be frequently saying, with some confidence, that you regard “there is an objective universe” as false if not nonsensical. Sorry if I’m misunderstanding.
In your quote above, you use the term “construct”, and I’m not sure why. GoL-Person A inferred that there is a list of lists, and inferred some properties of that list. And there is in fact a list of lists. And it does in fact have those properties. A is right, and if B is defending position #1, then B is wrong. Then we can talk about what types of observations and reasoning steps A and B might have used to reach their respective conclusions, and we can update our trust in those reasoning steps accordingly. And it seems to me that A and B would be making the same kinds of observations and arguments in their GoL universe that we are making in our string theory (or whatever) universe.
Perhaps it seems like I’m not really defending #1 because it still all has to add up to normality: it’s not like I am going to go around claiming an objective universe is total nonsense, except in a fairly technical sense; in an everyday sense I’m going to act not much different from a person claiming that there definitely is an objective reality, because I’ve still got to respond to the conditions I find myself in.
From a pragmatic perspective, most of the time it doesn’t matter what you believe so long as you get the right outcome, and that holds over a surprisingly large space, where it can be hard to find the places where things break down. Mostly they break down when you try to justify how things are grounded without stopping when it’s practical, instead going until you can’t go anymore. Those are the kinds of places where rejecting #3 (except as something contingent) and accepting something more like #1 starts to make sense, because you end up getting underneath the processes that were used to justify the belief in #3.
I still feel like you’re dodging the GoL issue. The situation is not that “GoL-person-A has harmless confusions, and it’s no big deal because they’ll still make good decisions”. The situation is that GoL-person-A is actually literally technically correct. There is in fact a list of lists of booleans, and it does have certain mathematical properties like obeying these four rules. Are you:
A) disagreeing with that? or
B) saying that GoL-person-A is correct by coincidence, i.e. that A did not have any sound basis to reach the beliefs that they believe, but they just happened to guess the right answer somehow? or
C) Asserting that there’s an important difference between the conversation between A and B in the GoL universe, versus the conversation between you and me in the string theory (or whatever) universe? or
D) something else?
B
OK thanks. I’m now kinda confused about your perspective because there seems to be a contradiction:
On the one hand, I think you said you were sympathetic to #1 (“There is a God’s-eye view of the world” is utter nonsense—it’s just a confused notion, like “the set of all sets that don’t contain themselves”).
On the other hand, you seem to be agreeing here that “There is a God’s-eye view of the world” is something that might actually be true, and in fact is true in our GoL example.
Anyway, if we go with the second bullet point, i.e. “this is a thing that might be true”, then we can label it a “hypothesis” and put it into a Bayesian analysis, right?
To be specific: Let’s assume that GoL-person-A formulated the hypothesis: “There is a God’s-eye view of my universe, in the form of a list of lists of N² booleans with thus-and-such mathematical properties etc.”.
Then, over time, A keeps noticing that every prediction that the hypothesis has ever made, has come true.
So, being a good Bayesian, A’s credence on the hypothesis goes up and up, asymptotically approaching 100%.
This strikes me as a sound, non-coincidental reason for A to have reached that (correct) belief. Where do you disagree?
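(A toy numerical illustration of that updating process; the prior and the likelihoods below are arbitrary numbers I made up, not anything from the discussion:)

```python
# Bayesian updating on the hypothesis "there is a God's-eye view of my
# universe, a list of lists with thus-and-such properties". Each confirmed
# prediction pushes A's credence up; it approaches 1 asymptotically without
# ever reaching it. (All numbers here are arbitrary illustrative choices.)

credence = 0.5             # A's initial credence in the hypothesis
p_obs_given_h = 0.99       # how strongly the hypothesis predicts each observation
p_obs_given_not_h = 0.50   # how well the rival hypotheses predict it

for n in range(1, 21):
    # Bayes' rule, one confirmed prediction at a time.
    numerator = p_obs_given_h * credence
    credence = numerator / (numerator + p_obs_given_not_h * (1 - credence))
    if n in (1, 5, 10, 20):
        print(f"after {n:2d} confirmed predictions: credence = {credence:.6f}")
```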
The point is kinda that you can take it to be a hypothesis and have it approach 100% likelihood. That’s not possible if that hypothesis is instead assumed to be true. I mean, you might still run the calculations; they just don’t matter, since you couldn’t change your mind in such a situation even if you wanted to.
I think the baked-in absurdity of that last statement (since people do in fact reject assumptions) points at why I think there’s actually no contradiction in my statements. It’s both true that I don’t have access to the “real” God’s eye view and that I can reconstruct one, though I will never be able to be 100% sure that I have. Thus I mean to be descriptive of how we find reality: we don’t have access to anything other than our own experience, and yet we’re able to infer lots of stuff. I’m just trying to be especially careful not to ground anything prior in the chain of epistemic reasoning on something inferred downstream, and that means not being able to predicate certain kinds of knowledge on the existence of an objective reality, because I need those things to get to the point of being able to infer the existence of an objective reality.
Why would the universe need to exist within the universe in order for it to exist? In the GOL example, why would the whole N×N grid of bits have to be visible to some particular bit in order for them to exist?
The bits exist, but the view of the bits doesn’t exist. The map is not the territory.
It took me a day, but I can see your view on this. I think my position is fairly well reasoned through in the other thread, so I’m not going to keep this going unless you want it to (perhaps your position isn’t represented elsewhere or something).
Thanks for the concise clarification!