B: “Okay, cool, but that’s information you constructed from within our universe, and so it’s contingent on the process you used to construct it; it’s not actually a God’s-eye view but an inference of one. Thus you should be very careful what you do with that information, because if you start to use it as the basis for your reasoning you’re making everything contingent on it, and thus necessarily more likely to be mistaken in some way that will bite you in the ass at the limits, even if it’s fine 99.99% of the time. And since I happen to know you care about AGI alignment, and AGI alignment is in large part about getting things right at the extreme limits, you should probably think hard about whether you’re setting yourself up to be inadvertently turned into a paperclip.”
It seems like you’re having B jump on argument #2, whereas I’m interested in a defense of #1. In other words, it’s trivial to say “we can’t be literally 100% certain that there’s an objective universe”, because after all we can’t be literally 100% certain of anything whatsoever. I can’t be literally 100% certain that 7 is prime either. But I would feel very comfortable building an AGI that kills everyone iff 7 is composite. Or if you want a physical example, I would feel very comfortable building an AGI that kills everyone iff the sun is not powered primarily by nuclear fusion. You have posts with philosophical arguments; are you literally 100% certain that those arguments are sound? It’s a fully general counterargument!
I don’t think your opinion is really #2, i.e. “There’s probably an objective universe out there, but we can only be 99.99% confident of that, not literally 100%.” In the previous discussion you seemed to be saying, with some confidence, that you regard “there is an objective universe” as false if not nonsensical. Sorry if I’m misunderstanding.
In your quote above, you use the term “construct”, and I’m not sure why. GoL-person-A inferred that there is a list of lists, and inferred some properties of that list. And there is in fact a list of lists. And it does in fact have those properties. A is right, and if B is defending position #1, then B is wrong. Then we can talk about what types of observations and reasoning steps A and B might have used to reach their respective conclusions, and we can update our trust in those reasoning steps accordingly. And it seems to me that A and B would be making the same kinds of observations and arguments in their GoL universe that we are making in our string theory (or whatever) universe.
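For concreteness, here’s a minimal sketch of what A is positing (the grid size, the wrap-around boundary, and the `step` name are all arbitrary choices for illustration): the entire GoL universe really is just a list of lists of booleans, plus Conway’s four rules as the update dynamics.

```python
# A toy "God's-eye view" of a Game of Life universe: the whole universe
# is literally a list of lists of booleans, and its only dynamics are
# Conway's four rules. N and the toroidal boundary are arbitrary here.

N = 8  # side length; the universe is an N x N grid of booleans

def step(grid):
    """Apply Conway's rules once, returning the next universe state."""
    nxt = [[False] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            # Count live neighbors, wrapping around the edges.
            neighbors = sum(
                grid[(i + di) % N][(j + dj) % N]
                for di in (-1, 0, 1)
                for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
            )
            if grid[i][j]:
                # Survival with 2 or 3 neighbors; otherwise under- or overpopulation.
                nxt[i][j] = neighbors in (2, 3)
            else:
                # Reproduction with exactly 3 neighbors.
                nxt[i][j] = neighbors == 3
    return nxt
```

A never observes `grid` directly, only patterns within it; even so, the claim “my universe is such a grid, evolving by these rules” is straightforwardly true or false of the universe as a whole.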
Perhaps it seems like I’m not really defending #1 because it all still has to add up to normality: I’m not going to go around claiming an objective universe is total nonsense except in a fairly technical sense. In an everyday sense I’m going to act not much differently than a person who claims there definitely is an objective reality, because I’ve still got to respond to the conditions I find myself in.
From a pragmatic perspective, most of the time it doesn’t matter what you believe so long as you get the right outcome, and that holds over a surprisingly large space, which makes it hard to find the places where things break down. Mostly they break down when you try to justify how things are grounded and, instead of stopping when it’s practical, keep going until you can’t go any further. Those are the kinds of places where rejecting #3 (except as something contingent) and accepting something more like #1 starts to make sense, because you end up getting underneath the processes that were used to justify the belief in #3.
I still feel like you’re dodging the GoL issue. The situation is not that “GoL-person-A has harmless confusions, and it’s no big deal because they’ll still make good decisions”. The situation is that GoL-person-A is actually literally technically correct. There is in fact a list of lists of booleans, and it does have certain mathematical properties like obeying these four rules. Are you:
A) disagreeing with that? or
B) saying that GoL-person-A is correct by coincidence, i.e. that A did not have any sound basis to reach the beliefs that they believe, but they just happened to guess the right answer somehow? or
C) Asserting that there’s an important difference between the conversation between A and B in the GoL universe, versus the conversation between you and me in the string theory (or whatever) universe? or
D) something else?
B
OK thanks. I’m now kinda confused about your perspective because there seems to be a contradiction:
On the one hand, I think you said you were sympathetic to #1 (that “There is a God’s-eye view of the world” is utter nonsense; it’s just a confused notion, like “the set of all sets that don’t contain themselves”).
On the other hand, you seem to be agreeing here that “There is a God’s-eye view of the world” is something that might actually be true, and in fact is true in our GoL example.
Anyway, if we go with the second bullet point, i.e. “this is a thing that might be true”, then we can label it a “hypothesis” and put it into a Bayesian analysis, right?
To be specific: Let’s assume that GoL-person-A formulated the hypothesis: “There is a God’s-eye view of my universe, in the form of a list of lists of N² booleans with thus-and-such mathematical properties etc.”.
Then, over time, A keeps noticing that every prediction the hypothesis has ever made has come true.
So, being a good Bayesian, A’s credence on the hypothesis goes up and up, asymptotically approaching 100%.
This strikes me as a sound, non-coincidental reason for A to have reached that (correct) belief. Where do you disagree?
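A toy version of that update, assuming (arbitrarily, for illustration) that each confirmed prediction is five times likelier if the hypothesis is true than if it’s false:

```python
# Sketch of A's Bayesian update: start uncertain, multiply in a
# likelihood ratio for each prediction that comes true. The ratio of 5
# and the 20 confirmations are arbitrary illustrative numbers.
prior = 0.5
likelihood_ratio = 5.0  # P(prediction holds | H) / P(prediction holds | not-H)

credence = prior
for _ in range(20):  # 20 confirmed predictions
    odds = credence / (1 - credence)
    odds *= likelihood_ratio
    credence = odds / (1 + odds)

print(credence)  # ~1 - 1e-14: approaching, but never reaching, 100%
```

The credence climbs toward 1 without ever getting there, which is exactly the “asymptotically approaching 100%” behavior.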
The point is kinda that you can take it to be a hypothesis and watch your credence in it approach 100%. That’s not possible if that hypothesis is instead assumed to be true. I mean, you might still run the calculations, but they don’t matter, since you couldn’t change your mind in such a situation even if you wanted to.
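To make that concrete in the same toy setting as above (where the hypothetical `update` is just Bayes’ rule written out): a credence of exactly 1 is a fixed point that no evidence, however damning, can move.

```python
# With credence pinned at exactly 1, Bayes' rule can never move it:
# P(H|E) = P(E|H) * 1 / (P(E|H) * 1 + P(E|not-H) * 0) = 1 for any E.
def update(credence, p_e_given_h, p_e_given_not_h):
    numerator = p_e_given_h * credence
    return numerator / (numerator + p_e_given_not_h * (1 - credence))

print(update(1.0, 0.001, 0.999))  # 1.0: even strongly disconfirming evidence does nothing
```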
I think the baked-in absurdity of that claim (people do in fact reject assumptions) points at why there’s actually no contradiction in my statements. It’s both true that I don’t have access to the “real” God’s-eye view and that I can reconstruct one, though I will never be able to be 100% sure that I have. I mean to be descriptive of how we find reality: we don’t have access to anything other than our own experience, and yet we’re able to infer lots of stuff. I’m just trying to be especially careful not to ground anything prior in the chain of epistemic reasoning on something inferred downstream, and that means not predicating certain kinds of knowledge on the existence of an objective reality, because I need those kinds of knowledge to get to the point of being able to infer the existence of an objective reality.