20: What is the probability that this is the ultimate base layer of reality?
Eliezer gave the joke answer to this question, because this is something that seems impossible to know.
However, I myself assign a significant probability that this is not the base level of reality. TheUncertainFuture.com tells me that I assign a 99% probability to AI by 2070, and the probability it reports starts approaching .99 even before 2070. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it’s possible that transhumans won’t run ancestor simulations, but I would want to run ancestor simulations, so that my merged transhuman mind could assimilate the experience of running a human consciousness of myself through interesting points in human history.
The zero-one-infinity rule also makes it seem more unlikely that this is the base level of reality. http://catb.org/jargon/html/Z/Zero-One-Infinity-Rule.html
It seems rather convenient that I am living in the most interesting period in human history. Not to mention I have a lifestyle in the top 1% of all humans living today.
I believe this is a minority viewpoint here, so my rationalist calculus is probably wrong. Why?
In my posts, I’ve argued that indexical uncertainty like this shouldn’t be represented using probabilities. Instead, I suggest that you consider yourself to be all of the many copies of you, i.e., both the ones in the ancestor simulations and the one in 2010, making decisions for all of them. Depending on your preferences, you might consider the consequences of the decisions of the copy in 2010 to be the most important and far-reaching, and therefore act mostly as if that was the only copy.
BTW, I agree with this.
Coming back to this comment, it seems to be another example of UDT giving a technically correct but incomplete answer.
Imagine you have a device that will tell you, tomorrow at 12am, whether you are in a simulation or in the base layer. (It turns out that all simulations are required by multiverse law to have such devices.) There’s probably not much you can do before 12am tomorrow that can cause important and far-reaching consequences. But fortunately you also have another device that you can hook up to the first. The second device generates moments of pleasure or pain for the user. More precisely, it gives you X pleasure/pain if you turn out to be in a sim, and Y pleasure/pain if you are in the base layer (presumably X and Y have different signs). Depending on X and Y, how do you decide whether to turn the second device on?
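Under an ordinary probabilistic treatment (this gloss is mine, not part of the comment above), where p is your credence that you are in a sim, you would switch the second device on exactly when the expected payoff is positive:

\[
p X + (1 - p) Y > 0 .
\]

The force of the example is that UDT declines to supply such a p, so it is not obvious what rule is supposed to take this one's place.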
Have you pulled it all together anywhere? I’ve sometimes seen & thought this Pascal’s wager-like logic before (act as if your choices matter because if they don’t...), but I’ve always been suspicious precisely because it looks too much to me like Pascal’s wager.
I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say. But to expand a bit more on what I wrote in the grandparent: in the Simulation Argument, the decision of the original you interacts with the decisions of the simulations. If you make the wrong decision, your simulations might end up not existing at all, so it doesn’t make sense to put a probability on “being in a simulation”. (This is like the absent-minded driver problem, where your decision at the first exit determines whether you get to the second exit.)
I’m not sure I see what you mean by “Pascal’s wager-like logic”. Can you explain a bit more?
A top-level post on the application of TDT/UDT to the Simulation Argument would be worthwhile even if it was just a paragraph or two long.
A top-level post telling me whether TDT and UDT are supposed to be identical or different (or whether they are the same theory at different levels of development) would also be handy!
“I’ve thought about writing a post on the application of TDT/UDT to the Simulation Argument, but I couldn’t think of much to say.”
I think that’s enough. I feel I understand the SA very well, but not TDT or UDT much at all; approaching the latter from the former might make things click for me.
I mean that I read Pascal’s Wager as basically ‘p implies x reward for believing in p, and ~p implies no reward (either positive or negative); thus, best to believe in p regardless of the evidence for p’. (Clumsy phrasing, I’m afraid.)
Your example sounds like that: ‘believing you-are-not-being-simulated implies x utility (motivation for one’s actions & efforts), and if ~you-are-not-being-simulated then your utility to the real world is just 0; so believe you-are-not-being-simulated.’ This seems to be a substitution of ‘not-being-simulated’ into the PW schema.
If the probability that you are inside a simulation is p, what’s the probability that your master simulator is also simulated?
How tall is this tower, most likely?
Being in a simulation within a simulation (nested to any level) implies being in a simulation. The proper decomposition is p = sum over all positive N of (probability of simulation nested to level N)
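Spelled out in symbols (the same content as the sentence above, with N the nesting depth):

\[
P(\text{simulated}) = \sum_{N=1}^{\infty} P(\text{simulated, nested exactly } N \text{ levels deep}),
\]

which only yields a number if you can actually put a prior over N; that is exactly the step disputed further down the thread.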
The top simulator has N operations to execute before his free enthalpy basin is empty.
Every level down, this number is smaller. Before long, it becomes impossible to create a nontrivial simulation inside the current one. That one is the bottom of the tower.
This simulation tower is just a great way to squander all the free enthalpy you have. Is the top simulation master that stupid?
I doubt it.
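To put rough numbers on the “tower bottoms out” intuition: if each level can pass only a fixed fraction of its host’s operation budget down to the simulation it runs, the budget shrinks geometrically and the tower is only a handful of levels deep. A toy sketch (every figure here is invented purely for illustration):

```python
# Toy model of a simulation tower: each level can devote only a fixed
# fraction ("overhead") of its own operation budget to the simulation
# it hosts. All numbers below are made up for illustration.

def tower_depth(top_ops: float, overhead: float, min_ops: float) -> int:
    """Count nested levels before a level's budget falls below the
    minimum needed to run a nontrivial simulation."""
    depth = 0
    ops = top_ops
    while ops * overhead >= min_ops:
        ops *= overhead
        depth += 1
    return depth

# e.g. a top-level budget of 1e40 operations, 1% passed down per level,
# and 1e20 operations needed for a "nontrivial" simulation:
print(tower_depth(1e40, 0.01, 1e20))  # roughly 10 levels deep
```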
In that sense, there’s actually a significant risk to the singularity. Why should the simulation master (I usually facetiously use the phrase “our overlords” when referring to this entity) let us ever run a simulation that is likely to result in an infinitely nested simulation? Maybe that’s why the LHC keeps blowing up.
You also need to include scenarios for infinitely high towers, or closed-loop towers, or branching and merging networks, or one simulation being run in several (perhaps infinitely many) simulating worlds, or the other way around...
I don’t think we can assign a meaningful prior to any of these, and so we can’t calculate the probability of being in a simulation.
I don’t think the probability calculation is meaningful, because the infinities mess it up. But you still need to ask: are you in the original 2010, or in one of the infinitely many possible simulated 2010s? I can’t assign a probability, but I have a strong intuition when comparing one to infinity.
The Zero-One-Infinity Rule hasn’t been shown to apply to our reality, and even if it did, it would also permit “One”.
“It seems rather convenient that I am living in the most interesting period in human history.”
Can you give us a list of most-to-least interesting periods in human history? You have an Anglo name, and I think you’re living in a particularly boring period of Anglo-American history. (If you had an Arab name, this might be an interesting period, though not as interesting as if you were an Arab in the period of Mohammed or the first few Caliphs.)
“I would want to run ancestor simulations, so that my merged transhuman mind could assimilate the experience of running a human consciousness of myself through interesting points in human history.”
You don’t actually know what you would want with a transhuman mind. If simulations are fully conscious (the only sort of simulation relevant to our argument), I think that would be a particularly cruel thing for a transhuman mind to want.
You are suggesting a world with much more energy than the one that we know. It seems you should assign a lower probability to there being a much higher-energy universe.
By the zero-one-infinity rule, I also think it likely that there are infinitely many spatial dimensions. Just a few extra spatial dimensions should give you plenty of computing power to run a lower-dimensional universe.
Wow, I really am curious why you think this would apply to spatial dimensions.
Why do you think there are only 3 or 4 or 5 or 6 or 8 or 12 or 42 or 248 or n spatial dimensions? If there actually are 42 spatial dimensions, I will accept that as proof of the existence of God, and clear evidence that he is a fan of Douglas Adams.
The extra dimensions might well not impact our system of physics in any way we can detect. They are non-measurable sets.
Also, the Jargon File seems as likely a candidate for accidentally containing universal truth as anything.
In 1 or 2 dimensions, a simple random walk returns to the origin with probability 1, and hence infinitely often. In 3 dimensions, it has only about a 34% chance of ever returning. There are nontrivial qualitative differences between numbers of spatial dimensions that we don’t see when we think “2? 3? 5? 179? It’s just a choice of N!”
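If you want to check the recurrence claim numerically rather than take Pólya’s theorem on faith, here is a rough Monte Carlo sketch (the trial count and step cap are arbitrary, and the finite cutoff biases every estimate low, especially in 2 dimensions where returns can take a very long time):

```python
import random

def returns_to_origin(dim: int, max_steps: int) -> bool:
    """Simulate one simple random walk on the integer lattice and report
    whether it revisits the origin within max_steps steps."""
    pos = [0] * dim
    for _ in range(max_steps):
        axis = random.randrange(dim)
        pos[axis] += random.choice((-1, 1))
        if all(c == 0 for c in pos):
            return True
    return False

def estimate_return_probability(dim: int, trials: int = 2000, max_steps: int = 2000) -> float:
    hits = sum(returns_to_origin(dim, max_steps) for _ in range(trials))
    return hits / trials

# Expect close to 1.0 for 1D, a noticeably lower number for 2D
# (the step cap cuts off its very slow returns), and roughly 0.34 for 3D.
for d in (1, 2, 3):
    print(d, estimate_return_probability(d))
```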
I think we have good reason to believe that we are in 3 spatial dimensions. But as you say:
“The extra dimensions might well not impact our system of physics in any way we can detect.”
What exactly is the point of these dimensions? I see no reason to concede extra dimensions just to make the hypothesis that we are living in a simulation more probable.