Probability: You are living in a simulation run by some sort of intelligence.
Probability: Other people exist independently of your own mind.
Probability: You are dreaming at this very moment. (Learning to dream lucidly is largely a matter of giving this a high probability and keeping it in mind, and updating on it when you encounter, for instance, people asking whether you’re dreaming.)
Meta comment: If these questions were in separate comments, I’d upvote/downvote them differently. I’m interested in thoughts/arguments about the probability of simulation, and I have little interest in solipsism or lucid dreaming. They don’t seem like closely related topics to me. Am I missing something?
They all seem to be asking variants on the question “how likely is it that apparent reality is real?” They also all seem to have weird properties as far as evidence is concerned, because the observable evidence must all come from the very source (observed reality) whose credibility we’re questioning.
Also, except for the solipsism one, they seem to be questions where, contrary to LW canon, it might be a good idea to deliberately self-delude (by which I mean, for instance, not bothering to look at the evidence in-depth). If I really felt a .5 probability in my bones that I was living in a simulation, I don’t think I’d be able to work as hard at achieving my goals; I wouldn’t have as much will to power when it could all disappear any moment.
Aside: I’m genuinely surprised at the lack of discussion of lucid dreaming on LW. Lucid dreaming seems like a big gaping loophole in reality, like one of the elements you’d need in a real-life equivalent of the infinite-wish-spell-cycle, yet nobody seems to be seriously experimenting with finding innovative uses for it.
In hindsight, though, it seems like removing the middle question might have been better.
If I really felt a .5 probability in my bones that I was living in a simulation, I don’t think I’d be able to work as hard at achieving my goals; I wouldn’t have as much will to power when it could all disappear any moment.
Would that depend at all on your beliefs about the simulators?
E.g., if you felt a .5 probability that you were in a simulation being run by a real person who shared various important attributes with you, who was attempting to determine the best available strategy for achieving their goals, such that you being successful at achieving yours led directly to them being more successful at achieving theirs, would your motivations change?
I agree that intuitions are challenging here, but I really cannot think of a reason to believe that my actions are less meaningful, or that reality is any more or less permanent, if we’re all being simulated. So maybe there is a tie to solipsism there: I don’t have any problem with simulations that faithfully execute our physics, as opposed to some sort of patchwork Sim in which I’m the only sentient being. If I thought solipsism was .5 probable, then I’d have the problem you describe.
P(Simulation) < 0.01; there is little evidence in favor of it, and it requires both that some other intelligence is doing the simulating and that there can be the kind of fault-tolerant hardware that can (flawlessly) compute the universe. I don’t think a posthuman civilization is capable of running a universe as a simulation. I think Bostrom’s simulation argument is sound.
1 - P(Solipsism) > 0.999; My mind doesn’t contain minds that are consistently smarter than I am and can out-think me on every level.
P(Dreaming) < 0.001; We don’t dream of meticulously filling out tax forms and doing the dishes.
[ Probabilities are not discounted for expecting to come into contact with additional evidence or arguments ]
My mind doesn’t contain minds that are consistently smarter than I am and can out-think me on every level.
Idea: play a game of chess against someone while in a lucid dream.
If you won or lost consistently, it would show that you are better at chess than you are at chess.
If anyone actually does this, I think you should alternate games sitting normally and with your opponent’s pieces on your side of the board (i.e. the board turned 180 degrees), because I’d expect your internal agents to think better when they’re seeing the board as they would in a real chess match.
My favorite moment along those lines was at work years ago, when a developer asked me to validate the strategy she was proposing to solve a particular problem.
She laid out the strategy for me, I worked through some examples, and said “OK… this looks right to me. But you should ask Mark about it, too, because Mark is way more familiar with our tax code than I am, and he might notice something I didn’t… like, for example, the fact that this piece over here will fail under this obscure use case.”
Then I blinked, listened to what I’d just said, and added “Which I, of course, would never notice. So you should go ask Mark about it.”
She, being very polite, simply nodded and smiled and backed away quickly.
On your argument, there is little need to flawlessly compute the universe. If a simulated civilization sees that its laws are inconsistent with its observations, it will change its laws to reflect those observations. Because there is no way to conclusively prove that your laws are correct, it is impossible for a simulated civilization to conclude “our laws are correct, therefore there is a flaw in the universe.” Furthermore, on the probability that a civilization obtains the computing power to run a simulation:
An estimate for the power of a (non-quantum) planet-sized computer is 10^42 operations per second (R. J. Bradbury, “Matrioshka Brains”). It’s hard to pin down how many atoms there are in the universe, but let’s put it at around 10^80; with 128 bits to hold each atom’s position to a precision of one picometer, and another 128 bits for its momentum, that puts the state at around 10^83 bits, and at least that many operations per update step.
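A quick back-of-envelope sketch of those figures in Python; the one-operation-per-bit-per-update assumption is mine, purely for scale:

```python
# Rough sanity check of the figures above. Assumptions (mine, for scale only):
# ~10^80 atoms, 256 bits of state per atom (position + momentum), a
# planet-sized computer doing ~10^42 operations per second, and one
# operation per bit per update step.
atoms = 10**80
bits_per_atom = 128 + 128            # position + momentum
state_bits = atoms * bits_per_atom   # ~2.6 * 10^82 bits of state

ops_per_second = 10**42              # Bradbury's Matrioshka-brain figure
seconds_per_step = state_bits / ops_per_second

print(f"state: ~{state_bits:.1e} bits")
print(f"one full update step: ~{seconds_per_step:.1e} seconds")
```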
So at first it looks impractical to compute a universe, but this computer does not need to perform its operations in a second of its own time: it can compute its values arbitrarily slowly, and the simulated inhabitants cannot notice how slowly their universe is being run. And so, no matter the size of the universe, a computer can simulate it; and because it can run arbitrarily slowly, it can in principle simulate an unbounded number of universes by time-sharing.
So in conclusion: there is a low probability that any given civilization evolves to the point where it can simulate a universe, and the motives for doing so are also dubious. But if one does, there is no upper bound to the number of universes it can simulate. Roughly, the expected number of simulated universes is n·p, where p is the probability that a civilization ever runs such simulations and n is the number it runs if it does; since those n·p simulated universes are weighed against only one unsimulated universe, as n grows without bound we are almost certainly part of a simulated universe.
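A minimal sketch of that counting argument; the n·p / (n·p + 1) form and the symbols are my reading of it, not the commenter’s exact formula:

```python
# Fraction of universes that are simulated, assuming one unsimulated base
# universe, probability p that its civilization ever runs universe
# simulations, and n simulations if it does. The symbols p and n are as used
# above; the point is only that the fraction tends to 1 as n grows.
def simulated_fraction(p: float, n: float) -> float:
    return (n * p) / (n * p + 1)

for n in (1, 100, 10**6, 10**12):
    print(f"n = {n:>13}: fraction simulated = {simulated_fraction(0.001, n):.6f}")
```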
You seem to be assuming that we’d be simulated by a universe which is physically like our own.
Our simulations, at least, are of much simpler scenarios than what we’re living in.
I’m not sure what properties a universe would need to have to make simulating our universe relatively cheap and easy. I’m guessing at smaller and faster fundamental particles.
Egan has some fun thoughts about this with his Autoverse in Permutation City. The inhabitants do eventually get stuck with some contradictions that arise from the initial conditions of their universe.
You’re right, that was one of the erroneous assumptions I made. The problem with that is that there are an infinite number of permutations of possible universes. Even if only a small fraction of them are habitable, and a small fraction of those are conducive to intelligent life, we still have the multiplying-by-infinity issue. I don’t know how valid using infinity in an equation is, though, because when there are two infinities it breaks down. For example, if there are an infinite number of dogs in New York, and 10% of dogs are terriers, then technically the probability of the next dog you see being a terrier is equal to that of any other dog. That again simply doesn’t make sense to me.
P(Dreaming) < 0.001; We don’t dream of meticulously filling out tax forms and doing the dishes.
You don’t? My dreams suck more than I thought.
(I also give P(muflax is dreaming) < 0.001, but because I can’t easily manipulate the mindstream right now. I can’t rewind time, shift my location or abort completely, so I’m probably awake. I can always do these things in dreams.)
(Learning to dream lucidly is largely a matter of giving this a high probability and keeping it in mind, and updating on it when you encounter, for instance, people asking whether you’re dreaming.)
I find this statement curious. Perhaps my memory is simply biased on the matter, but every dream I can recall—or, rather, every dream I recall recalling (and those are few and far between at that)—has always been lucid. Even growing up this was the case. I’ve always had bouts of insomnia as well. I cannot discount the possibility that I’m simply recalling those things that conform to the patterns of my expectations, but I do know for a fact that I never had to “learn” how to dream lucidly. I recall one particularly vivid string of dreams I had as a child—or, rather, one particular recurring facet of those dreams—that all involved me being able to walk two inches off the ground. This is actually one of my earliest memories (I recall little about my early childhood). This “walking off the ground” was something I did because I knew it was a dream.
I have no inclination towards guessing the significance (or magnitude of that significance) of this.
I would like to be working on lucid dreaming research but am unaware of any avenues towards obtaining the very expensive MRI time to do it.
Given your argument, I’m a bit confused by why you assign such a high upper bound to P(Solipsism).
Ah, you’re right. Thanks for the correction.
I edited the post above; I intended P(Solipsism) < 0.001.
And now I think a bit more about it I realize the arguments I gave are probably not “my true objections”. They are mostly appeals to (my) intuition.
P(simulation) ~ .01
P(other minds) ~ .9999
P(dreaming) ~ .0001
Some people are naturally better at lucid dreaming than others. There is a great forum for lucid dreaming at dreamviews.com if you’re interested.
Could it be a selection effect? Maybe you only remember lucid dreams.
As I said, perhaps my memory is simply biased. But that then raises the question: why would it be so uniquely biased?