Until we come up with a better way to deal with the measure problem, I’m personally not taking very seriously any probabilistic argument of the form Bostrom uses to argue for the simulation hypothesis.

https://en.wikipedia.org/wiki/Measure_problem_%28cosmology%29
If people aren’t familiar with the measure problem: it’s a serious open problem in cosmology right now for any model that assumes infinite universes (as most versions of inflation do, for example). In this specific case, it would look something like: “If there are infinite universes, and an infinite number of simulated universes, what are the odds you are in a simulated universe? How do you divide infinity by infinity?” We really don’t know how to answer questions of that form; there have been a number of mathematical attempts to do so, but they come up with wildly different answers, and it’s not clear which, if any, is correct. Depending on exactly how you write the fraction out, the answer can be very different.
Edit: I see that you did mention the measure problem in your post, but in my opinion you’re missing the part of the measure problem that causes the biggest problems for Bostrom’s argument.
Yes, there are several possible solutions to the measure problem in the map.
If the share of simulations is extremely large, they may outweigh my real copies under most solutions to the measure problem. That would mean that no matter how we solve the measure problem, I will most likely find myself in a simulation. But that is just a conjecture.
But without solving the measure problem you can’t even say that.
For example, let’s say that X% of all universes spawn simulated universes, and each universe that does spawns Y simulated universes on average. Sounds simple, right? Just multiply the two and you get how many simulated universes there are per real universe.
Except that without solving the measure problem, you can’t actually say what X% is OR what Y is. It all depends on which way you slice infinity, and you will get very different numbers for both. You could make X 99% or .0000001%, you could make Y 2 or a hundred trillion, just by measuring the infinite collection of universes in different ways, and we don’t know which way of measuring them is right.
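A minimal toy sketch of that order-dependence (just an illustration with made-up labels, not any real cosmological measure): both generators below draw from the same two infinite classes of universes, yet the limiting fraction of simulated ones depends entirely on the order in which you count them.

```python
from itertools import islice

def ordering_a():
    """Alternate strictly: real, simulated, real, simulated, ...
    The running fraction of simulated universes tends to 1/2."""
    while True:
        yield "real"
        yield "simulated"

def ordering_b():
    """Same two infinite sets, but list nine simulated universes for every
    real one. The running fraction of simulated universes tends to 9/10."""
    while True:
        yield "real"
        for _ in range(9):
            yield "simulated"

def fraction_simulated(order, n=1_000_000):
    """Fraction of simulated universes among the first n counted."""
    first_n = list(islice(order, n))
    return first_n.count("simulated") / n

print(fraction_simulated(ordering_a()))  # ~0.5
print(fraction_simulated(ordering_b()))  # ~0.9
```

Both orderings exhaust the same universes; only the counting scheme differs, and that alone is enough to move the “probability” anywhere you like.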
A more concrete problem is that this same kind of anthropic reasoning in an infinite multiverse leads to all kinds of bizarre conclusions, many of which are clearly not true (the Youngness Paradox, for example). That makes me doubt that that kind of reasoning actually works at all.
Maybe that is the reason why Bostrom tried to make the simulation argument about only one civilization, the human one, which either will simulate its ancestors or not.
In this case the argument is almost independent of other civilizations in the universe (though not entirely: there are other human civilizations out there, and since the real ones come earlier in time, they will dominate the landscape in a hyperexponentially expanding universe).
The SA also does not work for a single civilization because a human-like simulation may be created by completely non-human creators.
But finding holes in the SA also does not prove that we are not in a simulation. If we don’t know how to calculate the probabilities, we have to use a vague prior, under which simulation and reality are equally likely.
Oh, I don’t think you can prove we’re not in a simulation; almost by definition it can’t really be disproven.
I’m not 100% convinced that it’s actually possible in our universe to simulate an entire other universe just as complicated as ours (you start running into problems with the minimum energy and space requirements needed to hold that much information, for example), but even if it isn’t, that’s not a proof that we’re not in a simulation, since it’s possible that beings in a more complicated universe than ours are simulating us.
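As a rough back-of-envelope sketch of that energy floor (the inputs below are assumptions chosen for scale, not established figures): one commonly cited bound along these lines is Landauer’s principle, which puts a minimum of kT·ln 2 on the energy dissipated per bit erased, so even the bookkeeping for an atom-level simulation has a physical cost.

```python
import math

# Illustrative assumptions, not established figures:
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 2.7              # assumed operating temperature in kelvin (roughly the CMB)
bits = 1e90          # assumed bit count for an atom-level universe state;
                     # published estimates span many orders of magnitude

energy_per_bit = k_B * T * math.log(2)   # Landauer limit, joules per bit erased
total_energy = bits * energy_per_bit

print(f"{energy_per_bit:.2e} J per bit erased")
print(f"{total_energy:.2e} J to erase the whole state once")
# With these inputs: ~2.6e-23 J per bit and ~2.6e67 J in total, which is within
# a few orders of magnitude of the roughly 1e70 J mass-energy of all ordinary
# matter in the observable universe.
```

The point is only that the budget is not obviously comfortable; with different assumed bit counts the conclusion could go either way.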
I think that the most abundant class of simulations is the one that is much simpler than the reality it tries to simulate. Such simulations render only the surfaces of things that I see, not every atom in the universe.
Also, if we speak only about ordinary space-time in the visible universe, the measure problem is not strong. It only starts to weigh in if we add an uncountable number of my copies in causally non-connected parts of the multiverse.
But the mere fact that we account for the reality of such copies, no matter in what proportion, results in big world immortality, the analog of quantum immortality in a large inflationary universe. This happens because such accounting means that I sample myself from my many copies in different parts of the inflationary universe, and no matter how I might die, some of those copies should survive.
Basically it means that at least one of two conjectures is true:
a) I am in a simulation
b) I am immortal because of big world immortality (and even the argument about diminishing measure does not work here, since measure grows exponentially in an inflationary multiverse). See also Yvain’s post about big world immortality: http://lesswrong.com/lw/bg0/cryonics_without_freezers_resurrection/
But the measure problem may be applied to the simulation problem in another way: my copy in a computer and I may have different measures of existence even if we exist simultaneously in one world.
If you’re going by that logic, though, then even the odds that you are a simulation are utterly dwarfed by the odds that, say, you are just a random event in the quantum foam at the end of the universe that for a fraction of a second comes together with your exact brain and all your memories and experiences, and then is gone. Simulations in any given finite universe would still be finite, but end-of-the-universe quantum fluctuations would happen an infinite number of times, no matter how low the odds are, given an infinite post-heat-death time frame.
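A minimal sketch of that finite-versus-infinite point (the rate and counts below are arbitrary illustrative numbers, not physical estimates): any fixed nonzero fluctuation rate, multiplied by an unbounded time span, eventually exceeds any finite number of simulations.

```python
# Arbitrary illustrative numbers, not physical estimates.
rate_per_year = 1e-100                   # assumed chance of a freak fluctuation per year
simulations_in_finite_universe = 1e30    # assumed (finite) number of simulations ever run

for years_after_heat_death in (1e110, 1e125, 1e140):
    expected_fluctuations = rate_per_year * years_after_heat_death
    print(f"{years_after_heat_death:.0e} years -> "
          f"{expected_fluctuations:.0e} expected fluctuations; "
          f"exceeds simulation count: "
          f"{expected_fluctuations > simulations_in_finite_universe}")

# 1e10, 1e25, 1e40 expected fluctuations: any finite simulation count is
# eventually exceeded, and with unbounded time the expected count diverges.
```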
As I’ve been saying, following that same form of logic inevitably leads to a lot of bizarre conclusions, many much weirder than the simulation hypothesis.