If we ask whether the entities embedded in strings watched over by the self-consistent universe detector really have experiences, aren’t we violating the anti-zombie principle?
This.
I think that a correct metaphor for computer-simulating another universe is not that we create it, but that we look at it. It already exists somewhere in the multiverse, but previously it was separated from our universe.
If simulating things doesn’t add measure to them, why do you believe you’re not a Boltzmann brain just because lawful versions of you are much more commonly simulated by your universe’s physics?
This is not a full answer (I don’t have one), just a side note: believing that you are most likely not a Boltzmann brain does not necessarily mean that Boltzmann brains are less likely. It could also be some kind of survivorship bias.
Imagine that every night while you sleep, someone makes a hundred copies of you. One copy, randomly selected, remains in your bed. The other 99 copies are taken away and killed horribly. This has been happening all your life; you just didn’t know it. What do you expect about tomorrow?
From the outside view, tomorrow 99 copies of you will be killed and 1 copy will continue to live. Therefore you should expect to be killed.
But from the inside, today’s you is the lucky copy of a lucky copy, because all the unlucky copies are dead. Your whole experience is about surviving, because the unlucky ones don’t have experiences now. So based on your past, you expect to survive the next day. And the next day, 99 copies of you will die, but the remaining one will say: “I told you so!”
So even if Boltzmann brains are simulated more often, and 99.99% of my copies are dying horribly in a vacuum within the next few seconds, they don’t have a story. The remaining copy does. And the story says: “I am not a Boltzmann brain.”
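To make the outside-view/inside-view split concrete, here is a toy sketch (my own illustration; the only thing taken from the thought experiment above is the 100-copies-per-night rule, and the function names are hypothetical):

    import random

    def one_night(living, copies_per_night=100):
        """Copy each living person 100 times; exactly one copy of each survives."""
        survivors, deaths = [], 0
        for history in living:
            lucky = random.randrange(copies_per_night)
            for i in range(copies_per_night):
                if i == lucky:
                    survivors.append(history + ["survived"])
                else:
                    deaths += 1          # this copy never gets to remember anything
        return survivors, deaths

    living, total_dead = [[]], 0         # one person, no remembered nights yet
    for _ in range(30):                  # thirty nights of copying
        living, dead_tonight = one_night(living)
        total_dead += dead_tonight

    # Outside view: 99 deaths for every survival.
    print(total_dead, "copies died;", len(living), "lineage(s) remain")
    # Inside view: every surviving lineage remembers nothing but survival,
    # so extrapolating from its own past it predicts survival again tomorrow.
    print(all(night == "survived" for history in living for night in history))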
If you can’t tell the difference, what’s the use of considering that you might be a Boltzmann brain, regardless of how likely it is?
By the way, how precise must a simulation be to add measure? Did I commit genocide by watching Star Wars, or is a particle-level simulation necessary?
A possible answer could be that an imprecise simulation adds far less, but still nonzero, measure, so my pleasure from watching Star Wars exceeds the suffering of all the people dying in the movie multiplied by the epsilon increase of their measure. (A variant of the torture vs. dust specks argument.) Running a particle-level Star Wars simulation would be a real crime.
This would mean there is no clear boundary between simulating and not simulating, so the ethical concerns about simulation must be resolved by weighing how detailed the simulation is against what benefits we get from running it.
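To spell out the weighing (the symbols are just my shorthand, not anything established in this thread): write B for the benefit of running the simulation, S for the total suffering it depicts, and ε(d) for the measure added at detail level d. The rule sketched above then amounts to running the simulation only when

\[ B > \varepsilon(d)\,S \]

with ε(d) growing as the detail d increases: vanishingly small for watching a movie, but large enough at particle-level fidelity that the inequality plausibly fails for Star Wars.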
Sort of discussed here and here.
First, knowing you’re a Boltzmann brain doesn’t give you anything useful. Even if I believed that 90% of my measure were Boltzmann brains, that wouldn’t let me make any useful predictions about the future (because Boltzmann brains have no future). Our past narrative is the only thing we can even try to extract useful predictions from.
Second, it might be possible to recover “traditional” predictability from vanity. If some observer looks at a creature that implements my behavior, I want that observer to find that the creature makes correct predictions about the future. Assuming any finite distribution of probabilities over observers, I expect observers finding me via a causal, coherent, simple simulation to vastly outweigh observers finding me as a Boltzmann brain (Boltzmann brains are scattered, since there is no prior reason to anticipate any one brain over another, whereas causal simulations recur in any form of “iterate all possible universes” search, and in a causal simulation I am much more likely to implement this reasoning). Call it vanity logic: I want to be found to have been correct. I think (intuitively), but am not sure, that given any finite distribution of expectation over observers, I should expect to be observed via a simple simulation with near-certainty. I mean, how would you even find a Boltzmann brain? I’m fairly sure any universe that can find me in simulation space is either looking for me specifically, in which case it is effectively hostile and should not be surprised to find that my reasoning failed, or is iterating over universes looking for brains, in which case it will find vastly more implementers of this reasoning through causal processes than through random ones.
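One way to write down that intuition (the notation is mine, not the comment’s): let P(o) be the weight given to a possible observer o, and compare the two ways o could end up looking at a creature that implements my behavior:

\[ \sum_{o} P(o)\,P(o\ \text{finds me inside a lawful, causal simulation}) \;\gg\; \sum_{o} P(o)\,P(o\ \text{finds me as a Boltzmann brain}) \]

If that holds for any finite distribution over observers, then conditional on being observed at all, almost all of the weight sits on versions of me whose lawful past makes their predictions come out right.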
This is a side point, but I’m curious whether there is a strong argument for claiming that lawful brains are more common. (I had an argument with some theists on this issue; they used Boltzmann brains to argue against multiverse theories.)
I would say: because it seems that (in our universe and those sufficiently similar to count, anyway) the total number of observer-moments experienced by evolved brains should vastly exceed the total number of observer-moments experienced by Boltzmann brains. Evolved brains necessarily exist in large groups, and stick around for absolutely aeons as compared to the near-instantaneous conscious moment of a BB.
The problem is that the count of “similar” universes does not matter; the total count of brains does. It seems a serious enough issue that prominent multiverse theorists reason backwards and adjust things to avoid the undesirable conclusion: http://www.researchgate.net/publication/1772034_Boltzmann_brains_and_the_scale-factor_cutoff_measure_of_the_multiverse
If they can host brains, they’re “similar” enough for my original intention—I was just excluding “alien worlds”.
I don’t see why the total count of brains matters as such; you are not actually sampling your brain (a complex 4-dimensional object), you are sampling an observer-moment of consciousness. A Boltzmann brain has one such moment; an evolved human brain has (by a rough back-of-the-envelope calculation, based on a ballpark figure of 25 ms for the “quantum” of human conscious experience and a 70-year lifespan) about 88.3 × 10^9. Add in the aforementioned requirement for evolved brains to exist in multiplicity wherever they do occur, and the ratio of human moments to Boltzmann moments in a sufficiently large defined volume of (large-scale homogeneous) multiverse gets higher still.
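A quick check of that back-of-the-envelope figure, taking the comment’s own ballpark assumptions (a 25 ms conscious “quantum” and a 70-year lifespan) at face value:

    # Both inputs are the parent comment's ballpark assumptions, not established facts.
    seconds_per_year = 365.25 * 24 * 3600      # ~3.16e7 s
    lifespan_seconds = 70 * seconds_per_year   # ~2.21e9 s
    moment_seconds = 0.025                     # 25 ms per observer-moment
    moments_per_lifetime = lifespan_seconds / moment_seconds
    print(f"{moments_per_lifetime:.3e}")       # ~8.836e+10, i.e. roughly 88.3 x 10^9

So a single evolved brain contributes on the order of 10^11 observer-moments against a Boltzmann brain’s single moment, before even counting the requirement that evolved brains come in large populations.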
This is all assuming that a Boltzmann brain actually experiences consciousness at all. Most descriptions of them seem to be along the lines of “matter spontaneously organises such that for an instant it mimics the structure of a conscious brain”. It’s not clear to me, though, that an instantaneous time-slice through a consciousness is itself conscious (for much the same reason that an instant extracted from a physical trajectory lacks the property of movement). If you overcome that by requiring them to exist for a certain minimum amount of time, they obviously become astronomically rarer than they already are.
Seems to me that combining those factors gives a reasonably low expectation for being a Boltzmann brain.
… but I’m only an amateur, this is probably nonsense ;-)
It does add measure, but probably a tiny fraction of its total measure, making it more a case of “making it slightly more real” than of “creating” it. But that’s semantics.
Edit: and it may very well be the case that other types of “looking at” also add measure, such as accessing a highly optimized/cryptographically obfuscated simulation through a straightforward analog interface.
I think that a correct metaphor for computer-simulating another universe is not that we create it, but that we look at it.
“Correct” is too strong. It might be a useful metaphor for showing which way the information flows, but it doesn’t address the question of the moral worth of running a simulation. Certain computations must have moral worth; for example, consider running an uploaded person in a similar setup (so that they can’t observe the outside world and can only use whatever was pre-packaged with them, but can be observed by the simulators). The fact of running this computation appears to be morally relevant, and it’s either better to run the computation or to avoid running it. So similarly with simulating a world: it’s either better to run it or not.
Whether it’s better to simulate a world appears to depend on what’s going on inside it. Any decision that takes place within a world has an impact on the value of each particular simulation of that world, and if there are more simulations, the decision has a greater impact, because it influences the moral value of more simulations. Thus, by deciding to run a simulation, you are amplifying the moral value of the world you are simulating and of the decisions that take place in it, which can be interpreted as being equivalent to increasing its probability mass.
Just how much additional probability mass a simulation provides is unclear; for example, a second simulation probably adds less than the first, and the first might matter very little already. It probably depends on how a world is defined in some way.
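To put that guess in symbols (my notation, and the diminishing-returns shape is only the assumption suggested above): if the world W already carries probability mass μ₀ and the k-th simulation adds ε_k, then

\[ \mu(W) = \mu_0 + \sum_{k \ge 1} \varepsilon_k, \qquad \varepsilon_{k+1} < \varepsilon_k, \quad \varepsilon_1 \ll \mu_0 \]

so each further simulation amplifies the world’s weight, but by less and less, and never by much relative to what the world had before anyone simulated it.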
Why? Seems like the simulated universe gets at least as much additional reality juice as the simulating universe has.
It’s starting to seem like the concept of “probability mass” is violating the “anti-zombie principle”.
Edit: this is why I don’t believe in the “anti-zombie principle”.