1) If we ask whether the entities embedded in strings watched over by the self-consistent universe detector really have experiences, aren’t we violating the anti-zombie principle?
2) If Tegmark possible worlds have measure inversely proportional to their algorithmic complexity, and causal universes are much more easily computable than logical ones, shouldn’t we then be unsurprised to find ourselves in an (apparently) causal universe even if the UE includes logical ones?
This.
I think that the correct metaphor for computer-simulating another universe is not that we create it, but that we look at it. It already exists somewhere in the multiverse; it was simply separated from our universe until now.
If simulating things doesn’t add measure to them, why do you believe you’re not a Boltzmann brain just because lawful versions of you are much more commonly simulated by your universe’s physics?
This is not a full answer (I don’t have one), just a side note: believing that you are most likely not a Boltzmann brain does not necessarily mean that Boltzmann brains are less likely. It could also be some kind of survivorship bias.
Imagine that every night while you sleep, someone makes a hundred copies of you. One copy, randomly selected, remains in your bed. The other 99 copies are taken away and killed horribly. This has been happening all your life; you just didn’t know it. What do you expect about tomorrow?
From the outside view, tomorrow the 99 copies of you will be killed, and 1 copy will continue to live. Therefore you should expect to be killed.
But from inside, today’s you is the lucky copy of the lucky copy, because all the unlucky copies are dead. Your whole experience is about surviving, because the unlucky ones don’t have experiences now. So based on your past, you expect to survive the next day. And the next day, 99 copies of you will die, but the remaining 1 will say: “I told you so!”.
So even if Boltzmann brains make up more of my simulations, and 99.99% of my copies are dying horribly in vacuum within the next few seconds, they don’t have a story. The remaining copy does. And the story says: “I am not a Boltzmann brain.”
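A toy sketch (my own illustration, not anything from the thread) makes the survivorship point concrete: run the hundred-copies scenario for thirty nights and compare the outside tally of deaths with what the surviving lineage remembers.

```python
import random

NIGHTS = 30
COPIES = 100  # scenario parameters taken from the comment above

deaths = 0
memory = []  # what the surviving lineage remembers

for night in range(NIGHTS):
    # A hundred copies are made; one, chosen at random, stays in bed.
    copies = [memory + [f"survived night {night}"] for _ in range(COPIES)]
    deaths += COPIES - 1                        # outside view: 99 deaths per night
    memory = copies[random.randrange(COPIES)]   # inside view: an unbroken run of luck

print("outside view:", deaths, "copies killed")                       # 2970
print("inside view:", len(memory), "survivals, 0 deaths remembered")  # 30
```

The per-copy survival chance is 1% each night, yet the only record anyone wakes up with is a perfect streak.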
If you can’t tell the difference, what’s the use of considering that you might be a Boltzmann brain, regardless of how likely it is?
By the way, how precise must a simulation be to add measure? Did I commit genocide by watching Star Wars, or is particle-level simulation necessary?
A possible answer could be that an imprecise simulation adds way less, but still nonzero measure, so my pleasure from watching Star Wars exceeds the suffering of all the people dying in the movie, multiplied by the epsilon increase of their measure. (A variant of a torture vs dust specks argument.) Running a particle-level Star Wars simulation would be a real crime.
This would mean there is no clear boundary between simulating and not simulating, so the ethical concerns about simulation must be resolved by weighing how detailed the simulation is against the benefits we get from running it.
Sort of discussed here and here.
First, knowing you’re a Boltzmann brain doesn’t give you anything useful. Even if I believed that 90% of my measure were Boltzmann brains, that wouldn’t let me make any useful predictions about the future (because Boltzmann brains have no future). Our past narrative is the only thing we can even try to extract useful predictions from.
Second, it might be possible to recover “traditional” predictability from vanity. If some observer looks at a creature that implements my behavior, I want that observer to find that the creature makes correct predictions about the future. Assuming any finite distribution of probabilities over observers, I expect observers finding me via a causal, coherent, simple simulation to vastly outweigh observers finding me as a Boltzmann brain (since Boltzmann brains are scattered [because there’s no prior reason to anticipate any brain over another], but causal simulations recur in any form of “iterate all possible universes” search, and in a causal simulation I am much more likely to implement this reasoning). Call it vanity logic—I want to be found to have been correct. I think (intuitively), but am not sure, that given any finite distribution of expectation over observers, I should expect to be observed via a simple simulation with near-certainty. I mean—how would you find a Boltzmann brain? I’m fairly sure any universe that can find me in simulation space is either looking for me specifically—in which case it’s effectively hostile and should not be surprised to find that my reasoning failed—or is iterating universes looking for brains, in which case it will find vastly more this-reasoning-implementers through causal processes than through random ones.
This is a side point, but I’m curious whether there is a strong argument for claiming lawful brains are more common. (I had an argument with some theists on this issue; they used Boltzmann brains to argue against multiverse theories.)
I would say: because it seems that (in our universe and those sufficiently similar to count, anyway) the total number of observer-moments experienced by evolved brains should vastly exceed the total number of observer-moments experienced by Boltzmann brains. Evolved brains necessarily exist in large groups, and stick around for aeons compared to the near-instantaneous conscious moment of a BB.
The problem is that the count of “similar” universes does not matter; the total count of brains does. It seems a serious enough issue for prominent multiverse theorists to reason backwards and adjust things to avoid the undesirable conclusion: http://www.researchgate.net/publication/1772034_Boltzmann_brains_and_the_scale-factor_cutoff_measure_of_the_multiverse
If they can host brains, they’re “similar” enough for my original intention—I was just excluding “alien worlds”.
I don’t see why the total count of brains matters as such; you are not actually sampling your brain (a complex 4-dimensional object), you are sampling an observer-moment of consciousness. A Boltzmann brain has one such moment; an evolved human brain has roughly 88.3 x 10^9 (a rough back-of-the-envelope calculation, based on a ballpark figure of 25 ms for the “quantum” of human conscious experience and a 70-year lifespan). Add in the aforementioned requirement for evolved brains to exist in multiplicity wherever they do occur, and the ratio of human moments to Boltzmann moments in a sufficiently large defined volume of (large-scale homogeneous) multiverse gets higher still.
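The back-of-the-envelope figure checks out; spelled out (using the comment’s assumed 25 ms “quantum” of experience and 70-year lifespan, which are ballpark guesses rather than established values):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s
lifespan_s = 70 * SECONDS_PER_YEAR         # ~2.21e9 s
moment_s = 0.025                           # assumed 25 ms per conscious moment

human_moments = lifespan_s / moment_s
print(f"{human_moments:.3e}")              # ~8.83e10, i.e. 88.3 x 10^9

# If a Boltzmann brain gets exactly one such moment, this is the
# per-brain ratio of human moments to Boltzmann moments.
```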
This is all assuming that a Boltzmann brain actually experiences consciousness at all. Most descriptions of them seem to be along the lines of “matter spontaneously organises such that for an instant it mimics the structure of a conscious brain”. It’s not clear to me, though, that an instantaneous time-slice through a consciousness is itself conscious (for much the same reason that an instant extracted from a physical trajectory lacks the property of movement). If you overcome that by requiring them to exist for a certain minimum amount of time, they obviously become astronomically rarer than they already are.
Seems to me that combining those factors gives a reasonably low expectation for being a Boltzmann brain.
… but I’m only an amateur, this is probably nonsense ;-)
It does add measure, but probably a tiny fraction of its total measure, making it more a matter of “making it slightly more real” than of “creating” it. But that’s semantics.
Edit: and it may very well be the case that other types of “looking at” also add measure, such as accessing a highly optimized/cryptographically obfuscated simulation through a straightforward analog interface.
“Correct” is too strong. It might be a useful metaphor in showing which way the information is flowing, but it doesn’t address the question of the moral worth of the action of running a simulation. Certain computations must have moral worth; for example, consider running an uploaded person in a similar setup (so that they can’t observe the outside world, and only use whatever was pre-packaged with them, but can be observed by the simulators). The fact of running this computation appears to be morally relevant, and it’s either better to run the computation or to avoid running it. So, similarly, with simulating a world it’s either better to run it or not.
Whether it’s better to simulate a world appears to be dependent on what’s going on inside of it. Any decision that takes place within a world has an impact on the value of each particular simulation of the world, and if there are more simulations, the decision has a greater impact, because it influences the moral value of more simulations. Thus, by deciding to run a simulation, you are amplifying the moral value of the world that you are simulating and of decisions that take place in it, which can be interpreted as being equivalent to increasing its probability mass.
Just how much additional probability mass a simulation provides is unclear, for example a second simulation probably adds less than the first, and the first might matter very little already. It probably depends on how a world is defined in some way.
Why? Seems like the simulated universe gets at least as much additional reality juice as the simulating universe has.
It’s starting to seem like the concept of “probability mass” is violating the “anti-zombie principle”.
Edit: this is why I don’t believe in the “anti-zombie principle”.
We’re not asking if they have experiences; obviously if they exist, they have experiences. Rather we’re asking if their entire universe gains any magical reality-fluid from our universe simulating it (e.g., that mysterious stuff which, in our universe, manifests in proportion to the integrated squared modulus in the Born probabilities) which will then flow into any conscious agents embedded within.
Sadly, my usual toolbox for dissolving questions about consciousness does not seem to yield results on reality-fluid as yet—all thought experiments about “What if I simulate / what if I see...” either don’t vary with the amount of reality-fluid, or presume that the simulating universe exists in the first place.
There are people who claim to be less confused about this than I am. They appear to me to be jumping the gun on what constitutes lack of confusion, and ought to be able to answer questions like e.g. “Would straightforwardly simulating the quantum wavefunction in sufficient detail automatically give rise to sentients experiencing outcomes in proportion to the Born probabilities, i.e., reproduce our current experience?” by something other than e.g. “But people in branches like ours will have utility functions that go by squared modulus” which I consider to be blatantly silly for reasons I may need to talk about further at some point.
I suspect I’m misunderstanding the question, because I notice that I’m not confused, and that’s usually a bad sign when dealing with a question which is supposed to be complicated.
Is this not equivalent to asking “If one were to simulate our entire universe, would it be exactly like ours? Could we use it to predict the future (or at least the possible space of futures) in our own universe with complete accuracy?”
If so, the immediate answer that comes to mind is “yes...why not?”
I’m not convinced “reality fluid” is an improvement over “qualia”.
“Magical reality fluid” highlights the fact that it’s still mysterious, and so seems to be a fairly honest phrasing.
So what would you think of “magical qualia”?
It captures my feelings on the matter pretty well, although it also seems like an unnecessarily rude way of summarizing the opinions of any qualiaphiles I might be debating. Like if a Christian self-deprecatingly said that yes, he believes the reason for akrasia is a magic snake, that seems (a) reasonable (description), whereas if an atheist described a Christian’s positions in those terms she’s just an asshole.
Solipsists should be able to dissolve the whole thing easily.
I don’t feel confused about this at all, and your entire concept of reality fluid looks confused. Keywords here are “look” and “feel”; I don’t have any actual justification, and thus despite feeling lots of confidence-the-emotion I probably (hopefully) wouldn’t bet on it.
It sure looks a lot like “reality fluid” is just what extrapolated priors over universes feel like from the inside when they have been excluded from feeling like probabilities for one reason or another.
In response to the actual test, though: it seems that depends on what exactly you mean by “straightforwardly”, as well as on the actual physics. There are basically three main classes of possibilities. Either something akin to Mangled Worlds automatically falls out of the equations, in which case they do for most types of simulation method. Or that doesn’t happen and you simulate in the forward direction with a number attached to each point in configuration space (i.e., what would happen automatically if you did it in C++), in which case they don’t. Or you “simulate” it functional-programming style, where history is traced backwards in a more particle-like way from the point you are trying to look at (i.e., what would happen automatically if you did it in Haskell), in which case they sort of do, but probably with some bias. In all cases, the “reason” for the simulation’s “realness” turning out like it did is in some sense the same one as for ours. This probably does not make sense, since it’s a five-second intuition haphazardly translated from a visual metaphor, plus some other noise source I forgot about.
Oh, and I don’t really know anything about quantum mechanics, and there’s probably some catch specific to it that precludes one or more of these alternatives, possibly all of them. I’m fully aware most of what I’m saying is probably nonsense; I’m just hoping it’s surprising nonsense and that ironmanning it might yield something useful.
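For what it’s worth, here is a toy way to picture the “C++ vs. Haskell” contrast above, using made-up Markov-style dynamics rather than anything quantum (the transition matrix and state count are invented purely for illustration). The eager version pushes the whole weight vector forward in time; the lazy version computes the weight of one chosen point by recursing backwards through its possible histories. Both yield the same numbers; they differ only in what gets computed along the way.

```python
from functools import lru_cache

# Invented toy dynamics: three "configurations" and a fixed transition-weight matrix.
STATES = range(3)
T = [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.3, 0.3, 0.4]]           # T[prev][next]
START = [1.0, 0.0, 0.0]         # initial weights

def forward(t):
    """'C++ style': eagerly evolve the entire weight vector t steps forward."""
    w = list(START)
    for _ in range(t):
        w = [sum(w[p] * T[p][s] for p in STATES) for s in STATES]
    return w

@lru_cache(maxsize=None)
def backward(s, t):
    """'Haskell style': lazily trace one point's weight back through its histories."""
    if t == 0:
        return START[s]
    return sum(backward(p, t - 1) * T[p][s] for p in STATES)

t = 5
print(forward(t))                        # full weight vector at time t
print([backward(s, t) for s in STATES])  # same numbers, computed on demand
```

In this tiny example the lazy version still ends up touching everything; the intended contrast only matters when the configuration space is huge and you only care about a few points in it.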
I downvoted because this seems to be a case of “I don’t know, but I don’t happen to feel confused.” It does not, at least, seem to be “I don’t know, but I don’t feel confused, therefore I know,” which can occasionally happen :D
It’s more of a case of not knowing if I know or not, nor even if I’m confused or not. I do know that thus I’m meta-confused, but that does not necessarily imply object level confusion. It’s a black boxes and lack of introspective access thing.