Are quantum phenomena anthropic evidence for BQP=BPP? Is existing evidence against many-worlds?

Suppose I live inside a simulation run by a computer over which I have some control.
Scenario 1: I make the computer run the following:
pause simulation
if is_even(calculate_billionth_digit_of_pi()):
    resume simulation
Suppose, after running this program, that I observe that I still exist. This is some anthropic evidence for the billionth digit of pi being even.
Thus, one can get anthropic evidence about logical facts.
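(To spell out the update, here is a minimal sketch in python; the 50/50 prior on the digit's parity, and the assumption that this simulation is the only place I could find myself, are simplifications I'm adding, which is also why the toy version comes out as certainty rather than just "some evidence".)

prior_even = 0.5            # assumed prior on the billionth digit of pi being even
p_exist_given_even = 1.0    # the simulation gets resumed
p_exist_given_odd = 0.0     # the simulation never gets resumed

# bayes update on the observation "I still exist"
posterior_even = (p_exist_given_even * prior_even) / (
    p_exist_given_even * prior_even + p_exist_given_odd * (1 - prior_even)
)
print(posterior_even)       # 1.0 under these toy assumptions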
Scenario 2: I make the computer run the following:
pause simulation
if is_even(calculate_billionth_digit_of_pi()):
    resume simulation
else:
    resume simulation, but run it a trillion times slower
If you’re running on the non-time-penalized solomonoff prior, then that’s no evidence at all — observing existing is evidence that you’re being run, not that you’re being run fast. But if you do that, a bunch of things break, including anthropic probabilities and expected utility calculations. What you want is a time-penalized (probably quadratically) prior, in which later compute-steps have less realityfluid than earlier ones — and thus, observing existing is evidence for being computed early — and thus, observing existing is some evidence that the billionth digit of pi is even.
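(Here's a rough numerical sketch of that update; the choice of a quadratic penalty, and treating the observer's realityfluid as the realityfluid of the classical compute-step that computes them, are the assumptions doing the work.)

def realityfluid(t):
    return 1.0 / t**2        # quadratic time penalty, unnormalized

t = 1e9                      # classical compute-step at which "now" gets computed if the digit is even
slowdown = 1e12              # extra slowdown factor if the digit is odd

weight_if_even = realityfluid(t)
weight_if_odd = realityfluid(t * slowdown)

# likelihood ratio in favor of "even", given that I observe existing
print(weight_if_even / weight_if_odd)    # = slowdown**2 = 1e24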
Scenario 3: I make the computer run the following:
pause simulation
quantum_algorithm <- the classical-compute algorithm which simulates quantum algorithms the fastest
infinite loop:
    use quantum_algorithm to compute the result of some complicated quantum phenomena
    compute simulation forwards by 1 step
Observing existing after running this program is evidence that BQP=BPP — that is, classical computers can efficiently run quantum algorithms: if BQP≠BPP, then my simulation should become way slower, and existing is evidence for being computed early and fast (see scenario 2).
Except, living in a world which contains the outcome of cohering quantum phenomena (quantum computers, double-slit experiments, etc.) is very similar to the scenario above! If your prior for the universe is a distribution over programs, penalized for how long they take to run on a classical computer, then observing that the outcome of quantum phenomena is being computed is evidence that they can be computed efficiently.
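(Same sketch as in scenario 2, with the slowdown now coming from the cost of classically simulating the quantum steps; the specific cost model below, polynomial per step if BQP=BPP versus brute-force state-vector simulation if not, is just an illustrative assumption.)

def realityfluid(t):
    return 1.0 / t**2

n_qubits = 50                     # size of the quantum phenomena being simulated (assumed)
steps = 10**6                     # simulation steps computed so far (assumed)

cost_if_bqp_eq_bpp = n_qubits**3  # some polynomial classical cost per quantum step
cost_if_bqp_ne_bpp = 2**n_qubits  # brute-force state-vector cost per quantum step

# likelihood ratio in favor of BQP=BPP, given that I observe existing now
print(realityfluid(steps * cost_if_bqp_eq_bpp) / realityfluid(steps * cost_if_bqp_ne_bpp))
# = (cost_if_bqp_ne_bpp / cost_if_bqp_eq_bpp)**2, astronomically large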
Scenario 4: I make the computer run the following:
in the simulation, give the human a device which generates a sequence of random bits
pause simulation
list_of_simulations <- [current simulation state]
quantum_algorithm <- the classical-compute algorithm which simulates quantum algorithms the fastest
infinite loop:
    list_of_new_simulations <- []
    for simulation in list_of_simulations:
        list_of_new_simulations +=
            [ simulation advanced by one step where the device generated bit 0,
              simulation advanced by one step where the device generated bit 1 ]
    list_of_simulations <- list_of_new_simulations
This is similar to what it’s like to be in a many-worlds universe where there’s constant forking.
Yes, in this scenario, there is no “mutual destruction”, the way there is in quantum mechanics. But with decohering everett branches, you can totally build exponentially many non-mutually-destructing timelines too! For example, you can choose to make important life decisions based on the output of the RNG, and end up with exponentially many different lives, each with some (exponentially little) quantum amplitude, without any need for those to be compressible together, or to be able to mutually-destruct. That’s what decohering means! “Recohering” quantum phenomena interfere destructively such that you can compute the output, but decohering phenomena just branch.
The number of different simulations that need to be computed increases exponentially with simulation time.
Observing existing after running this program is very strange. Yes, there are exponentially many me’s, but all of the me’s are being run exponentially slowly; none of them should expect to observe existing. I should not be any of them.
This is what I mean by “existing is evidence against many-worlds” — there’s gotta be something like an agent (or physics, through some real RNG or through computing whichever variables have the most impact) picking an only-polynomially-large set of decohered non-compressible-together timelines to explain continuing existing.
Some friends tell me “but tammy, sure at step N each you has only 1/2^N quantum amplitude, but at step N there’s 2^N such you’s, so you still have 1 unit of realityfluid” — but my response is “I mean, I guess, sure, but regardless of that, step N occurs 2^N units of classical-compute-time in the future! That’s the issue!”.
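(Back-of-the-envelope version of that disagreement, with the same quadratic penalty as before; treating every branch's step-N state as being computed around classical time 2^N is my simplification of the loop in scenario 4.)

def realityfluid(t):
    return 1.0 / t**2

N = 40                                      # simulated steps since the forking started

branches = 2**N                             # one "me" per branch, each with 1/2^N amplitude
per_branch = realityfluid(2**N)             # but each computed ~2^N classical steps in
total_if_branching = branches * per_branch  # = 2**-N, shrinks exponentially with N

total_if_single_history = realityfluid(N)   # ~1/N**2, shrinks only polynomially

print(total_if_branching, total_if_single_history)

So even after summing over all 2^N me's, step N of the branching version ends up with exponentially less realityfluid than step N of a single-history version.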
Some notes:
I heard about pilot wave theory recently, and sure, if that’s one way to get a single history, why not. I hear that it “doesn’t have locality”, which, like, okay I guess, that’s plausibly worse program-complexity-wise, but it’s exponentially better after accounting for the time penalty.
What if “the world is just Inherently Quantum”? Well, my main answer here is, what the hell does that mean? It’s very easy for me to imagine existing inside of a classical computation (e.g. conway’s game of life); I have no idea what it’d mean for me to exist in “one of the exponentially many non-compressible-together decohered exponentially-small-amplitude quantum states that are all being computed forwards”. Quadratically-decaying-realityfluid classical computation makes sense, dammit.
What if it’s still true — what if I am observing existing with exponentially little (as a function of the age of the universe) realityfluid? What if the set of real stuff is just that big?
Well, I guess that’s vaguely plausible (even though, ugh, that shouldn’t be how being real works, I think), but then the tegmark 4 multiverse has to contain no hypotheses in which observers in my reference class occupy more than exponentially little realityfluid.
Like, if there’s a conway’s-game-of-life simulation out there in tegmark 4, whose entire realityfluid-per-timestep is equivalent to my realityfluid-per-timestep, then they can just bruteforce-generate all human-brain-states and run into mine by chance, and I should have about as much probability of being one of those random generations as I’d have of being in this universe — both have exponentially little of their universe’s realityfluid! The conway’s-game-of-life bruteforced-me has exponentially little realityfluid because she’s getting generated exponentially late, and quantum-universe me has exponentially little realityfluid because I occupy exponentially little of the quantum amplitude, at every time-step.
See why that’s weird? As a general observer, I should exponentially favor observing being someone who lives in a world where I don’t have exponentially little realityfluid, such as “person who lives only-polynomially-late into a conway’s-game-of-life, but happened to get randomly very confused about thinking that they might inhabit a quantum world”.
Existing inside of a many-worlds quantum universe feels like alien pranksters-at-orthogonal-angles running the kind of simulation where the observers inside of it end up very anthropically confused once they think about anthropics hard enough. (This is not my belief.)
If you’re running on the non-time-penalized solomonoff prior [...] a bunch of things break including anthropic probabilities and expected utility calculations
This isn’t true; you can get perfectly fine probabilities and expected utilities from ordinary Solomonoff induction (barring computability issues, ofc). The key here is that SI is defined in terms of a prefix-free UTM, whose set of valid programs forms a prefix-free code, which automatically gives you probabilities adding up to at most 1, etc. This issue is often glossed over in popular accounts.
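(A toy version of the prefix-free point, for concreteness: take the codewords {0, 10, 110, 111}; none is a prefix of another, and 2^-1 + 2^-2 + 2^-3 + 2^-3 = 1. Kraft's inequality says any prefix-free code has sum of 2^-length <= 1, which is why weighting programs by 2^-length already hands you something normalizable.)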
If you use the UTM for cartesian-framed inputs/outputs, sure; but if you’re running the programs as entire worlds, then you still have the issue of “where are you in time”.
Say there’s an infinitely growing conway’s-game-of-life program, or some universal program, which contains a copy of me at infinitely many locations. How do I weigh which ones are me?
It doesn’t matter that the UTM has a fixed amount of weight; there are still infinitely many locations within it.
If you want to pick out locations within some particular computation, you can just use the universal prior again, applied to indices to parts of the computation.
What you propose, ≈“weigh indices by kolmogorov complexity”, is indeed a way to go about picking indices, but “weigh indices by one over their square” feels a lot more natural to me; a lot simpler than invoking the universal prior twice.
I think using the universal prior again is more natural. It’s simpler to use the same complexity metric for everything; it’s more consistent with Solomonoff induction, in that the weight assigned by Solomonoff induction to a given (world, claw) pair would be approximately 2^-(K(world) + K(claw)), i.e. determined by the sum of their Kolmogorov complexities; and the universal prior dominates the inverse-square measure but the converse doesn’t hold.
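(Sketch of the domination claim, using the standard bound on prefix complexity: any index n can be fed to the machine with a self-delimiting encoding of length at most 2*log2(n) + O(1), so K(n) <= 2*log2(n) + O(1), and hence 2^-K(n) >= c/n^2 for some constant c; the universal prior never gives an index much less weight than the inverse-square measure does. The converse fails: for a simple but huge index like n = 2^1000, 1/n^2 is around 2^-2000, while 2^-K(n) stays non-negligible because K(n) is small for such a simple n.)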
It doesn’t matter? Like, if your locations are identical (say, simulations of the entire observable universe, and you never find any difference no matter “where” you are), your weight is exactly the weight of the program. If you expect differences, you can select some kind of simplicity prior to weight these differences, because there is basically no difference between doing that and “list all programs for this UTM, run them in parallel”.

There could be a difference but only after a certain point in time, which you’re trying to predict / plan for.
Interesting idea. I don’t think using a classical Turing machine in this way would be the right prior for the multiverse. Classical Turing machines are a way for ape brains to think about computation using the circuitry we have available (“imagine other apes following these social conventions about marking long tapes of paper”). They aren’t the cosmically simplest form of computation. For example, the (microscopic, non-coarse-grained) laws of physics are deeply time-reversible, whereas Turing machines are not.

I suspect this computation-speed prior would lead to Boltzmann-brain problems. Your brain at this moment might be computed at high fidelity, but everything else in the universe would be approximated for the computational speed-up.