Separate point: I also strongly disagree with the idea that “there’s a strong chance we live in a simulation”. Any such simulation must be either:
fully-quantum, in which case it would require the simulating hardware to be at least as massive as the simulated matter, and probably orders of magnitude more massive. The log-odds of being inside such a simulation must therefore be negative by at least those orders of magnitude.
not-fully-quantum, in which case the quantum branching factor per time interval is many, many, many orders of magnitude less than that of an unsimulated reality. In this case, the log-odds of being inside such a simulation would be very, very, very negative (a toy sketch of this bookkeeping follows the list).
based on some substrate governed by physics whose “computational branching power” is even greater than quantum mechanics, in which case we should anthropically expect to live in that simulator’s world and not this simulated one.
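To make that bookkeeping concrete, here’s a minimal toy sketch. Every number in it is a made-up placeholder rather than an estimate, and the second case leans on the assumption this argument makes throughout: that self-location weight tracks observer weight, and that observer weight tracks quantum branching.

```python
import math

# Toy bookkeeping for the trichotomy above. Every number is a placeholder
# chosen to show the shape of the argument, not an estimate of anything.

def log10_odds_sim_vs_base(sim_weight, base_weight):
    """Log10 odds of finding yourself in the simulation rather than in base
    reality, assuming self-location weight is proportional to observer weight."""
    return math.log10(sim_weight) - math.log10(base_weight)

# Case 1: fully-quantum simulation. The hardware must outweigh what it
# simulates, so at most ~1/overhead of base reality can sit inside such sims.
mass_overhead = 1e6  # hypothetical: hardware is a million times the simulated mass
print(log10_odds_sim_vs_base(1 / mass_overhead, 1.0))  # -6.0

# Case 2: not-fully-quantum simulation. If observer weight tracks the quantum
# branching factor per tick, a single-history classical sim forfeits nearly all of it.
branches_per_tick_base = 1e30  # hypothetical branching factor of unsimulated physics
branches_per_tick_sim = 1.0    # a classical sim follows a single history
ticks = 10
print(log10_odds_sim_vs_base(branches_per_tick_sim ** ticks,
                             branches_per_tick_base ** ticks))  # -300.0
```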
Unlike my separate point about the great filter, I can claim no special expertise on this; though both my parents have PhDs in physics, I couldn’t even write the Dirac equation without looking it up (though, given a week to work through things, I could probably do a passable job reconstructing Shor’s algorithm with nothing more than access to Wikipedia articles on non-quantum FFT). Still, I’m decently confident about this point, too.
As someone who mostly expects to be in a simulation, this is the clearest and most plausible anti-simulation-hypothesis argument I’ve seen, thanks.
How does it hold up against the point that the universe looks large enough to support a large number of even fully-quantum single-world simulations (with a low-resolution approximation of the rest of reality), even if it costs many orders of magnitude more resources to run them?
Perhaps would-be simulators would tend not to value the extra information from fully-quantum simulations enough to build many, or even any, of them? My guess is that many purposes for simulations would want to explore a bunch of the possibility tree, but depending on how costly very large quantum computers are to mature civilizations, maybe they’d just get by with a bunch of low-branching-factor simulations instead?
I think both your question and self-response are pertinent. I have nothing to add to either, save a personal intuition that large-scale fully-quantum simulators are probably highly impractical. (I have no particular opinion about partially-quantum simulators — even possibly using quantum subcomponents larger than today’s computers — but they wouldn’t change the substance of my not-in-a-sim argument.)
hm, that intuition seems plausible.
The other point that comes to mind is that if you have a classical simulation running on a quantum world, maybe that counts as branching for the purposes of where we expect to find ourselves? I’m still somewhat confused about whether exact duplicates ‘count’, but if they do then maybe the branching factor of the underlying reality carries over to sims running further down the stack?
It seems to me that exact duplicate timelines don’t “count”, but duplicates that split and/or rejoin do. YMMV.
I don’t think the branching factor of the simulation matters, since the weight of each individual branch decreases as the number of branches increases. The Born measure is conserved by branching.
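To spell that out for a two-way split (my notation; it’s just the standard bookkeeping): a branching event decomposes the state into orthogonal components, and the weights of those components sum to the weight you started with.

```latex
\[
  |\psi\rangle \;\longrightarrow\; \alpha\,|A\rangle|E_A\rangle \;+\; \beta\,|B\rangle|E_B\rangle ,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1 .
\]
% Each branch on its own has weight |alpha|^2 or |beta|^2, both less than 1,
% but the total Born weight summed over the branches is still 1.
```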
This is certainly a cogent counterargument. Either side of this debate relies on a theory of “measure of consciousness” that is, as far as I can tell, not obviously self-contradictory. We won’t work out the details here.
In other words: this is a point on which I think we can respectfully agree to disagree.
Fair, although I do think your theory might be ultimately self-contradictory ;)
Instead of arguing that here, I’ll link an identical argument I had somewhere else and let you judge whether I was persuasive.
I don’t think the point you were arguing against is the same as the one I’m making here, though I understand why you think so.
My understanding of your model is that, simplifying relativistic issues so that “simultaneous” has a single unambiguous meaning, total measure across quantum branches of a simultaneous time slice is preserved; and your argument is that, otherwise, we’d have to assign equal measure to each unique moment of consciousness, which would lead to ridiculous “Boltzmann brain” scenarios. I’d agree that your argument is convincing that different simultaneous branches have different weight according to the rules of QM, but that does not at all imply that total weight across branches is constant across time.
The argument I made there was that we should consider observer-moments to be ‘real’ according to their Hilbert measure, since that is what we use to predict our own sense-experiences. This does imply that observer-weight will be preserved over time, since unitary evolution preserves the measure (as you say, this also proves it is conserved by splitting into branches, since you can consider that to be projecting onto different subspaces).
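A small numerical illustration of both claims, in case it helps (a toy with an arbitrary dimension and a random unitary, not a model of any particular physics):

```python
import numpy as np

rng = np.random.default_rng(0)

dim = 8
# A normalized state: total Hilbert measure (observer-weight, on my reading) is 1.
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)

# A random unitary, built by QR-decomposing a complex Gaussian matrix.
q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))
psi_later = q @ psi

print(np.linalg.norm(psi_later) ** 2)  # ~1.0: unitary evolution preserves the measure

# "Branching" as projection onto orthogonal subspaces (here: first half / second half).
weight_branch_a = np.linalg.norm(psi_later[:dim // 2]) ** 2
weight_branch_b = np.linalg.norm(psi_later[dim // 2:]) ** 2
print(weight_branch_a + weight_branch_b)  # ~1.0: branch weights sum to the total
```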
Even without unitarity, you shouldn’t expect the total amount of observer-weight to increase exponentially in time, since that would cause the total amount of observer-weight to diverge, giving undefined predictions.
Our sense-experiences are “unitary” (in some sense which I hope we can agree on without defining rigorously), so of course we use unitary measure to predict them. Branching worlds are not unitary in that sense, so carrying over unitarity from the former to the latter seems an entirely arbitrary assumption.
A finite number (say, the number of particles in the known universe), raised to a finite number (say, the number of Planck time intervals before dark energy tears the universe apart), gives a finite number. No need for divergence. (I think both of those are severe overestimates for the actual possible branching, but they are reasonable as handwavy demonstrations of the existence of finite upper bounds)
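Back-of-envelope with two deliberately generous placeholders (both are handwavy stand-ins for the quantities named above, not precise figures):

```python
import math

particles = 1e80         # rough particle count of the observable universe (placeholder)
planck_intervals = 1e61  # very rough count of available Planck times (placeholder)

# particles ** planck_intervals overflows a float, so work with its log10 instead.
log10_bound = planck_intervals * math.log10(particles)
print(log10_bound)  # ~8e62: absurdly large, but finite, so nothing diverges
```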
Ah, by ‘unitary’ I mean a unitary operator, that is, an operator which preserves the Hilbert measure. It’s an axiom of quantum mechanics that time evolution is represented by a unitary operator.
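Pinning the terms down explicitly (this is just the standard definition and the standard axiom, nothing beyond what I said above):

```latex
\[
  U^{\dagger} U = \mathbb{1}
  \quad\Longrightarrow\quad
  \langle U\psi \,|\, U\psi \rangle = \langle \psi \,|\, \psi \rangle ,
  \qquad
  |\psi(t)\rangle = e^{-iHt/\hbar}\,|\psi(0)\rangle , \quad H = H^{\dagger} .
\]
% A unitary operator preserves inner products, hence the total Hilbert measure;
% Schroedinger evolution under a Hermitian Hamiltonian is always of this form.
```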
Fair point about the probable finitude of time (but wouldn’t it be better if our theory could handle the possibility of infinite time as well?)