The pseudorandom lie under the Lava lamp
Our observations are compatible with a world that is generated by a Turing machine with just a couple thousand bits.
That means that all the seemingly random bits we see in Geiger counters, lava lamps, gases, and the like are actually only pseudorandom.
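To make the claim concrete, here is a minimal sketch (my own toy, not the post’s actual construction) of a deterministic program whose whole “physics” fits in a seed of a couple thousand bits, yet which emits bits that no observer inside it could distinguish from true randomness without knowing the seed. SHA-256 is just a stand-in for whatever the transition rule would be:

```python
import hashlib

# Stand-in for the couple-thousand-bit seed that specifies the whole world.
SEED = 0xC0FFEE_15_0DD_F00D

def universe_bits(seed: int, n: int) -> str:
    """Deterministically expand a short seed into n pseudorandom bits
    (the 'Geiger counter clicks')."""
    state = seed.to_bytes(32, "big")
    bits = ""
    while len(bits) < n:
        state = hashlib.sha256(state).digest()  # deterministic state transition
        bits += "".join(f"{b:08b}" for b in state)
    return bits[:n]

# The same seed always yields the same "random-looking" history.
print(universe_bits(SEED, 64))
```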
IDK why you think such a TM is simpler than one that computes, say, QM. But either way, I don’t know why we should favor (in terms of ascribing reality-juice) worlds that are simple TMs but not worlds that are simple physics equations. You can complain that you don’t know how to execute physics equations, but I can also complain that I don’t know how to execute state transitions. (Presumably there’s still something central and real about some things being more executable than others; I’m just saying it’s not clear what that is or how it relates to reality-juice and TMs vs. physics equations.)
I’m confused: in what sense don’t we know how to do this? Lattice quantum field theory simulations work fine.
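For a concrete (if toy) sense of “executing physics equations,” here is a sketch that discretizes a free scalar field on a 1D lattice and steps it forward. This is a classical stand-in, not a real lattice-QFT code, and all the numbers are arbitrary:

```python
import numpy as np

N, dx, dt = 200, 0.1, 0.05                       # lattice and step sizes (dt < dx for stability)
x = np.arange(N) * dx
phi = np.exp(-((x - 10.0) ** 2))                 # initial field: a Gaussian bump
pi = np.zeros(N)                                 # conjugate momentum

for _ in range(1000):
    # Discrete Laplacian with periodic boundaries: the "state transition".
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    pi += dt * lap                               # symplectic-Euler update: d(pi)/dt = laplacian(phi)
    phi += dt * pi                               # d(phi)/dt = pi

print(phi.max())                                 # the bump has split and propagated
```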
For example, we couldn’t execute continuum models.
Of course, just because we can’t execute continuum models, or models of physics that require actually infinite computation, not just unlimited amounts of compute, doesn’t mean the universe can’t execute such a program.
Ok, another example is that physical laws are generally descriptive, not fully specified worlds. You can “simulate” the ideal gas law or Maxwell’s equations but you’re doing extra work beyond just what the equations say (like, you have to run “import diffeq” first, and pick a space topology, and pick EM fields) and it’s not a full world.
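A sketch of that point (my toy, with arbitrary choices): the law itself is one line, and everything else is specification the law doesn’t provide, the analogue of the “import diffeq”, the topology, and the fields:

```python
import numpy as np

# The physical law itself is one line (ideal gas, units with k_B = 1):
pressure = lambda N, V, T: N * T / V

# None of the following is in the law; we have to choose it all by hand.
rng = np.random.default_rng(0)                   # choice: a randomness source
N, L, T = 1000, 10.0, 1.5                        # choice: particle count, box size, temperature
pos = rng.uniform(0.0, L, size=(N, 3))           # choice: an initial microstate (positions)
vel = rng.normal(0.0, np.sqrt(T), size=(N, 3))   # choice: Maxwell-Boltzmann velocities

# The law only relates the quantities we chose; it never chose them for us.
print(pressure(N, L**3, T))
```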
Yes, which is why I explicitly said the scenario involves an actual/manifest infinity of compute to actually implement the equations and make them a full world. If you wanted to analogize physical laws to a computer system, I’d argue they are analogous to the source code of a computer, or to the rules/state of a Turing machine. And I’m arguing that there is a very vast difference between us simulating Maxwell’s equations or the ideal gas law and the universe simulating whatever physical laws we actually turn out to have. All of the difference is that the universe has an actual/manifest infinity of compute (FLOPs, FLOP/s, and memory), such that it can run the equations directly, without relying on shortcuts to make the problem more tractable, whereas we have to rely on shortcuts that change the physics a little but get us a reasonable answer in a reasonable time.
Oh I misparsed your comment somehow, I don’t even remember how.
This distinction isn’t material. The distinction I am getting at is whether our physics (simulation) is using a large K-incompressible seed or not.
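A rough way to put the distinction in numbers (a sketch with made-up sizes, not a claim about our actual physics): the description length of a simulated history is roughly |program| + |seed|, so a small seed makes the whole history K-compressible, while an incompressible seed as long as the history doesn’t compress at all:

```python
PROGRAM_BITS = 10_000        # assumed size of the transition rule / physics
OUTPUT_BITS = 10**30         # assumed length of the generated history, in bits

for seed_bits in (2_000, 10**30):
    description = PROGRAM_BITS + seed_bits      # shortest known description
    ratio = OUTPUT_BITS / description           # compression factor
    print(f"seed {seed_bits:.0e} bits -> description {description:.0e} bits "
          f"(~{ratio:.0e}x compression)")
```

With the 2000-bit seed, the 10^30-bit history compresses by roughly 26 orders of magnitude; with the incompressible seed it doesn’t compress at all.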
QM doesn’t need a random seed!
The randomness of the Geiger counter comes from wave function decoherence. From the perspective of any observers who are part of the world generated by the Turing machine, this is irreducible indexical uncertainty.
I don’t know how many of the random bits in lava lamps come from decoherence.
I’m fairly sure it isn’t actually compatible with a world that is generated by a Turing machine. The basic problem is all the real-number constants in the universe, which in QM are infinitely precise, not just arbitrarily precise; that wreaks havoc on Turing machine models. Signer also has an explanation of a separate problem that is fatal to the approach.
Connotationally, even if things are pseudorandom, they still might be “random” for all practical purposes, e.g. if the only way to calculate them is to simulate the entire universe. In other words, we may be unable to exploit the pseudorandomness.
Yes, this is exactly the point.
Probability is in the mind. There’s no way to achieve entanglement between what’s necessary to make these predictions and the state of your brain, so for you, some of these are random.
In many-worlds, the Turing machine will compute many copies of you, and there might be more of those who see one thing when they open their eyes than of those who see another. When you open your eyes, there’s some probability of being a copy that sees one thing and some probability of being a copy that sees the other. In a deterministic world with many copies of you, there’s “true” randomness in which copy you find yourself to be when you open your eyes.
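A toy version of that picture (my numbers, and a deliberately naive copy-count): weight two branches by the Born rule versus counting one decohered copy per outcome. This is also the tension raised further down the thread:

```python
import numpy as np

amps = np.array([0.8, 0.6])          # branch amplitudes for |up>, |down>
copies = np.array([1, 1])            # naive count: one copy per decohered outcome

born = amps**2 / np.sum(amps**2)     # weight branches by |amplitude|^2
uniform = copies / copies.sum()      # weight branches by copy-counting

print(born)      # [0.64 0.36]  <- the frequencies experiments actually show
print(uniform)   # [0.5  0.5 ]  <- what uniform copy-counting would predict
```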
I think he’s saying that there’s a simple-ish deterministic machine that uses pseudorandomness to make a world observationally equivalent to ours. Since it’s simple, it has a lot of the reality-juice, so it’s most of “where we really are”.
Yes, but this is kinda incompatible with QM without mangled worlds.
Oh? What do you mean?
I don’t know about mangled worlds.
https://mason.gmu.edu/~rhanson/mangledworlds.html
I mean that if a Turing machine is computing the universe according to the laws of quantum mechanics, observers in such a universe would be distributed uniformly, not by Born probability. So you either need some modification to current physics, such as mangled worlds, or you can postulate that Born probabilities are truly random.
I mean that if a Turing machine is computing the universe according to the laws of quantum mechanics,
I assume you mean the laws of QM except the collapse postulate.
observers in such a universe would be distributed uniformly,
Not at all. The problem is that their observations would mostly not be in a classical basis.
not by Born probability.
Born probability relates to observations, not observers.
So you either need some modification to current physics, such as mangled worlds,
Or collapse. Mangled worlds is kind of a nothing burger: it’s a variation on the idea that interference between superposed states leads to both a classical basis and the Born probabilities, which is an old idea, but without making it any more quantitative.
or you can postulate that Born probabilities are truly random.
Not at all. The problem is that their observations would mostly not be in a classical basis.
??
I phrased it badly, but what I mean is that there is a simulation of Hilbert space, where some regions contain patterns that can be interpreted as observers observing something, and if you count them by similarity, you won’t get counts consistent with the Born measure of those patterns. I don’t think the basis matters in this model, if you change the basis for the observer, the observations, and the similarity threshold simultaneously? A change of basis would just rotate or scale the patterns, without changing how many distinct observers you can interpret them as, right?
Collapse or reality fluid. The point of mangled worlds or some other modification is to evade postulating probabilities on the level of physics.