“Non-zero probability” doesn’t seem like quite the right phrase. If a parameter describing the way things could conceivably turn out to be can take, say, arbitrary real values, then we really want “non-zero probability density.” (It’s mathematically impossible to assign non-zero probability to each of uncountably many disjoint hypotheses because they can’t add to 1.)
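To make the impossibility claim in the parenthetical explicit (a standard pigeonhole sketch, not something the comment spelled out):

```latex
% Suppose P(H_x) > 0 for uncountably many disjoint H_x. Bucket the
% hypotheses by how far their probability is from zero:
%   A_n = { x : P(H_x) > 1/n }.
% The A_n cover an uncountable set, so some A_n is infinite.
% Picking distinct x_1, x_2, ... from that A_n gives
\[
  P\Bigl(\bigcup_{k=1}^{\infty} H_{x_k}\Bigr)
  \;=\; \sum_{k=1}^{\infty} P(H_{x_k})
  \;>\; \sum_{k=1}^{\infty} \frac{1}{n}
  \;=\; \infty ,
\]
% contradicting P <= 1. A density avoids this by assigning positive
% values only to sets of hypotheses, not to individual points.
```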
The first answer that occurred to me was “enumerate all Turing machines” but I’m worried because it seems pretty straightforward to coherently think up a universe that can’t be described by a Turing machine (either because Turing machines aren’t capable of doing computations with infinite-precision real numbers or because they can’t solve the halting problem). More generally I’m worried that “coherently-thinkable” implies “not necessarily describable using math,” and that would make me sad.
I think you can get around that by defining “describe” to mean “for some tolerance t greater than zero, simulate with accuracy within t”. Since computable numbers are dense in the reals, for any t > 0 there will always be a Turing machine that can do the job.
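A toy version of that definition in code (my own sketch; the digit function stands in for a Turing machine that computes the target real's expansion):

```python
from fractions import Fraction
from itertools import count

def approximate(digit, t):
    """Truncate the decimal expansion 0.d1d2d3... (digit(n) returns
    the n-th digit) until the tail bound 10**-n drops below t."""
    approx = Fraction(0)
    for n in count(1):
        approx += Fraction(digit(n), 10 ** n)
        if Fraction(1, 10 ** n) < t:
            # The discarded tail is at most 10**-n < t, so approx is
            # a rational (hence computable) number within tolerance t.
            return approx

# Example: 1/7 = 0.142857 142857 ...
digit_of_one_seventh = lambda n: [1, 4, 2, 8, 5, 7][(n - 1) % 6]
r = approximate(digit_of_one_seventh, Fraction(1, 10 ** 6))
assert abs(r - Fraction(1, 7)) < Fraction(1, 10 ** 6)
```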
The halting problem is insuperable, though. Universes with initial conditions or dynamics that depend on, e.g., Chaitin’s constant are coherently thinkable but not computable.
What about a universe with really mean laws of physics, like gravity that acts in reverse on particles whose masses aren’t computable numbers?
How is that different from “within accuracy t, these particles have those computable masses, but gravity acts backwards on them”?
The intention of my example was that you couldn’t tell for a given particle which direction gravity went.
Wouldn’t you just need one additional bit of information for each particle as an initial condition to make this computable again?
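As a toy sketch of that repair (hypothetical names, nothing from the thread): the direction becomes part of each particle's description, rather than something the simulation must decide by testing whether an exact real is computable.

```python
from dataclasses import dataclass

@dataclass
class Particle:
    mass: float        # a computable approximation, good to tolerance t
    gravity_sign: int  # +1 normal, -1 reversed: the one extra bit

def weight(p: Particle, g: float = 9.81) -> float:
    # No uncomputable test needed: the sign bit is just another
    # initial condition supplied to the Turing machine.
    return p.gravity_sign * p.mass * g

print(weight(Particle(mass=2.0, gravity_sign=-1)))  # -19.62
```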
I don’t think your first point solves the problem. If the universe is exponentially sensitive to initial conditions, then even arbitrarily small inaccuracies in initial conditions make any simulation exponentially worse with time.
The function exp(x − K) grows exponentially in x, but is nevertheless really, really small for any x << K. Unbounded computing resources mean that the analogue of K may be made as large as necessary to satisfy any fixed tolerance t.
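Spelling out the bound (a one-line rearrangement, not from the original comment):

```latex
\[
  e^{\,x - K} \le t
  \quad\Longleftrightarrow\quad
  K \;\ge\; x + \ln\!\frac{1}{t},
\]
% so for any fixed x and any tolerance t > 0, a finite K suffices;
% only letting x grow without bound forces K to grow with it.
```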
For a fixed amount of time. What if you wanted to simulate a universe that runs forever?
Yes, for a fixed amount of time. I should have made that explicit in my definition of “describe”: for some tolerance t greater than zero, simulate results at time T with accuracy within t. Then for any t > 0 and any T there will always be a Turing machine that can do the job.
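A small numerical experiment in that spirit (my own sketch; the chaotic logistic map stands in for a sensitive universe, and decimal precision plays the role of K):

```python
from decimal import Decimal, getcontext

def logistic(x0: str, steps: int, precision: int) -> Decimal:
    """Iterate x -> r*x*(1-x), rounding every operation to
    `precision` significant digits."""
    getcontext().prec = precision
    r = Decimal("3.9")  # a parameter value in the chaotic regime
    x = Decimal(x0)
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

T = 200
reference = logistic("0.5", T, precision=400)  # treated as "exact"
for prec in (50, 100, 200):
    error = abs(logistic("0.5", T, prec) - reference)
    print(f"{prec:3d} digits -> error at step {T}: {error}")
# Rounding noise is amplified roughly exponentially in T, so the
# coarser runs drift measurably from the high-precision reference,
# and extra digits shrink the error at step T. For any fixed T and
# t some finite precision suffices, but no single precision works
# for every T at once.
```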