As far as we know, we can fit the laws of physics on a note card, yet the universe contains well over 10^80 particles, and don’t get me started on the amount of computing power necessary to run it.
But you can’t fit a description of the current state of those particles on a note card, which you would need in order to actually make predictions.
The laws of physics, combined with the initial conditions of the universe, are sufficient to describe the state of all the particles for all eternity.
We don’t really have much of an idea of what the initial conditions are, but there’s no reason to believe that they’re complicated.
Are there any reasons to believe they’re not complicated that don’t rely on assuming a K-complexity prior?
No, nor is there a reason to assume that anything else we don’t know about physics isn’t complicated.
That being said, the probability that even just the parts we know are so simple by coincidence is vanishingly small.
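To make "vanishingly small" concrete: under a uniform prior over $n$-bit descriptions, the usual counting argument (a sketch, not a formal proof) bounds how many descriptions can be compressed at all. There are fewer than $2^{\,n-k+1}$ binary programs of length at most $n-k$, hence fewer than that many $n$-bit strings of Kolmogorov complexity at most $n-k$:

$$
\Pr\big[K(x) \le n - k\big] \;\le\; \frac{2^{\,n-k+1} - 1}{2^{\,n}} \;<\; 2^{\,1-k}.
$$

So a uniformly drawn reality that is compressible by even, say, 100 bits has probability under $2^{-99}$, and laws that fit on a note card are compressed far more aggressively than that.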
Not when you realize that the parts we know are the parts that were simple enough for us to figure out.
No, they’re the parts that are obvious enough to figure out. If there were something complicated that made a big difference, we’d have noticed the effect and just not figured out the mechanism. The parts we don’t understand are the parts that make such a tiny difference that they can’t properly be experimented with.
This kind of depends on what you mean by “big difference” and “complicated enough”.
For example, I can imagine humanity not yet having detected that spontaneous fission isn’t random at all, but is decided by a really good pseudorandom number generator.
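A toy sketch of the idea (not a physics model; the half-life is made up and only the standard library is used): a seeded generator produces decay times that are fully deterministic yet would pass casual statistical checks for randomness.

```python
# Toy sketch, not a physics model: deterministic "decay times" generated
# from a seeded PRNG. The point is only that a deterministic source can
# look statistically indistinguishable from genuine randomness.
import math
import random

rng = random.Random(42)          # fully deterministic given the seed
HALF_LIFE = 1.0                  # hypothetical, arbitrary time units
RATE = math.log(2) / HALF_LIFE

def decay_time() -> float:
    """Sample an exponential inter-decay time by inverse-CDF."""
    u = rng.random()             # pseudo-random, statistically uniform
    return -math.log(1.0 - u) / RATE

samples = [decay_time() for _ in range(100_000)]
mean = sum(samples) / len(samples)
# For a genuine exponential process the sample mean approaches 1/RATE.
print(f"sample mean = {mean:.4f}, expected = {1 / RATE:.4f}")
```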
One might also imagine a ridiculously complicated program that makes elementary particles appear to behave randomly, except that when a huge number of such particles are “assembled” in a strange loop of sufficient complexity to be considered “a human-level self-aware entity”, the pseudo-random micro-behavior leads to powerful effects at the macro-scale. Pretty much anything about what brains do that we don’t yet understand in detail could be hypothesised to arise from this. (E.g., the gods jiggle the dice to make humans risk-averse, or send hurricanes towards groups of people who aren’t capitalist enough, whatever.)
Of course, such hypotheses are very complex and we’d usually say that it’s unreasonable to contemplate them. But using Occam’s razor would be circular in this particular thread.
There are infinitely many possible realities that look simple to program but aren’t; still, most complicated realities don’t look nearly this simple. The Copenhagen interpretation looks the same as Many Worlds, but that’s because it was discovered while people were trying to figure out this reality, not because most possible realities look like this one.
But no alternative has been put forward under which this is more likely, let alone one that assigns nearly as high a probability to a reality that looks like this as Occam’s razor does.
Well, yeah (I mean, there might have been, I just don’t know of any). Don’t get me wrong, I’m not arguing against Occam’s razor.
I just meant that you seem to have given the assertion “the probability that even just the parts we know [about physics] are so simple by coincidence is vanishingly small” as a not-quite-but-kind-of justification for Occam’s razor. (In the previous paragraph you said there’s no reason to think the not-yet-known laws of physics aren’t complicated, and the “that being said” seems to kind of contradict that.) But to judge that the probability of a coincidence is small, or to say “most complicated realities don’t look this simple”, you need either Occam’s razor (simpler is likelier), or some kind of uniform prior (all possible realities are equally likely, and there are more of the complicated-looking ones), or another alternative that hasn’t been put forward yet, as you say. Thus, the circularity.
If for some reason we thought that extremely complicated realities that simulate very simple rules at some scales are far more likely to be observed than all others, then “the probability that even just the parts we know are so simple” would be large. I could see some anthropic argument for this kind of thing. Suppose we discovered that conscious observers cannot exist in environments with overly complicated behavior; then we’d expect to only ever see environments with simple behavior. But (with a “uniform prior”) there are vastly more complex systems that simulate simple behaviors within themselves than there are simple systems. (For any simple behavior, there is an infinity of much more complicated systems that simulate it in a part of themselves; see the sketch below.) Thus, under the “limit complexity for observers” hypothesis we should expect a few layers of simple rules and then an incomprehensibly complicated system they arise out of “by accident”.
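That parenthetical claim is just the usual padding argument from Kolmogorov-complexity land. A toy sketch in Python (the “rule” and the filler are obviously stand-ins):

```python
# Toy illustration of the padding argument: for any simple rule there are
# infinitely many strictly longer programs that exhibit exactly the same
# behavior, because arbitrary dead code never affects the result.
SIMPLE_RULE = "lambda x: x + 1"  # stand-in for some "simple behavior"

def padded_program(n: int) -> str:
    """Wrap the simple rule in n lines of inert filler."""
    dead_code = "\n".join(f"_unused_{i} = {i}" for i in range(n))
    return f"{dead_code}\nrule = {SIMPLE_RULE}"

for n in (0, 10, 1_000):
    namespace = {}
    source = padded_program(n)
    exec(source, namespace)             # build and run each variant
    assert namespace["rule"](41) == 42  # ...all behave identically
    print(f"{n:>5} lines of padding -> program is {len(source)} chars")
```

Under a uniform prior over programs up to any given length, the padded variants vastly outnumber the minimal one, which is what does the work in the anthropic story above.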