I’m glad you raised this issue, because I don’t think it’s a simple task to unpack ‘not one unusual thing has ever happened’ in a way that is neither trivial nor false. (It’s also quite difficult to do this with Egan’s Law.)
A trivial reading of the usualness koan: ‘Since the beginning, nothing that never happens has ever happened.’ (Here ‘unusual’ means ‘violating a Law of Nature, in the sense of violating a true generalization about the universe’.)
A false reading of the usualness koan: ‘Since the beginning, nothing that infrequently happens has ever happened.’
A non-trivial and true reading will be somewhat sophisticated, and will (I think) narrow down what sort of universe we live in a lot more than ‘reality is normal’ or Egan’s Law do. What the koan means is that we live in a cosmos whose structure and dynamics are determined by
a short list of
simple
exceptionless rules
that are uniform across space and time
and deterministic. (Or, more strongly, locally deterministic, allowing one to use spacetime regions to predict the properties of their neighbors.)
A universe in which (cosmically) “unusual” things sometimes happen would be one where the rules vary substantially across spacetime regions, where things sometimes happen for no reason (indeterminism), or where the universal rules are extremely convoluted and gerrymandered.
Interestingly, there is one candidate event for a cosmically unusual thing that happened in our own universe. To our knowledge, only one unusual thing has ever happened — at t=0, a state of low entropy occurred. But since then, every spacetime region has followed the same rules as every other spacetime region. Knowing that our universe is lawful, and knowing everything about any two contiguous spacetime regions, would together (I think) allow one to immediately infer the causal dynamics of all other spacetime regions in the observable universe.
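To make ‘locally deterministic and uniform’ concrete, here is a minimal Python sketch (my illustration, not part of the claim above): a 1-D cellular automaton stands in for the cosmos, and an observer who sees only a small spacetime patch recovers the global rule and can then predict regions it never observed. Rule 110, the patch location, and the sizes are all arbitrary choices.

```python
import random

random.seed(0)

# A stand-in "law of nature": elementary cellular automaton rule 110,
# uniform across space and time, and locally deterministic (each cell
# depends only on its three-cell neighborhood one step earlier).
RULE = 110
TABLE = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(row):
    n = len(row)
    return [TABLE[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
            for i in range(n)]

# Evolve a toy universe from random initial conditions.
width, steps = 200, 100
history = [[random.randint(0, 1) for _ in range(width)]]
for _ in range(steps):
    history.append(step(history[-1]))

# An observer sees only a small spacetime patch...
patch = [row[50:80] for row in history[:20]]

# ...and infers the local dynamics by tabulating which neighborhood
# precedes which successor cell.
learned = {}
for t in range(len(patch) - 1):
    for i in range(1, len(patch[t]) - 1):
        key = (patch[t][i - 1], patch[t][i], patch[t][i + 1])
        learned[key] = patch[t + 1][i]

# Every neighborhood the patch happened to exhibit was learned correctly,
# so the observer can extrapolate to regions it never saw.
assert all(TABLE[k] == v for k, v in learned.items())
print(f"neighborhood rules recovered from the patch: {len(learned)}/8")
```

If the rules instead varied across regions, the tabulation would come back contradictory, which is one way to operationalize ‘a cosmically unusual thing happened here’.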
How do you know the list is short and the rules are simple?
What do the words “short” and “simple” mean here?

I don’t think that most high-complexity algorithms for building a life-permitting observable universe would allow a theory as simple as human physics to approximate the algorithm as well as human physics approximates our observable universe.
Do you think the observable universe is a lot more complicated than it appears?
This is trivially false. Imagine, for the sake of argument, that there is a short, simple set of rules for building a life-permitting observable universe. Now add an arbitrary, small, highly complex perturbation to that set of rules. Voilà: infinitely many high-complexity algorithms which can be well-approximated by low-complexity algorithms.
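A minimal sketch of that construction (my illustration; the contracting map, the table size, and the 1e-9 scale are arbitrary choices): a simple rule plus a tiny, nominally incompressible perturbation yields dynamics that the simple rule still approximates almost perfectly.

```python
import random

# A "short, simple rule set": a contracting affine map.
def simple_step(x):
    return 0.5 * x + 1.0

# An arbitrary, small, highly complex perturbation: a lookup into a long
# random table. (A seeded PRNG table is actually compressible; treat it
# as a stand-in for a genuinely incompressible one.)
random.seed(42)
TABLE = [random.uniform(-1.0, 1.0) for _ in range(10**6)]

def perturbed_step(x):
    return simple_step(x) + 1e-9 * TABLE[hash(round(x, 6)) % len(TABLE)]

x_simple = x_complex = 0.0
for _ in range(1000):
    x_simple = simple_step(x_simple)
    x_complex = perturbed_step(x_complex)

# Because the map is contracting, the accumulated error stays ~1e-9:
# the low-complexity rule approximates the high-complexity one.
print(abs(x_simple - x_complex))
```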
How does demonstrating ‘infinitely many algorithms have property X’ help falsify ‘most algorithms lack property X’? Infinitely many integers end with the string …30811, but that does nothing to suggest that most integers do.
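The density claim is easy to check numerically (a throwaway sketch):

```python
# Infinitely many integers end in ...30811, but their natural density is
# only 1 in 100,000: ending in a fixed five-digit suffix is a single
# residue class mod 10**5.
N = 10**7
count = sum(1 for n in range(1, N + 1) if n % 10**5 == 30811)
print(count, count / N)  # 100, 1e-05
```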
Maybe most random life-permitting algorithms beyond a certain level of complexity have lawful regions where all one’s immediate observations are predictable by simple rules. But in that case I’d want to know the proportion of observers in such universes that are lucky enough to end up in an island of simplicity. (As opposed to being, say, Boltzmann brains.)
The observable universe is enormously complicated, not in its rules but in its configuration (“indexical” complexity = complexity).
most high-complexity algorithms for building a life-permitting observable universe would allow
I have no idea what these algorithms might be, and neither do you. Accordingly, I don’t see any basis for speculating about what they would allow.
Do you think the observable universe is a lot more complicated than it appears?
I think the observable universe appears to be very complicated.
I am still interested in what you mean by “short” and “simple”. The default rule is that “man is the measure of all things”, so presumably you are using these words in the context of what is short and simple for the human brain.
Requiring the universe to be constructed in a way that is short and simple for the brains of a single species on a planet in a provincial star system in some galaxy seems to be carrying the anthropic principle a bit too far.
I have no idea what these algorithms might be, and neither do you. Accordingly, I don’t see any basis for speculating about what they would allow.
Well, let’s think about whether we have a proof of concept. What’s an example of a generalization about high-complexity algorithms that might show most of them to be easily and usefully compressed, for an observer living inside one? At this point it’s OK if we don’t know that the generalization holds; I just want to know what it could even look like to discover that a universe that looks like ours (as opposed to, say, one that looks like a patchwork or a Boltzmann Braintopia) is the norm for high-complexity sapience-permitting worlds.
ETA: Since most conceivable universes are very very complicated, I’d agree that we probably live in a very very complicated universe, if it could be shown that our empirical data doesn’t strongly support nomic simplicity.
The default rule is that “man is the measure of all things”, so presumably you are using these words in the context of what is short and simple for the human brain.
No, I’m saying it’s short and simple relative to the number of ways a universe could be, and short and simple relative to the number of ways a life-bearing universe could be. There’s no upper bound on how complicated a universe could in principle be, but there is a lower bound, and our physics is, even in human terms, not far off from that lower bound.
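One toy way to cash this out (my framing, on the assumption that candidate rule sets can be encoded as binary strings): count rule sets by description length. Among all rule sets of length at most $n$ bits, the fraction of length at most $k$ bits is

$$\frac{2^{k+1}-2}{2^{n+1}-2} \;\approx\; 2^{\,k-n}, \qquad k \ll n,$$

which vanishes exponentially as $n$ grows. On this reading, ‘short’ means ‘exponentially atypical in the space of possible rule sets’, with no reference to human brains.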
Humans have a preference for simple laws because those are the ones we can understand and reason about. The history of physics is a history of coming up with gradually more complex laws that are better approximations to reality.
Why not expect this trend to continue with our best model of reality becoming more and more complex?
In this case I can apply the “short & simple” descriptor to anything at all in the observable universe. That makes it not very useful.
Uh, most of these are demonstrably false, so the koan does not seem overly useful.
The list of rules has been growing awfully quickly, and there is no guarantee that it is finite. Here is the latest list of the basic rules, already quite compressed. And it does not even describe any macroscopic phenomena, which have their own rules, some more ad hoc than others. Thus there is no indication that “the territory” can be fully described by a map with “a short list of simple rules”, though some subsets of it certainly can.
If you think that the above is simple, I shudder to think what you consider complex.
“Exceptionless rules” is either vacuous or false. We observe plenty of phenomena we don’t know the rules for. If the koan means “the rules are still there even for these exceptions, we just don’t have them on the map yet”, this meta-model seems to contradict point 1 (a short list of simple rules).
Uniformity across space and time does seem to apply, roughly, to the observed universe, but plenty of models in modern high-energy physics suggest that the laws and values we observe and deduce might be an accident of some local false-vacuum state, or just one of many options in the chaotically inflating multiverse.
Local determinism is only saved in QM by postulating that “everything possible happens, even if we can never observe it”, and even this locality is severely challenged by EPR/Bell. Even if “the cosmos” were deterministic, it is still not necessarily predictable, both due to chaotic effects and due to potential inherent Knightian uncertainty related to the uninteracted parts of the Big Bang.
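(To illustrate the chaos half of that point with a standard toy model, nothing specific to cosmology: the logistic map at $r = 4$ is fully deterministic, yet a $10^{-12}$ uncertainty in the initial condition destroys predictability within a few dozen steps.)

```python
# Deterministic but unpredictable: two logistic-map trajectories whose
# initial conditions differ by 1e-12 decorrelate by step ~60, since the
# separation roughly doubles each step in the fully chaotic regime.
r = 4.0
x, y = 0.3, 0.3 + 1e-12
for t in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
print(abs(x - y))  # typically order 0.1-1: the trajectories no longer agree
```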
Anyway, I agree that the “weird rules” are there first to explain the (weird and non-weird) observations, but I disagree with the narrow interpretation in the wiki:
The purpose of a theory is to add up to observed reality, rather than something else
The purpose of the rules is actually to change reality, at least as we perceive it. While we are not (yet) able to change some “fundamental” laws, we certainly affect reality by learning (and changing) some of the others.
Nearly all possible lists of rules would be too lengthy and complex to be encoded in a space the size of the observable universe. By comparison, our universe’s rules could likely be written on a t-shirt or two, especially when you consider how many of the rules are structurally similar. So, yes, I consider that simple. Simple to build into a universe simulator, if not simple for a human to intuit. (I’m not sure what you mean by “macroscopic phenomena… have their own rules”.)
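For concreteness, the familiar “t-shirt” compression of the Standard Model is schematic along these lines (species and gauge indices, sums, and gauge fixing all suppressed; this shows the shape of the rules, not a calculationally complete statement):

$$\mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + i\,\bar{\psi}\gamma^{\mu}D_{\mu}\psi + \bar{\psi}_i\,y_{ij}\,\psi_j\,\phi + \mathrm{h.c.} + \left|D_{\mu}\phi\right|^{2} - V(\phi)$$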
I shudder too! There may be simpler possible rules, but the number of sapience-permitting rules much simpler than ours is probably very small. (And it’s certainly vanishingly small relative to the number of sapience-permitting rules more complex than ours.)
For now, we can combine ‘exceptionless’ with ‘uniform across space and time’, unless someone has a thought about how to distinguish the two.
Yes. My expectation is that we live in a bubble of simplicity within a more complex structure. I think the koan is meant to be a generalization about our own observable universe (hence its temporal character), not speculation about our world’s metaphysical substrate. Though it’s obviously at least a clue.
I agree there are costs to saving local determinism (and serious unsolved questions in the neighborhood), but it’s still an extremely plausible model. And when you combine it with non-local determinism, we’ll have accounted for all the plausible hypotheses. Determinism rather than locality is the point I want to emphasize, since it only requires that we deny Collapse interpretations.
EPR doesn’t challenge MWI-style local determinism. (Though it does limit the usefulness of that knowledge, since we don’t know which part of the wave function we’re in.)
The purpose of the rules is actually to change reality, at least as we perceive it.
I’m not seeing why that disagrees with the wiki. One goal is just more proximate than the other, and more specific to the case at hand. The purpose of a hammer is to improve human life; but the purpose of a hammer is also to put nails in stuff.
The Lagrangian in that PDF is about three transformations away from the most compact specification of the SM, which would be “the most general renormalizable field theory with these local symmetries and with fermions transforming in these representations”. If you then wrote down the Lagrangian immediately implied by that specification, then changed field variables to incorporate the effects of the Higgs, and finally chose a gauge and included “ghost” fields, you could get that long expression.
That doesn’t appear to be a list, or a rule, or anything meaningful, since it has no equality or inequality symbols.
Your main point seems basically correct. I think RobbBB is trying to get at something meaningful, but is heading in the wrong direction with his demand for definitive, exceptionless, deterministic rules. It’s all about information, and information accommodates exceptions and non-determinism.
That doesn’t appear to be a list, or a rule, or anything meaningful, since it has no equality or inequality symbols.
Yeah, sorry, this is just the Lagrangian of the Standard Model of particle physics, it’s used to calculate probability amplitudes. That’s where you get the equal sign.