My gut instinct on metacosmology is that if there were a simpler computation than our universe that produced intelligent life, we’d probably be there instead of here. I’m not sure that’s valid anthropics, but it still surprises me to see the apparent assumption (in posts like this) of the opposite conclusion: that there are many universes simpler than ours capable of producing intelligence. (EDIT: that post doesn’t actually make that assumption, and I don’t have another example ready. Still turned out to be a fruitful question, though.)
(Yes, we know cellular automata can implement intelligence, but AFAIK we don’t know that they can do it more simply than by implementing a Turing machine simulating our universe.)
Is there an argument I’ve missed?
I don’t know if we live in the simplest universe giving rise to life, for some deep and philosophically “correct” notion of simplicity. However, I don’t think this is needed for the argument in my post that you are asking about. In fact, while writing that post I was implicitly assuming that the attacker’s universe is about as complex as our own, in order to make my argument harder.
First, I think that if we write down a particular universal prior (e.g. by choosing a universal Turing machine, like a Python interpreter), then we probably won’t be the simplest:
The actual simplest universes will probably be different for different programming languages. This is plausible because those universes are themselves simpler than an implementation of a universal Turing machine, i.e. simpler than the overhead of translating one language into another, so the choice of language doesn’t just wash out. But it’s very hard to really know.
If you believe this, then it’s unlikely that we have the simplest universe according to, say, the Python prior, or any other concrete prior we write down. There’s at most one “right” prior according to which we are the simplest.
Even if that prior were philosophically distinguished, that doesn’t help us unless we do that philosophy and pick out the right universal prior.
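To make the language-dependence concrete, here is a toy Python sketch (not from the original post; the languages, universe names, and description lengths are all invented) of the 2^-length weighting that a concrete universal prior assigns. The only point it illustrates is that which candidate counts as “simplest” can flip when you switch reference languages.

```python
# Toy illustration with made-up numbers: the same three candidate "universes"
# get different description lengths (in bits) under two hypothetical
# reference languages, so a 2^-length prior ranks them differently.
lengths = {
    "universe_A": {"lang1": 50, "lang2": 70},
    "universe_B": {"lang1": 65, "lang2": 55},
    "universe_C": {"lang1": 80, "lang2": 90},
}

def prior(lang):
    """Normalized 2^-length weights over the three candidates."""
    raw = {u: 2.0 ** -bits[lang] for u, bits in lengths.items()}
    total = sum(raw.values())
    return {u: w / total for u, w in raw.items()}

for lang in ("lang1", "lang2"):
    weights = prior(lang)
    print(lang, "says the simplest is", max(weights, key=weights.get))
# lang1 says the simplest is universe_A; lang2 says it is universe_B.
```

The invariance theorem only guarantees that description lengths agree up to an additive constant; if the simplest life-supporting universes are shorter than that constant (as the comment above suggests), then the ranking genuinely depends on the language.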
Even if we did live in the simplest universe according to the chosen universal prior, it seems like the prior would probably still be dominated by a simulation:
The anthropic update (including the inference from the language choice) is a huge advantage for simulators.
It doesn’t change the story that much whether the simulation comes from a universe with our physical laws or with different ones.
The awkwardness of reading out bits, and ensuring that evolved life has maximal control over that channel, probably puts you somewhere other than the absolute simplest universe.
Aside from the arguments I had in mind while writing the post, there is a more philosophical reason (that I’ve thought less about) to think that most early civilizations we care about aren’t living in the simplest universe:
I expect that “most” early civilizations are in “dense” universes, at least if you try to weight them by their intrinsic moral worth.
I expect that it’s simpler to create truly humongous, simple universes with a lower density of life. Note that the complexity differences we are talking about here are very, very small.
Some of those universes will still allow consequentialists to control arbitrary output locations, despite starting from a very low density (e.g. faster than light travel would be very helpful).
That said, I do think that a lot of future influence comes from huge simple universes with easy travel, even if most early civilizations (by moral weight) aren’t in such universes. And if you care about the moral weight of our civilization itself then I think it is plausibly dominated by the simulations, such that weighting by “future influence” is the only real weighting that’s meaningful to apply to early civilizations.
Thanks for the reply, that makes sense.
Maybe our universe isn’t the simplest but the most “productive”, in the sense that mind patterns are amplified by splitting into many different quantum time-streams.
For an answer that follows a very different intuition, take a look at Does Cosmological Evolution Select for Technology? by Jeffrey Shainline. This is up there with aestivation and infinite ethics on the fun-idea scale. He gives a nice summary on Lex Fridman’s podcast, from 2:13:23 to around 2:38:00. I highly recommend listening to that clip; it’s pretty great. The entire episode is really interesting, and it also contains some other supporting context for Shainline’s argument. Caveat: I haven’t read the paper yet.
I guess I’ve always had a vague intuition along the lines that, if you built a game of life world at ~ the scale of our universe and started it in a random initial configuration, there would be many rulesets that are:
Simpler than our laws of physics.
Have a high probability of producing self-preserving and self-replicating patterns after enough time.
Then, I’d expect intelligence to arise convergently as a useful strategy for the patterns to perpetuate / replicate themselves in the game of life’s selection environment.
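For reference, here is how compactly the standard Life ruleset (B3/S23) can be written down. This is just a Python sketch of the transition rule, and code length in Python is of course only a loose proxy for the kind of simplicity a universal prior would measure.

```python
from collections import Counter

def step(live):
    """One update of Conway's Life on a set of live (x, y) cells (B3/S23)."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A glider, one of the simplest self-propagating patterns.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the original pattern shifted by (1, 1)
```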
I would also guess that a large enough game of life world would eventually give rise to an intelligent civilization (especially after observing recent progress on designing ash-clearing machines on some random game of life hobbyist forum that I can’t find now; not sure if that should be a real update, but I hadn’t realized that this was probably possible).
It’s not at all clear to me whether the game of life rules are actually simpler than our physics. I agree it does casually seem that way, but it seems incredibly hard to say right now.
See this comment and its links on what the long-term future of an infinite, randomly-initialized GoL grid looks like. In brief: an infinite field of “ash” (random oscillating or fixed patterns), which would likely eventually (after an exponentially long time?) be invaded by self-replicators.
I conjecture that it will take longer for these patterns to appear in Life than in our universe, though. In our universe we got intelligent by bootstrapping off of simpler replicators, I’m not sure if Life is set up to make that possible/likely...
Doesn’t intelligence require a low-entropy setting to be useful? If your surroundings are all random noise, then the no-free-lunch theorem applies.
My initial thought was that this universe would have low complexity. It has simple rules, and a simple initialization process. However, I suppose that, for a deterministic GoL rule set, the simple initialization process might not result in simple dynamics going forward. I think it depends on whether low-level noise in the exact cell patterns “washes out” for the higher level patterns.
Maybe we need some sort of low entropy initialization or a non-deterministic rule set?
Entropy is less of a problem in GoL than in our universe because the ruleset isn’t reversible, so you don’t need a free-energy source to erase errors.
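A quick way to see the irreversibility (an illustrative check, not from the comment above): the update map is not injective, so distinct pasts can lead to the same present and information is genuinely destroyed. A minimal sketch, assuming the standard B3/S23 rules:

```python
from collections import Counter

def step(live):
    """One Life update on a set of live (x, y) cells (B3/S23)."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# Two distinct states with the same successor: the empty grid, and a single
# isolated live cell (which dies of underpopulation).
assert step(set()) == set()
assert step({(0, 0)}) == set()
# Since the successor doesn't determine the predecessor, the dynamics can
# erase information outright, with no free-energy bookkeeping required.
```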
Are there any problems with an irreversible ruleset?
Not necessarily, it would just be very different from our world. One potential problem is that it can be easier for an irreversible universe to slip into an inert ‘dead’ state, since information can be globally erased.
There is also no possibility of a Poincaré-recurrence-style return to an earlier state via extremely unlikely random fluctuations over extreme lengths of time.
I’m not sure I agree with this. For instance, changing one’s “velocity” in a controlled manner seems nearly impossible in practically all cellular automata for various reasons, partly because they lack Poincaré invariance. Could one have intelligent life without this?
I’m pretty sure you can have intelligent life arise in computational environments that lack any sort of notion of velocity. E.g., the computational environments of the brain and current DL systems seem able to support intelligence, but they don’t have straightforward notions of velocity.
They are created by other intelligent minds, though. What I mean is, would it be adaptive for intelligence to evolve without velocity?
I would analogize it to plants vs animals. Animals tend to be much more intelligent than plants, presumably because their ability to move around means that they have to deal with much more varied conditions, or because they can have much more complex influences on the world. These seem difficult to achieve without varying one’s velocity. There’s also stuff like social relations; immobile organisms might help or hurt each other, but they probably have to do so in much simpler ways, since their positions relative to each other are fixed, while animals can interact with others in more complex ways and have more varied relations.
This is also known as the Simplicity Assumption: “If we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation.”
In a nutshell, the amount of computation needed to perform simulations matters (if resources are somewhat finite in base reality, which is fair to imagine), and over the long term simple simulations will dominate the space of sims.
See here for more info.
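As a toy illustration of that weighting (the numbers and fidelity names here are made up, not from the linked post): if base reality has a fixed compute budget, cheaper simulations can be run more times, so a uniformly random pick over all runs lands on a cheap simulation with probability roughly inversely proportional to its cost.

```python
# Hypothetical compute costs (arbitrary units) for three simulation fidelities.
costs = {"coarse": 1, "medium": 10, "full_physics": 1000}
budget = 10_000  # compute available per fidelity in base reality (arbitrary)

runs = {name: budget // cost for name, cost in costs.items()}
total = sum(runs.values())
for name, n in runs.items():
    print(f"{name}: {n} runs, P(randomly selected run) ~ {n / total:.3f}")
# With these made-up numbers, about 91% of sampled runs are the cheap "coarse" ones.
```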
Questions like this highlight how misguided the current state of anthropic reasoning is.
When one has spent enough time thinking about the anthropic principle, it can seem quite reasonable to raise this question. But take a step back and consider it as a physical/scientific statement: “The universe is likely in the simplest form that could support intelligent life.” It is oddly specific. Why not say “the universe is likely the simplest that could support black holes”, or hydrogen atoms, or very-large-scale integrated circuits? Each hypothesis results in vastly different predictions about what the universe is like. Why favor “intelligent life” above everything else?
People may provide different justifications for this preferential treatment, but it always boils down to this: we are intelligent life, and it is intuitively obvious that one’s own physical existence matters to one’s reasoning about the universe. The Copernican scientific paradigm, however, has no place for the first-person perspective; it requires one to “zoom out” and think from an impartial outsider’s view. This conflict leads to awkward attempts to mix the outsider’s impartiality with first-person self-focus. It gives rise to teleological conclusions like the fine-tuned universe, which is often used as proof of God’s existence, and, less jarringly but far more deceptively, to regarding oneself as the outcome of an imaginary random sampling. It is perhaps unsurprising that anthropic problems so often end up as paradoxes.
To resolve the paradoxes, we have to at least be aware of which viewpoint we are taking when we reason, and make a conscious effort not to mix the first-person perspective with the impartial outsider’s view. To lay the debate to rest, we perhaps need to develop a framework that incorporates the first-person perspective into the scientific paradigm.