A steelmanned version of Egan’s counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan’s original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation!
In Tegmark (2008) (see my other comment):
One such issue is the above-mentioned measure problem, which is in essence the problem of how to deal with annoying infinities and predict conditional probabilities for what an observer should perceive given past observations.
[...]
A second testable prediction of the MUH [Mathematical Universe Hypothesis] is that the Level IV multiverse [the multiverse of all mathematical structures] exists, so that out of all universes containing observers like us, we should expect to find ourselves in a rather typical one. Rigorously carrying out this test requires solving the measure problem, i.e., computing conditional probabilities for observable quantities given other observations (such as our existence) and an assumed theory (such as the MUH, or the hypothesis that only some specific mathematical structure like string theory or the Lie superalgebra mb(3|8) [142] exists). Further work on all aspects of the measure problem is urgently needed regardless of whether the MUH is correct, as this is necessary for observationally testing any theory that involves parallel universes at any level, including cosmological inflation and the string theory landscape [67–71]. Although we are still far from understanding selection effects linked to the requirements for life, we can start testing multiverse predictions by assessing how typical our universe is as regards dark matter, dark energy and neutrinos, because these substances affect only better understood processes like galaxy formation. Early such tests have suggested (albeit using questionable assumptions) that the observed abundance of these three substances is indeed rather typical of what you might measure from a random stable solar system in a multiverse where these abundances vary from universe to universe [42, 134–139].
Tegmark makes a few remarks on using algorithmic complexity as the measure:
It is unclear whether some sort of measure over the Level IV multiverse is required to fully resolve the measure problem, but if this is the case and the CUH [Computable Universe Hypothesis] is correct, then the measure could depend on the algorithmic complexity of the mathematical structures, which would be finite. Labeling them all by finite bit strings s interpreted as real numbers on the unit interval [0, 1) (with the bits giving the binary decimals), the most obvious measure for a given structure S would be the fraction of the unit interval covered by real numbers whose bit strings begin with strings s defining S. A string of length n bits thus gets weight 2^(−n), which means that the measure rewards simpler structures. The analogous measure for computer programs is advocated in [16]. A major concern about such measures is of course that they depend on the choice of representation of structures or computations as bit strings, and no obvious candidate currently exists for which representation to use.
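Tegmark’s 2^(−n) weighting can be illustrated with a short sketch. The encodings below are hypothetical placeholders, since, as the quote notes, no canonical representation of structures as bit strings is known:

```python
# Sketch of the length-based measure from the quote: a structure encoded
# by an n-bit string s covers the subinterval of [0, 1) of all reals
# whose binary expansion begins with s, which has width 2^(-n).

def measure(s: str) -> float:
    """Weight of a structure encoded by bit string s."""
    assert set(s) <= {"0", "1"}
    return 2.0 ** -len(s)

def interval(s: str):
    """The subinterval of [0, 1) covered by reals whose binary
    expansion extends s."""
    lo = int(s, 2) / 2 ** len(s)
    return (lo, lo + measure(s))

# Simpler structures (shorter encodings) get exponentially more weight:
print(measure("01"))      # 0.25
print(measure("010110"))  # 0.015625
print(interval("01"))     # (0.25, 0.5)
```

The intervals for distinct strings of the same length are disjoint, so the weights of any prefix-free set of encodings sum to at most 1, which is what lets the covered fraction of [0, 1) serve as a measure.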
Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua).
In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.
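For concreteness, here is a minimal sketch of how the Born rule assigns measure across branches, assuming a normalized state written out in the branching basis (the amplitudes below are an illustrative example, not drawn from any particular experiment):

```python
import math

# Minimal sketch: Born weights for the branches of a normalized
# superposition. Each branch's measure is |amplitude|^2.

def born_probabilities(amplitudes):
    norm = sum(abs(a) ** 2 for a in amplitudes)
    assert math.isclose(norm, 1.0), "state must be normalized"
    return [abs(a) ** 2 for a in amplitudes]

# A qubit in the state (|0> + i|1>)/sqrt(2): both branches get weight
# 1/2 (up to floating-point rounding), regardless of the phase.
amps = [1 / math.sqrt(2), 1j / math.sqrt(2)]
print(born_probabilities(amps))
```

The point of the analogy: here the weights are fixed by the theory itself (and confirmed by experiment), whereas for the Level IV multiverse no analogous rule is known.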
I don’t understand this at all. Can you give an example of such an appropriate measure?
An example of a measure in this context would be the complexity measure that Tegmark mentioned, as long as we agree on a way to encode mathematical structures (the nonuniqueness of representation is one of the issues that Tegmark brought up).
Whether this is an appropriate measure (i.e., whether it correctly “predicts conditional probabilities for what an observer should perceive given past observations”) is unknown; if we knew how to find out, then we could directly resolve the measure problem!
An example of a context where we can give the explicit measure is the many-worlds interpretation, where, as I mentioned, the Born probabilities resolve the analogous measure problem.
So you are saying that the “Born probabilities” are an example of an “appropriate measure” which, if “postulated,” rebuts Egan’s argument?
Is that correct?
The Born probabilities apply to a different context—the multiple Everett branches of MWI, rather than the interpretative universes available under dust theory. If we had an equivalent of the Born probabilities—a measure—for dust theory, then we’d be able to resolve Egan’s argument one way or another (depending on which way the numbers came out under this measure).
Since we don’t yet know what the measure is, it’s not clear whether Egan’s argument holds: under the “Tegmark computational complexity measure” Egan would be wrong; under the “naive measure” Egan is right. But we need some external evidence to know which measure to use. (By contrast, in the QM case we know the Born probabilities are the correct ones to use, because they correspond to experimental results, and also because, e.g., they’re preserved under a QM system’s unitary evolution.)
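To make the dependence on the choice of measure explicit, here is a toy sketch. The universes and their description lengths are invented for illustration; nothing here is a claim about actual dust configurations:

```python
# Toy illustration of why the choice of measure decides Egan's argument.
# Orderly universes compress to short descriptions; chaotic ones do not,
# but there are far more of them.

universes = {
    # name: length (bits) of its shortest hypothetical description
    "orderly": 20,    # highly regular, compresses to a short program
    "chaotic_a": 50,  # incompressible noise: description ~= raw data
    "chaotic_b": 50,
    "chaotic_c": 50,
}

# Naive measure: count each universe equally; chaotic ones win 3:1.
naive = {u: 1 / len(universes) for u in universes}

# Complexity measure: weight 2^(-description length), then normalize;
# the short-description universe dominates despite being outnumbered.
raw = {u: 2.0 ** -n for u, n in universes.items()}
total = sum(raw.values())
complexity = {u: w / total for u, w in raw.items()}

print(naive["orderly"])       # 0.25: orderly is atypical (Egan right)
print(complexity["orderly"])  # close to 1: orderly dominates (Egan wrong)
```

With more chaotic universes the naive measure tilts even further toward chaos, while the complexity measure barely moves, which is exactly why external evidence is needed to pick between them.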
I would guess you are correct that Egan’s argument hinges on this point. In essence, Egan seems to be making an informal claim about the relative likelihood of an orderly dust universe versus a chaotic one.
Boiled down to its essentials, VincentYu’s argument seems to be that if Egan’s informal claim is incorrect, then Egan’s argument fails. Well duh.