I’m digging into this a little bit, but I’m not following your reasoning. UDT, from what I can see, doesn’t mandate the procedure you outline (perhaps you can show an article where it does). I also don’t see why which decision theory is best should play a strong role here.
Unfortunately, a lot of the knowledge on UDT is scattered across discussions and it’s difficult to locate good references. The UDT point of view is that subjective probabilities are meaningless (the third horn of the anthropic trilemma), thus the only questions it makes sense to ask are decision-theoretic ones. Therefore decision theory does play a strong role in any question involving anthropics. See also this.
But anyway, I think the heart of your objection is “Fragile universes will be strongly discounted in the expected utility because of the amount of coincidences required to create them”. So I’ll freely admit to not understanding how this discounting process works...
The weight of a hypothesis in the Solomonoff prior equals N · 2^{-(K + C)}, where K is its Kolmogorov complexity, C is the number of coin flips needed to produce the given observation, and N is the number of different coin-flip outcomes compatible with that observation. Your fragile universes have high C and low N.
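The effect of this weighting can be made concrete with a toy calculation. Only the formula N · 2^{-(K + C)} comes from the discussion above; all the specific bit counts below are invented purely for illustration:

```python
def weight(K, C, N):
    """Solomonoff-style weight of a hypothesis: N * 2^-(K + C).

    K -- description length of the laws, in bits (Kolmogorov complexity)
    C -- number of random coin flips consumed to produce the observation
    N -- number of coin-flip outcomes compatible with the observation
    """
    return N * 2.0 ** -(K + C)

# Invented numbers: a "normal" theory with simple laws, few coincidences,
# and many compatible outcomes...
normal = weight(K=100, C=20, N=2**15)

# ...versus a "fragile" theory with equally simple laws (same K) but many
# required coincidences (high C) and almost no compatible outcomes (low N).
fragile = weight(K=100, C=200, N=1)

print(normal / fragile)  # the normal theory dominates by a huge factor
```

Even with identical complexity K, the extra 180 coin flips and the loss of outcome multiplicity suppress the fragile hypothesis by a factor of 2^195.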
...but I will note that our current theoretical frameworks (the Standard Model, inflationary cosmology, string theory) contain a large number of constants that are regarded as coincidences, and they also produce a large number of universes like ours in terms of physical law but different in terms of outcome.
Right. But these are weak points of the theory, not strong points: if we find an equally simple theory that doesn’t require these coincidences, it will receive substantially higher weight. In any case, your fragile universes require far more coincidences than any conventional physical theory.
I would also note that fragile-universe “coincidences” don’t seem to me any more coincidental in character than the fact that we happen to live on a planet suitable for life.
In principle, hypotheses with more planets suitable for life also get higher weight, but the effect levels off once you reach O(1) civilizations per current cosmological horizon, because it is offset by the high utility of having the entire future light cone to yourself. This is essentially the anthropic argument for a late filter in the Fermi paradox, and the reason that argument doesn’t work in UDT.
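The leveling-off can be illustrated with a toy model. The functional forms here are my own invented stand-ins, not something from the discussion: suppose a hypothesis’s anthropic weight scales linearly with the expected number n of civilizations per horizon, while the utility available to each civilization scales like 1/n once n exceeds about 1, since the future light cone must then be shared:

```python
def anthropic_value(n):
    """Toy anthropic value of a hypothesis predicting n civilizations
    per cosmological horizon (illustrative functional forms only)."""
    weight = n                      # more civilizations -> more observer-copies
    utility = 1.0 / max(n, 1.0)    # shared future light cone once n > 1
    return weight * utility

for n in [0.01, 0.1, 1.0, 10.0, 1000.0]:
    print(n, anthropic_value(n))
```

The product rises while n is below 1 and then plateaus: in this toy model there is no anthropic pressure toward hypotheses with more than O(1) civilizations per horizon.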
Lastly I would also note that at this point we don’t have a good H1 or H2.
All of the physical theories we have so far are not fragile, therefore they are vastly superior to any fragile physics you might invent.
I’ll dig a little deeper but let me first ask these questions:
What do you define as a coincidence?
Where can I find an explanation of the N 2^{-(K + C)} weighting?
A “coincidence” is an a priori improbable event in your model that has to happen in order to create a situation containing a “copy” of the observer (roughly, any agent with a similar utility function and a similar decision algorithm).
Imagine two universe clusters in the multiverse: one consists of universes running on fragile physics, the other of universes running on normal physics. The fragile cluster contains far fewer agent-copies than the normal cluster (weighted by probability). Now imagine you have to make a decision whose utility depends on whether you are in the fragile cluster or the normal cluster. According to UDT, you have to think as if you are deciding for all copies: if you decide under the assumption that you are in the fragile cluster, all copies decide under that assumption, and if you decide under the assumption that you are in the normal cluster, all copies decide under that assumption. Since the normal cluster is far more “copy-dense”, it pays off much more to decide as if you are in the normal cluster (since utility is aggregated over the entire multiverse).
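A minimal sketch of that comparison, with all numbers invented for illustration: the same policy is executed by every copy in both clusters, and utility is summed over the multiverse weighted by each cluster’s copy count.

```python
# Probability-weighted number of observer-copies per cluster (invented).
copies = {"fragile": 1.0, "normal": 1000.0}

# Utility a single copy receives from a policy, depending on which cluster
# it actually inhabits; each bet pays off only in the matching cluster.
payoff = {
    "bet_fragile": {"fragile": 10.0, "normal": 0.0},
    "bet_normal":  {"fragile": 0.0,  "normal": 10.0},
}

def total_utility(policy):
    # UDT: one decision, executed by every copy; aggregate across clusters.
    return sum(copies[c] * payoff[policy][c] for c in copies)

print(total_utility("bet_fragile"))  # 10.0
print(total_utility("bet_normal"))   # 10000.0
```

Because the normal cluster is 1000 times more copy-dense, acting as if you are in the normal cluster dominates even though both bets have the same per-copy payoff.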
The weighting comes from the Solomonoff prior. For example, see the paper by Legg.