Clarification: The probability is orders of magnitude less, and that difference is not merely maintained but amplified under exponential growth. Example: if p = 0.1 and q = 0.01, then p^n = 1/10^n, while q^n = 1/10^(2n). Thus for all n > 0, p^n is at least 10 times q^n, and in fact is 10^n times q^n, a gap that grows rapidly with n. As you can see, far from making short work of the difference, exponential growth only widens it.
What are the analogs of p, q, and n here?
It feels to me like you’re assuming that P(the universe is increasing in measure) is a function of the universe’s current measure, which seems odd. But if it’s not, then (I believe Stuart’s claim is) no matter how small the probability, an increasing universe eventually accrues enough value to make it the dominant hypothesis in terms of expected value (EV).
I am working on the assumption that we have a low-probability theory which posits that the universe is continually increasing its measure, rather than an independent low probability of a measure increase at every moment.