Well, I wouldn’t be surprised if a bunch of people have come up with similar ideas, but in the post you link to, you apply it only to a rather strange scenario in which the universe is the output of a program, which is allowed to simply generate all possible bit strings, and then decide that in this context the idea has absurd consequences. So I’m not sure that counts as coming up with it as an idea to take seriously...
But I took it seriously enough to come up with a counter-argument against it. Doesn’t that count for something? :)
To be clear, I’m referring to the second post in that thread, where I wrote:
Let me try to generalize the argument that under the universal prior the 1UH gives really weird results. The idea is simply that any sufficiently large and/or long universe that doesn’t repeat has a good chance of including a person with mind state M, so knowing that at least one person with mind state M exists in the real universe doesn’t allow you to eliminate most such universes from the set of possible universes. If we want to get a result that says the real universe is likely to be in a class of intuitively acceptable universes, we would have to build that directly into our prior. That is, make them a priori more likely to be real than all other large/long universes.

Several questions follow if this argument is sound. First, is it acceptable to consciously construct priors with a built-in preference for intuitively acceptable universes? If so, how should this be done? If not, the 1UH is not as intuitive as we thought. We would have to either reject the 1UH or accept the conclusion that the real universe is likely to be really weird.
(In that post, 1UH refers to the hypothesis that only one universe exists. I was apparently assuming that what you call FNC is the only way to do Bayesian updating under the 1UH, so I took this to be an argument against the 1UH, but looking at it now, it’s really more of an argument against FNC.)
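A minimal numerical sketch of that argument (my own toy numbers and universe labels, not anything from the original posts): once a universe is large enough that it almost surely contains someone with mind state M, conditioning on M’s existence can no longer distinguish it from other large universes, weird or not.

```python
# Toy FNC-style update over a hypothetical prior (all numbers invented):
# P(universe | M exists) is proportional to P(universe) * P(M exists | universe).
universes = {
    "small_orderly": {"prior": 0.50, "p_m_exists": 0.01},
    "large_orderly": {"prior": 0.25, "p_m_exists": 1.00},
    "large_weird":   {"prior": 0.25, "p_m_exists": 1.00},
}

joint = {name: u["prior"] * u["p_m_exists"] for name, u in universes.items()}
total = sum(joint.values())
for name, j in joint.items():
    print(f"{name}: {j / total:.3f}")
# small_orderly ~0.010, large_orderly ~0.495, large_weird ~0.495.
# The small universe is nearly ruled out, but the weird large universe keeps
# exactly its prior odds against the orderly one -- which is the quoted worry.
```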
Rather than abandon FNC for the reason you describe, I make the meta-argument that we don’t know that the universe is actually large enough for FNC to have problems, and it seems strange that local problems (like Doomsday or Sleeping Beauty) should depend on this. So whatever modifications to FNC might be needed to make it work in a very large universe should, in the end, not change the answers FNC gives for such problems when a not-incredibly-large universe is assumed.
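For concreteness, here is a minimal sketch (my own toy model; the per-awakening memory probability p is an assumption) of FNC applied to Sleeping Beauty in a not-so-large universe, where it gives the familiar thirder answer:

```python
# FNC conditions on the non-indexical fact "someone with my exact current
# memories exists". Assume each awakening independently produces Beauty's
# exact memory state with some tiny probability p (assumed value below).
p = 1e-6  # chance of this exact memory state arising per awakening

lik_heads = p                 # Heads: one awakening
lik_tails = 1 - (1 - p) ** 2  # Tails: two awakenings (~2p for small p)

posterior_heads = (0.5 * lik_heads) / (0.5 * lik_heads + 0.5 * lik_tails)
print(posterior_heads)  # ~0.333: FNC recovers the "thirder" answer here
```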
Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes? If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes?
It doesn’t fully solve problems associated with very large universes, but I think it likely provides a framework in which those problems will eventually be solved. See this post for more details.
See also this post which explains my current views on the nature of probabilities, which may be needed to understand the “not updating” approach.
If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Sort of. As I explained in a linked comment, when you apply FNC you assign zero probability to the universes not containing someone with your memories and then renormalize the rest. But if your decisions have no consequences in the universes not containing someone with your memories, you end up making the same decisions whether you do this “updating” computation or not. So “not updating” gives the same result in this sense.
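To make that equivalence concrete, a minimal sketch (my own construction; the universes and payoffs are invented) comparing the FNC computation with the unupdated one:

```python
# Toy comparison: FNC updating vs. not updating, when actions have payoff 0
# in every universe lacking someone with memories M.
universes = [
    # (prior, contains_M, payoff of action A, payoff of action B)
    (0.4, False, 0.0, 0.0),  # no one has memories M; decisions are inert here
    (0.3, True,  5.0, 2.0),
    (0.3, True,  1.0, 1.0),
]

def expected_utility(action, fnc_update):
    if fnc_update:
        # FNC: zero out universes without M, renormalize the rest.
        kept = [(prior, payoffs) for prior, has_m, *payoffs in universes if has_m]
        total = sum(prior for prior, _ in kept)
        return sum(prior / total * payoffs[action] for prior, payoffs in kept)
    # "Not updating": weight every universe by its raw prior.
    return sum(prior * payoffs[action] for prior, _, *payoffs in universes)

for action, name in [(0, "A"), (1, "B")]:
    print(name, expected_utility(action, True), expected_utility(action, False))
# The two columns differ only by the constant factor P(M exists) = 0.6, so
# both methods rank A above B: the renormalization never changes a decision.
```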