Rather than abandon FNC for the reason you describe, I make the meta-argument that we don’t know that the universe is actually large enough for FNC to have problems, and it seems strange that local issues (like Doomsday or Sleeping Beauty) should depend on this. So whatever modifications to FNC might be needed to make it work in a very large universe should in the end not actually change the answers FNC gives for such problems when a not-incredibly-large universe is assumed.
Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes? If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes?
It doesn’t fully solve problems associated with very large universes, but I think it likely provides a framework in which those problems will eventually be solved. See this post for more details.
See also this post, which explains my current views on the nature of probabilities; that background may be needed to understand the “not updating” approach.
If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Sort of. As I explained in a linked comment, applying FNC means assigning zero probability to the universes that contain no one with your memories and then renormalizing the rest. But if your decisions have no consequences in those universes, you end up making the same decisions whether or not you perform this “updating” computation. So “not updating” gives the same result in that sense.
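Here is a minimal sketch of that equivalence, with notation I am introducing here (it is not from the linked comment): let $W$ be the set of possible universes, $M \subseteq W$ the subset containing someone with your memories, $P$ your prior over $W$, and $U(a,w)$ the utility of choosing action $a$ when universe $w$ is actual. Assume $U(a,w)$ does not depend on $a$ for $w \notin M$ (your decision has no consequences there). Then

$$\arg\max_a \sum_{w \in W} P(w)\,U(a,w) \;=\; \arg\max_a \Bigg(\sum_{w \in M} P(w)\,U(a,w) + c\Bigg) \;=\; \arg\max_a \sum_{w \in M} \frac{P(w)}{P(M)}\,U(a,w),$$

where $c = \sum_{w \notin M} P(w)\,U(a,w)$ is the same for every $a$ by assumption. Adding an action-independent constant and rescaling by the positive factor $1/P(M)$ both leave the argmax unchanged, and the right-hand side is exactly expected utility under the FNC-renormalized distribution. So “not updating” (the left-hand side) and FNC updating (the right-hand side) recommend the same action.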