Do you see your “not updating” scheme as the appropriate new theory applicable to very large universes?
It doesn’t fully solve problems associated with very large universes, but I think it likely provides a framework in which those problems will eventually be solved. See this post for more details.
See also this post, which explains my current views on the nature of probabilities; that background may be needed to understand the “not updating” approach.
If so, does it in fact give the same result as applying FNC while assuming the universe is not so large?
Sort of. As I explained in a linked comment, applying FNC (Full Non-indexical Conditioning) means assigning zero probability to the universes that do not contain someone with your memories and renormalizing the probabilities of the rest. But if your decisions have no consequences in the universes that do not contain someone with your memories, you end up making the same decisions whether or not you do this “updating” computation. So “not updating” gives the same result in this sense.
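
To make the equivalence concrete, here is a minimal sketch in notation introduced here for illustration (the symbols $P$, $E$, $U$, $w$, and $a$ are mine, not from the original comment). Let $P(w)$ be the prior probability of possible universe $w$, let $E$ be the set of universes containing someone with your memories, and let $U(w, a)$ be the utility realized in universe $w$ if you choose action $a$. FNC-style updating ranks actions by

$$\mathrm{EU}_{\mathrm{FNC}}(a) = \sum_{w \in E} \frac{P(w)}{P(E)}\, U(w, a),$$

while “not updating” ranks them by

$$\mathrm{EU}(a) = \sum_{w} P(w)\, U(w, a) = P(E)\,\mathrm{EU}_{\mathrm{FNC}}(a) + \sum_{w \notin E} P(w)\, U(w, a).$$

If your decisions have no consequences in the universes outside $E$, the final sum does not depend on $a$, so the two quantities differ only by a positive scale factor and an additive constant, and both recommend the same action.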