With regard to your SIA objection, I think it is important to clarify exactly what we mean by evidence conservation here. The usual formulation is something like “If I expect to assign credence X to proposition P at future time T, then I should assign credence X to proposition P right now, unless by time T I expect to have lost information in a predictable way.” Now, if you are going to be duplicated, then it is not exactly clear what you mean by “I expect to assign … at future time T”, since there will be multiple copies of you in existence at time T. Maybe you want to get around this by saying that you are referring to the “original” version of you that exists at time T, rather than any of the duplicates. But then the problem is that by waiting, you really will lose information in a predictable way: right now you know that you are not a duplicate, but the future version of you will not know that it is not a duplicate. Since you are losing information, it is not surprising that your probability predictably changes. So I don’t think SIA violates evidence conservation.
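For what it’s worth, here is one standard way to formalize the principle being appealed to (this formalization is my gloss, not something stated in the exchange): today’s credence should equal today’s expectation of the future credence, provided the future credence comes from ordinary Bayesian conditioning with nothing forgotten and no duplication.

```latex
% One standard formalization (my gloss, not from the exchange): if the time-T credence
% P_T arises from conditioning P_now on some evidence e, with nothing forgotten and
% no duplication, then today's credence equals today's expectation of the future credence:
\[
  P_{\mathrm{now}}(A) \;=\; \mathbb{E}_{\mathrm{now}}\!\bigl[P_{T}(A)\bigr]
  \;=\; \sum_{e} P_{\mathrm{now}}(e)\, P_{\mathrm{now}}(A \mid e).
\]
% Duplication breaks the antecedent twice over: "P_T" is no longer a single
% well-defined credence function (there are several copies of you at T), and the
% future copies have lost the information "I am not a duplicate".
```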
Incidentally, here is an intuition pump that I think supports SIA: suppose I flip a coin, and if it comes up heads I kill you, while if it comes up tails I keep you alive. If you are alive at the end of the experiment, surely you should assign 100% probability to tails (discounting model uncertainty, of course). But you could easily reason that this violates evidence conservation: you know in advance that all future agents descended from you will assign 100% probability to tails, while you currently assign only 50% to tails. This points to the importance of precisely defining and analyzing evidence conservation, as I did above. Additionally, if we generalize to the setting where I make/keep X copies of you if the coin lands heads and Y copies if it lands tails, then SIA gives the elegant formula X/(X+Y) as the probability of heads after the experiment, and it is nice that our straightforward intuitions about the cases X=0 and Y=0 provide a double-check on this formula.
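To make the X/(X+Y) claim concrete, here is a minimal Python sketch (mine, not part of the original comment; the function names are illustrative). It computes the closed-form SIA posterior and double-checks it with a crude frequency simulation in which each world is weighted by how many copies of you it contains.

```python
import random

def sia_posterior_heads(x_copies: int, y_copies: int) -> float:
    """Closed-form SIA posterior for heads with a fair coin:
    each branch is weighted by the number of observers it contains."""
    if x_copies + y_copies == 0:
        raise ValueError("at least one branch must contain an observer")
    return x_copies / (x_copies + y_copies)

def simulated_posterior_heads(x_copies: int, y_copies: int, trials: int = 100_000) -> float:
    """Monte Carlo check: across many runs of the experiment, what fraction of all
    resulting observer-instances find themselves in a heads-world?"""
    heads_observers = 0
    total_observers = 0
    for _ in range(trials):
        heads = random.random() < 0.5                # fair coin
        copies = x_copies if heads else y_copies     # copies made/kept in this branch
        total_observers += copies
        if heads:
            heads_observers += copies
    return heads_observers / total_observers

if __name__ == "__main__":
    x, y = 3, 1
    print(sia_posterior_heads(x, y))        # 0.75
    print(simulated_posterior_heads(x, y))  # ~0.75
    # Sanity checks from the text: x = 0 gives 0.0 (death on heads => certainly tails),
    # y = 0 gives 1.0 (death on tails => certainly heads).
```

The original kill/keep-alive setup is just the special case X = 0, Y = 1.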
Remember that this is about a coin flip that is in the past and known to be in the past, and that the future duplicates can remember everything that their past, potentially-non-duplicate self knew. So they might believe, “Right now I’m not sure whether I’m a duplicate, but I used to think that being a non-duplicate was more likely.” If that information were relevant, they could just put themselves in the shoes of their past selves.
They can’t put themselves in the shoes of their past selves, because in some sense they are not really sure whether they have past selves at all, rather than merely being duplicates of someone. Just because your brain is copied from someone else’s doesn’t mean that you are in the same epistemic state as they were. And the true descendants are also not in the same epistemic state, because they do not know whether or not they are copies.