This doesn’t require faster-than-light signaling. Suppose you and the copy are sent away with identical letters, which you each open after crossing each other’s event horizons. When you open your letter, you learn what was packed with your clone, which lets you predict what your clone will find.
Nothing here requires the event of your clone seeing the letter to affect you. You are affected only by the initial setup.
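As a toy illustration of the “no signaling” point (the code and names below are purely illustrative, not part of the original setup): both letters come from the same initial packing step, so either copy can predict the other’s observation from their own letter alone, with no message ever passing between them.

```python
import random

def pack_identical_letters(seed: int) -> tuple[str, str]:
    """Pack two identical letters from one shared initial setup (the seed)."""
    rng = random.Random(seed)
    message = rng.choice(["red", "green", "blue"])
    return message, message  # the same content goes to you and to your clone

# Shared initial setup, fixed before the two of you separate.
your_letter, clones_letter = pack_identical_letters(seed=42)

# After crossing each other's event horizons, you open your own letter.
# No signal from the clone is involved; you only consult your own copy.
prediction_about_clone = your_letter

assert prediction_about_clone == clones_letter
print(f"You saw '{your_letter}', so you predict your clone saw '{prediction_about_clone}'.")
```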
Another example: if you learn that a star which has crossed your cosmic event horizon was 100 solar masses, it’s fair to infer that it will become a black hole and not a white dwarf.
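As a rough sketch of that kind of inference (the mass cutoffs below are approximate, metallicity-dependent textbook values, used only for illustration):

```python
def likely_remnant(initial_mass_solar: float) -> str:
    """Rough remnant prediction from a star's initial mass, in solar masses.

    The cutoffs (~8 and ~20 M_sun) are approximate and depend on metallicity
    and other details; they're here only to illustrate the inference.
    """
    if initial_mass_solar < 8:
        return "white dwarf"
    elif initial_mass_solar < 20:
        return "neutron star"
    else:
        return "black hole"

# -> "black hole", even though the star is now beyond our horizon
print(likely_remnant(100))
```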
If you can send a probe to a location, then radiation, gravitational waves, etc. from that location will also (under normal conditions) be reaching you, which in principle lets you make fairly solid inferences about certain future phenomena at that location. However, suppose we let the probe fall out of our cosmological horizon: information is reaching it that couldn’t, and can’t, have reached the other probes, or even that probe’s starting position.
In this setup, you’re gaining information about arbitrary phenomena. If you send a probe out beyond your cosmological horizon, there’s no way to infer the results of, for example, non-entangled quantum experiments.
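One way to picture the causal structure behind the probe example, using rough present-day comoving distances (roughly 16–17 billion light-years to the cosmic event horizon and roughly 46 billion light-years to the particle horizon; the numbers and the helper below are illustrative approximations, not anything from the original text):

```python
# Rough comoving distances today, in billions of light-years; approximate
# values used only for illustration (they depend on the cosmological model).
EVENT_HORIZON_GLY = 16.5     # farthest a probe/signal sent now could ever reach
PARTICLE_HORIZON_GLY = 46.5  # farthest regions whose past light has reached us

def causal_relationship(distance_gly: float) -> str:
    """Classify a region by what we can do with it, given its comoving distance."""
    if distance_gly < EVENT_HORIZON_GLY:
        # Anything we can still reach is, under normal conditions, also
        # something we've been receiving light and gravitational waves from.
        return "reachable by a probe AND observable"
    elif distance_gly < PARTICLE_HORIZON_GLY:
        return "observable (its past light reaches us) but no longer reachable"
    else:
        return "neither reachable nor observable"

for d in (10, 30, 60):
    print(d, "Gly:", causal_relationship(d))
```

The point is the asymmetry: every region we can still reach with a probe has (under normal conditions) also been sending us light and gravitational waves, but once the probe drifts past the event horizon it begins receiving information that can never reach us- and no amount of physical inference recovers, say, the outcomes of non-entangled quantum experiments happening out there.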
I think we may eventually determine the complete list of rules and starting conditions for the universe/multiverse/etc. Using our theory of everything and (likely) unobtainable amounts of computing power, we could (perhaps) uniquely locate our branch of the universal wave function (or similar) and draw conclusions about the outcomes of distant quantum experiments (and similar). That’s a serious maybe- I expect that a complete theory of everything would predict infinitely many different instances of us in a way that doesn’t allow for uniquely locating ourselves.
However… this type of reasoning doesn’t look anything like that. If SSA/SSSA require us to have a complete working theory of everything in order to be usable, that’s still invalidating for my current purposes.
For the record, I ran into a more complicated problem that turns out to be incoherent for similar reasons- namely, information can only propagate in specific ways, and it turns out that SSA/SSSA lets you draw conclusions about what your reference class looks like in ways that defy how information can actually propagate.
You are affected by the initial setup. If the clone had counterfactually seen something else, this wouldn’t affect you, according to SIA.
This specific hypothetical doesn’t directly apply to SIA- it relies on adjusting the relative frequencies of different types of observers in your reference class, which isn’t possible under SIA. Still, SIA suffers from a similar problem: it allows you to draw conclusions about what the space of all possible observers looks like.
The space that you can affect is your light cone, and your goals can be “simplified” to “applying your values over the space that you can affect”; therefore, your goal is to apply your values over your light cone. It’s your “only job”.
There is, of course, a specific notion that I intended to evoke by using this rephrasing: the idea that your values apply strongly over humanity’s vast future. It’s possible to value present-day things, people, and so on- and I do. However… whenever I hear that fact offered in response to my suggestion that the future is large and matters more than today, I interpret it as the speaker playing defense for their preexisting strategies. Everyone was aware of it before they said it, and it doesn’t address the central point- it’s...
“There are 4 * 10^20 stars out there. You’re in a prime position to make sure they’re used for something valuable to you- as in, you’re currently experiencing the top 10^-30% most influential hours of human experience because of your early position in human history, etc. Are you going to change your plans and leverage your unique position?”
“No, I think I’ll spend most of my effort doing the things I was already going to do.”
Really? Is that your final answer? What position would you need to be in to decide that planning for the long-term future is worth most of your effort?
“Seeing as how a couple’s baby does not yet exist, it makes very little sense to say that saving money for their clothes and crib is something that they would be doing ‘for’ them.” No, wait, that’s ridiculous- it does make sense to say that you’re doing things “for” people who don’t exist.
We could rephrase these things in terms of doing them for yourself- “you’re only saving for their clothes and crib because you want them to get what they want”. But what do we gain from this rephrasing? The thing you want is for them to get what they want/need. It seems fair to say that you’re doing it for them.
There’s some more complicated discussion to be had on the specific merits of making sure that people exist, but I’m not (currently) interested in having that discussion. My point isn’t really about that- it’s that we should be spending most of our effort on planning for the long-term future.
Also, in the context of artificial intelligence research, it’s an open question where the border of “Future Humanity” lies. “Existing humans” and “Future Humanity” probably have significant overlap, or so the people at MIRI, DeepMind, OpenAI, FHI, etc. tend to argue- and I agree.