But EY’s statement is about terminal values, not injunctions.
Say that at time t=0, you don’t care about any other entities that exist at t=0, including close copy-siblings; that you do care about all your copy-descendants; and that your implementation is such that if you’re copied at t=1, then by default, at t=2 each of the copies will care only about itself. Since you care about both of those copies, their default utility functions differ from yours. As a general principle, your goals will be better fulfilled if other agents have them, so you want to modify yourself so that your copy-descendants will care about their copy-siblings.
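The argument can be made concrete with a toy model. The payoff numbers, the two-action choice, and all function names below are my own assumptions, purely for illustration: each copy can either cooperate (no effect) or defect (gain +1 at a cost of −2 to its sibling). The t=0 agent values both copy-descendants equally, so it prefers copies that weigh their sibling's welfare:

```python
# Toy model: does the t=0 agent prefer copies that care about their siblings?
# All payoffs here are invented for illustration.

def copy_utility(own, sibling, cares_about_sibling):
    """Utility a copy assigns to an outcome (own welfare, sibling welfare)."""
    return own + sibling if cares_about_sibling else own

def act(cares_about_sibling):
    """Each copy picks the action maximizing its own utility function:
    'cooperate' -> (0, 0), or 'defect' -> +1 for itself, -2 for its sibling."""
    options = {"cooperate": (0, 0), "defect": (1, -2)}
    return max(options.values(),
               key=lambda o: copy_utility(o[0], o[1], cares_about_sibling))

def original_utility(cares_about_sibling):
    """The t=0 agent's utility: total welfare of both copy-descendants."""
    a = act(cares_about_sibling)  # copy A's choice (symmetric with B's)
    b = act(cares_about_sibling)
    welfare_a = a[0] + b[1]  # own gain plus cost imposed by sibling
    welfare_b = b[0] + a[1]
    return welfare_a + welfare_b

print(original_utility(False))  # default, self-interested copies defect: -2
print(original_utility(True))   # modified, sibling-caring copies cooperate: 0
```

Under these assumed payoffs, the original agent does strictly better by self-modifying so that its descendants care about each other, which is the conclusion the paragraph above reaches.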
I disagree with your first claim (the statement is too brief and ambiguous to say definitively what it “is about”), but I don’t want to argue it. Let’s leave that kind of interpretationism to the scholastic philosophers, who spend vast amounts of effort figuring out what various famous ancients “really meant”.
The principle “Your goals will be better fulfilled if other agents have them” is very interesting, and I’ll have to think about it.