I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they’re the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.
Your preferences are by definition the things you want to happen. So, you want your future self to be happy iff your future self’s happiness is your preference. Your ideas about moral equivalence are your preferences. Et cetera. If you prefer X to happen and your preferences are changed so that you no longer prefer X, the chance that X happens becomes lower, because you will stop acting to bring X about. So this change of preferences goes against your current preference for X. There might be upsides to the change that compensate for the loss of X. Or not. Decide on a case-by-case basis, but ceteris paribus you don’t want your preferences to change.
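To make the mechanism concrete, here is a minimal sketch (the action names and payoffs are my own illustration, not anything from the comment above): an agent picks whichever action scores highest under its current preferences, so altering the preferences so that X is no longer valued directly removes the behavior that was making X happen.

```python
# Toy model: an agent chooses the action that maximizes its
# CURRENT utility function. Changing the utility function changes
# which action gets chosen, and hence whether X comes about.

def choose_action(utility):
    """Pick the action with the highest utility under the given preferences."""
    actions = ["work_toward_X", "do_nothing"]
    return max(actions, key=utility)

# Original preferences: bringing about X is worth 10, idling is worth 1.
original = {"work_toward_X": 10, "do_nothing": 1}.get

# Altered preferences: X is no longer valued at all.
altered = {"work_toward_X": 0, "do_nothing": 1}.get

print(choose_action(original))  # work_toward_X  (the agent promotes X)
print(choose_action(altered))   # do_nothing     (after the change, it doesn't)
```

The point of the sketch is only that the probability of X is downstream of the preferences driving action selection, which is why, all else equal, a preference change is itself dispreferred.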