With regard to the singularity, and given that we haven’t solved ‘morality’ yet, one might just value “human well-being” or “human flourishing” without referring to a long-term self concept. I.e. you might just care about a future ‘you’, even if that person is actually a different person. As a side effect you might also equally care about everyone else in the future too.
I’m bothered by the apparent assumption that morality is something that can be “solved”.
What about “decided on”?
If we haven’t decided what morality to use yet, then how are we making moral decisions now, and how are we going to decide this later? I think that what you might call “the function that we’ll use to decide our morality later on” is what I call “my morality both now and later”.
Or you might simply mean our morality will keep changing over time (because we will change, and the environment and its moral challenges will also change). That’s certainly true.