There are TONS of moments I forget, but they _do_ leave residue: in income, in effects on other people, or in environmental improvements (the lightbulb I changed continues to work). I'm not sure whether this scenario removes or carries forward unconscious changes in habits or mental pathways, but with real memory loss, victims tend to retain some of these changes, even if they don't consciously remember making them.
I also value human joy in the abstract. Whether some other person, or some un-remembered version of me experiences it, there is value.
If you assign a very, very large value, do you also believe that all mortal lives are of very low value, since they won't have any memory once they die?
They are of no value to them, because they’re dead. They may be of great value to others.
I recognize that time-value-of-utility is unsolved, and generally ignored for this kind of question. But I’m not sure I follow the reasoning that current-you must value future experiences based on what farther-future-you values.
Specifically, why would you require a very large X? Shouldn't you value both possibilities at 0, because you're dead either way?
No, because I’m alive now, and will be until I’m dead. Until then, I have the preferences and values that I have.
Those are instrumental. They are important to consider, but for the purpose of this post I’m mostly interested in fundamental values.
It does for the purpose of making the thought experiment clear-cut. But yeah, that's something I wonder about in practice as well.
Two (mutually exclusive) moral theories I find plausible are (sketched more formally below):
- (All else equal) someone's life is as valuable as its longest instantiation (all shorter instantiations are irrelevant)
- Finite lives are value-less
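
To make the contrast concrete, here is one very rough way to write these down (my own notation, purely a sketch): suppose a life has instantiations with durations $d_1, \dots, d_n$, and $V$ is the value assigned to the life, all else equal.

```latex
% Option 1: only the longest instantiation matters
% (f is some increasing function; shorter instantiations contribute nothing)
\[ V_{\text{longest}} \;=\; f\!\left(\max_i d_i\right) \]

% Option 2: finite lives are value-less
% (zero value unless some instantiation is unending; v_infty is whatever
%  value an unending life would get)
\[ V_{\text{finite-zero}} \;=\;
   \begin{cases}
     0 & \text{if } \max_i d_i < \infty,\\
     v_\infty & \text{otherwise.}
   \end{cases} \]
```

This is only notation, not an argument; it just makes the two options easy to compare side by side.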