I second Manfred and gjm’s comments.

One additional point regarding subjective time. You say:
Strange but true. (If subjective time is slower, the fact that t=20 matters more to us is balanced out by the fact that t=2 and t=.2 also matter more to us.)
But even if I temporally discount by my subjective sense of time, if I can halt subjective time (e.g. by going into digital or cryonic storage), then the thing to do on your analysis is to stay frozen for as long as possible while the colonization wave proceeds via other agents (e.g. von Neumann probes or the rest of society).
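A minimal sketch of that incentive, assuming exponential discounting applied to subjective time only; the discount rate, payoff, and timescale below are hypothetical numbers chosen purely for illustration:

```python
# Sketch (hypothetical numbers): if the discount factor depends only on
# *subjective* time elapsed, an agent frozen in storage accrues no
# discount while objective time, and the colonization wave, passes.
import math

DISCOUNT_RATE = 0.03   # assumed discount rate per subjective year
PAYOFF = 1.0           # value assigned to the completed colonization wave
WAVE_YEARS = 1e6       # assumed objective years until the wave completes

def discounted_value(subjective_years_elapsed: float) -> float:
    """Exponential discounting over subjective time only."""
    return PAYOFF * math.exp(-DISCOUNT_RATE * subjective_years_elapsed)

# Staying awake: subjective time tracks objective time.
awake_value = discounted_value(WAVE_YEARS)

# Freezing (digital or cryonic storage): zero subjective time passes.
frozen_value = discounted_value(0.0)

print(f"awake:  {awake_value:.3e}")   # effectively 0: discounted away
print(f"frozen: {frozen_value:.3e}")  # 1.0: undiscounted
```

Because the frozen agent’s subjective clock never ticks, no discount accrues however long the wave takes, so freezing dominates staying awake under any positive subjective discount rate.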
Now, in fact, I wouldn’t care for this strategy at all. If we’re talking about distant galaxies being colonized with happy people whom I never then interact with, I don’t care whether they are five years in the future or a billion. I don’t care additively and unboundedly about them, but temporal discounting is a bad way to represent my bounded concern. For instance, the possibility that physics might surprisingly turn out to allow indefinite exponential growth (maybe by creating baby universes, or because we turn out to be simulations in a universe with different physics than we observe) isn’t unboundedly motivating to me.
In his “Infinite Ethics” and “Astronomical Waste” papers, Nick Bostrom discusses this general phenomenon: time discounting is proposed as a patch to create a framework for cost-benefit analysis that does not recommend big current sacrifices for future people (better representing folks’ behavioral preferences), but it fails to do so because of uncertainty about the growth possibilities. [Edited per Steven’s request].
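A toy illustration of that failure mode (the discount rate, growth rate, and credence below are all hypothetical): any positive credence in growth that outpaces the discount rate makes the expected discounted value of the future diverge, so the discounting “patch” still ends up recommending arbitrarily large current sacrifices.

```python
# Sketch (hypothetical numbers): exponential discounting at rate r is
# meant to cap the value of the future, but a tiny credence p in
# indefinite growth at rate g > r makes the expected discounted value
# grow without bound as the horizon extends.
import math

r = 0.05   # assumed discount rate
g = 0.06   # assumed growth rate in the surprising-physics scenario
p = 1e-9   # assumed (tiny) credence in that scenario

def expected_discounted_value(horizon_years: int) -> float:
    """p-weighted sum of discounted, exponentially growing payoffs."""
    return p * sum(math.exp((g - r) * t) for t in range(horizon_years))

for horizon in (1_000, 5_000, 10_000):
    print(f"{horizon:>6} years: {expected_discounted_value(horizon):.3e}")
# The partial sums diverge: whatever present sacrifice is contemplated,
# a long enough horizon makes the expected benefit exceed it.
```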
Is there a better word for what you call “fanaticism”? Too many connotations.