It’s unclear to me how rationality and life extension are related. Are you thinking about the following, or something different?
1. Lots of philosophical and cultural effort has been put into accepting the inevitability of death, but that acceptance is mistakenly extended to accepting the nearness of death, even though changing technology has put the nearness of death in play. Rationality helps carve out the parts of that acceptance which are no longer appropriate.
2. Life extension is one of the generic instrumental goods: whatever specific goals you have, you can probably achieve more of them with a longer life than with a shorter one. This makes it a candidate common interest of many causes.
3. Rationality habits are especially useful in life extension research because of the deep importance of reasoning from uncertain data; 30-year-olds can’t wait for a 60-year study of intermittent fasting to complete before deciding whether they should start intermittent fasting at 30.
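As a toy illustration of that third point (all of these numbers are made-up placeholders, not actual estimates), the expected-value comparison looks something like this:

```python
# Minimal expected-value sketch: start an uncertain intervention at 30 vs. wait
# for a definitive study to finish. Every number is a hypothetical placeholder.

p_works = 0.3               # assumed probability the intervention actually helps
gain_if_works = 5.0         # assumed life-years gained if it works and you start at 30
annual_cost = 0.02          # assumed yearly cost (inconvenience) in life-year terms
years_until_certainty = 60  # a definitive study would only settle things decades later

# Option A: start now, on uncertain evidence, and keep it up while waiting for data.
ev_start_now = p_works * gain_if_works - years_until_certainty * annual_cost

# Option B: wait for the study; by the time it reports, the window in which the
# intervention could have helped has largely closed.
ev_wait = 0.0

print(f"EV(start at 30) = {ev_start_now:+.2f} life-years")
print(f"EV(wait)        = {ev_wait:+.2f} life-years")
```

With these placeholder numbers the expected value of acting now comes out positive, which is the general point: the decision has to be made from priors and interim evidence, not from a completed 60-year trial.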
I have been thinking about all three things. I have strong connections with the life extension community, and we often discuss such topics.
I am planning to write about how much time you could buy by spending money on life extension, at the personal level and at the societal level. I want to show that fighting aging is underrated from an effective altruism point of view; I would rank it as the second most effective way to prevent suffering, after x-risk prevention.
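A rough shape such an estimate might take (every figure below is an arbitrary placeholder, not a real estimate):

```python
# Skeleton of a "how many life-years does a dollar buy" estimate at the personal
# and societal levels. Every figure is an arbitrary placeholder.

# Personal level: money spent on interventions with some chance of adding life-years.
annual_spend = 10_000            # dollars per year, assumed
years_of_spending = 40           # assumed
p_personal_works = 0.2           # assumed probability the interventions help
personal_gain = 3.0              # assumed life-years gained if they work
personal_cost_per_life_year = (annual_spend * years_of_spending) / (
    p_personal_works * personal_gain
)

# Societal level: funding aging research whose success would benefit many people.
research_spend = 1e9             # dollars, assumed
p_research_succeeds = 0.1        # assumed probability of a meaningful therapy
beneficiaries = 1e8              # assumed number of people who would benefit
gain_per_beneficiary = 2.0       # assumed average life-years gained each
societal_cost_per_life_year = research_spend / (
    p_research_succeeds * beneficiaries * gain_per_beneficiary
)

print(f"Personal: ~${personal_cost_per_life_year:,.0f} per expected life-year")
print(f"Societal: ~${societal_cost_per_life_year:,.2f} per expected life-year")
```

The EA-style comparison would then set those figures against estimates for other causes; the claim I want to defend is about where the societal figure lands, not that these placeholders are right.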
I have a feeling that because most EA people are young, they are less interested in fighting aging: it feels remote to them, and they expect to survive until Strong AI anyway, which will either kill them or make them immortal (or do something even better that we can’t guess).
There’s a general point that lots of futurists are the sort of people who would normally have very low time preference (that is, a low internal interest rate) but who behave in high-time-preference ways because of their beliefs about the world; this causes lots of predictable problems and is not obviously the right way to cash out those beliefs. (For example, consider the joke of ‘the Singularity is my retirement plan,’ which is not entirely a joke if you expect AI to hit in, say, 2040 but don’t expect to be able to start collecting from an IRA until 2050.)
Maybe the right approach is to explicitly handle the short, medium, and long time horizons and to invest effort along each of those lines. Things like life extension, which make more sense in long-time-horizon worlds, are probably still worth investing in even if there’s only a 10-30% chance we actually have that long.
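As a toy version of that weighting (probabilities and payoffs are placeholders, not forecasts):

```python
# Sketch of hedging effort across time horizons, weighted by the probability
# that each horizon actually materializes. All values are placeholders.

scenarios = {
    # horizon: (assumed probability the world lasts that long in a recognizable
    #           form, assumed payoff of the horizon-appropriate investment)
    "short (disruption within years)":  (0.5, 1.0),
    "medium (decades of normality)":    (0.3, 2.0),
    "long (life extension pays off)":   (0.2, 5.0),
}

# Expected value of investing along each horizon, ignoring interactions.
for name, (p, payoff) in scenarios.items():
    print(f"{name:36s} EV = {p * payoff:.2f}")
```

Even at a 20% chance of the long-horizon world, the long-horizon investment carries expected value comparable to the others, which is the argument for hedging across all three rather than betting everything on one timeline.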
“I want to show that fighting aging is underrated from an effective altruism point of view; I would rank it as the second most effective way to prevent suffering, after x-risk prevention.”
I’d be very interested in seeing this.