I think life extension should be discussed more here.
Many rationalists disappoint me with respect to life extension. Too many of them recognize that physical conditioning is important, yet very few do the right things. Most rationalists who understand that physical conditioning matters think they should do something, but that something tends to be almost exclusively lifting weights, with little to no cardiovascular exercise. (I consider walking to barely qualify as cardiovascular exercise, by the way.) I think both are important, but if you could only do one, I'd pick cardio, because it's the far more effective way to improve your cardiovascular capacity. (Cardiovascular capacity/VO2max correlates well with longevity, as discussed here.) I'm not alone in believing that cardio is the more important of the two; similar things have been said for a long time. I'd recommend Ken Cooper's first book for more on this perspective.
The inability of rationalists to do regular cardiovascular exercise probably stems from the same problems that cause cryocrastination. I'd like to see more on actually implementing cardiovascular exercise routines; I have some notes on this that could help. Off the top of my head: there's evidence that morning runners maintain the habit better, and that exercising in a group helps with compliance. I personally find Beeminder helps a little, but not much.
It’s unclear to me how rationality and life extension are related. Are you thinking about the following, or something different?
Lots of philosophical and cultural effort has been put into accepting the inevitability of death, but this is mistakenly used to accept the nearness of death, even though changing technology has put that timeline in play. Rationality helps carve out the parts of that tradition which are no longer appropriate.
Life extension is one of the generic instrumental goods, in that whatever specific goals you have, you can probably get more of them with a longer life than a shorter one. This makes it a candidate as a common interest of many causes.
Rationality habits are especially useful in life extension research, because of the deep importance of reasoning from uncertain data; 30-year-olds can't wait for a 60-year study of intermittent fasting to complete before deciding whether to start intermittent fasting at 30.
I have been thinking about all three. I have strong connections with the life extension community, and we often discuss such topics.
I am planning to write about how much time you could buy by spending money on life extension, at both the personal and the societal level. I want to show that fighting aging is underestimated from an effective altruist point of view; I would name it the second most effective way to prevent suffering, after x-risk prevention.
I have a feeling that because most EA people are young, they are less interested in fighting aging: it is remote to them, and they expect to survive until strong AI arrives anyway, which will either kill them or make them immortal (or even something better that we can't guess).
There’s a general point that lots of futurists are the sort of people who would normally be very low time preference (that is, they have a low internal interest rate) but who behave in high time preference ways because of their beliefs about the world, and this causes lots of predictable problems and is not obviously the right way to cash out their beliefs about the world. (For example, consider the joke of ‘the Singularity is my retirement plan,’ which is not entirely a joke if you expect AI to hit in, say, 2040 but for you to be able to start collecting from an IRA in 2050.)
Maybe the right approach is that it’s worth explicitly handling the short, medium, and long time horizons and investing effort along each of those lines. Things like life extension that make more sense in long time horizon worlds are probably still worth investing in, even if there’s only a 10-30% chance we actually have that long.
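The horizon-weighting idea can be made concrete with a toy expected-value calculation. All probabilities and payoffs below are illustrative placeholders I made up, not claims about the actual numbers; the point is only that a long-horizon investment like life extension can dominate even when the long-horizon world is unlikely:

```python
# Toy sketch: weigh effort across time horizons by the probability
# that the world lasts that long times the payoff of investing for it.
# All numbers are invented placeholders for illustration.

horizons = {
    # name: (probability we reach this horizon, payoff of investing for it)
    "short (0-10y)":   (0.95, 1.0),
    "medium (10-30y)": (0.60, 3.0),
    "long (30y+)":     (0.20, 10.0),  # e.g. life extension pays off here
}

for name, (p, payoff) in horizons.items():
    print(f"{name}: expected value = {p * payoff:.2f}")
```

With these particular placeholder numbers, the long horizon still has the highest expected value despite only a 20% chance of mattering, which is the shape of the argument above.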
Rationality of life extension. Or maybe it was already explored and I just don't know?
I’d be very interested in seeing this.
We have posts like http://lesswrong.com/lw/jrt/lifestyle_interventions_to_increase_longevity/