I’m skeptical of the premise of the question.
I do not think your stated basis for thinking rationalists are not concerned with mortality is sufficient to grant you that it is true.
I’d be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event with a focus on life extension (not counting cryonics; that was last Sunday)? Any major figures in the community focused on this area?
To be clear, I don’t mean “concerned about a war in Ukraine” level, I mean “concerned about AI alignment” level. Since these are the two most likely ways for the present-day community members to die, with the exact proportion between them depending on one’s age and AI timeline estimates, I would expect a roughly comparable level of attention, and that is very much not what I observe. Am I looking in the wrong places?
Tags on LW: Longevity, Aging
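To make “exact proportion” concrete, here’s a minimal back-of-envelope sketch in Python. Everything in it is an illustrative assumption of mine (the Gompertz hazard parameters, the AGI timelines, the P(AI doom) figure), not anyone’s published estimate:

```python
# Toy comparison: chance of dying of aging-related causes before an assumed
# AGI date vs. an assumed probability of dying from misaligned AI.
# All parameters are illustrative assumptions, not community estimates.
import math

def p_death_from_aging(age_now, years_until_agi, a=0.00005, b=0.085):
    """P(death before AGI arrives) under a Gompertz hazard
    mu(age) = a * exp(b * age); a and b here are rough fits to
    rich-country life tables."""
    # Survival probability is exp(-integral of the hazard over the period).
    cumulative_hazard = (a / b) * (math.exp(b * (age_now + years_until_agi))
                                   - math.exp(b * age_now))
    return 1 - math.exp(-cumulative_hazard)

p_ai_doom = 0.2  # assumed purely for illustration

for age in (25, 40, 60):
    for timeline in (10, 30):
        p_aging = p_death_from_aging(age, timeline)
        print(f"age {age}, AGI in {timeline}y: "
              f"P(die of aging first) ~ {p_aging:.2f} vs P(AI doom) = {p_ai_doom}")
```

Under these toy numbers, a 25-year-old with 10-year timelines should worry mostly about AI (~0.01 vs 0.2), while a 60-year-old with 30-year timelines should worry mostly about aging (~0.68 vs 0.2); that’s the dependence on age and timelines I mean.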
The now-defunct Longevity Research Institute and Daphnia Labs were founded and run by Sarah Constantin. Geroscience magazine was run by someone at a rationalist house. SENS is adjacent. At least one ACX grant went to support a longevity researcher. I also know of private projects that have never been announced publicly.
It is not AI-level attention, but it is much more than is given to Ukraine.
I agree, Ukraine was an exaggeration. I’d checked the tags and grants before asking the question, and I’m well aware of SENS, but I’d never thought of it as adjacent or heard it described that way. Is it? I also didn’t know about the three defunct institutions, so I should raise my estimate somewhat.
I’m not arguing that you’re wrong; I’m just saying that you seem to have assumed it was true without really setting out to prove it or line up convincing evidence. It just struck me that you seemed to be asking “why” before answering “if”.
I’m also not sure that the answers to your questions in this comment are necessarily as revealing as they might seem at first glance. For example, more of the low-hanging fruit might be picked WRT mortality... not as much left to be revealed. Maybe mortality is mostly about making ourselves do the right thing and akrasia-type stuff, which gets discussed plenty.
It might be that you’re right, but if I were you I’d want to determine that first.
I have indeed spent a certain amount of time figuring out whether it’s the case, and the answer I came to was “yep, definitely”. I’ve edited the question to make this clearer. I didn’t lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples (as Elizabeth did, and in a certain stretched sense so did Ben Pace).
>low-hanging fruit might be picked WRT mortality
I’m doubtful, but I can certainly see a strong argument for this! However, my point is that, like with existential risks, it is a serious enough problem to be worth focusing on even after the low-hanging fruit has been picked.
>Maybe mortality is mostly about making ourselves do the right thing and akrasia-type stuff
Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you’re lucky.
>I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples
Well, I’m not arguing in bad faith. In fact, I’m almost not arguing at all! If your premise is correct, I think it’s a very good question to ask!
To the extent I am arguing, it’s with the assumption behind the premise. To me, it does not seem readily apparent that rationalists are less concerned with mortality than they are with AI risk. At least not so readily apparent that it can just be glossed over.
>I’m doubtful, but I can certainly see a strong argument for this!
To be clear, here I’m not actually making the low-hanging fruit argument. I’m just pointing out one of the things that came to mind that make your premise not so readily apparent to me. Another thing I thought about is that hardly anyone outside of the rationalist community thinks, or has ever thought, about AI risk. Most people probably don’t even acknowledge that AI risk is a thing. Mortality, by contrast, is thought about by everyone, and always has been. It’s almost as if mortality-risk concern is in a different reference class from AI-risk concern.
To summarize my objection to just glossing over the premise of your question: the relative amounts of rationalist activity surrounding mortality and AI risk are, to me, not sufficiently indicative of relative concern for you to skip over the basis for your question. If you are correct, I think that’s very important, but it’s not obvious to me that you are, and so I think it’s worth actually making the argument rather than glossing over it.
I spend maybe 2 minutes per day ensuring my doors are locked and maybe an hour per day picking out clothes, getting dressed, washing my face, doing my hair, etc. I don’t think that means I’m less concerned about the physical security of my home than about my physical appearance!
>Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you’re lucky.
Yeah, I’m talking about exercise and “eating healthy” and all the stuff that everyone knows you should do but many don’t because it’s unpleasant and hard.
Anyway, I also think it’s likely that the questions I’d want answered are so adjacent to the question you want answered that a good answer to any of them will largely answer all of them.
Technically, probably yes. But the specific position of “this is something we can and should do something about right now” is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures tentatively flirting with it. So to me these are two very comparable positions: both quite unconventional, but also quite obvious if you reason from first principles and some basic background knowledge. Maybe that’s why I sound a bit frustrated or negative: it feels like the people who clearly should be able to reach this conclusion, for some reason, don’t. And that’s basically why I’m asking this question: to understand why they don’t, or what I’m missing, or whatever else is going on.
By the way, can you clarify what your take is on the premise of the question? I’m still not sure whether you think:
Rationalists are paying comparatively little attention to mortality and it is justified
Rationalists are paying comparatively little attention to mortality and it is not justified
Rationalists are paying a comparatively large amount of attention to mortality and I’m just not looking in the right places
Something else
>Yeah, I’m talking about exercise and “eating healthy” and all the stuff that everyone knows you should do but many don’t because it’s unpleasant and hard.
OK, in that case the akrasia etc. debates are very relevant. But even so, not everybody knows. Maybe the facts themselves, that you should exercise and watch what you eat, are relatively uncontroversial (although I still remember the dark days when EY himself was advocating on Facebook that “calories in / calories out” is bullshit). But exactly what kinds of diet and exercise are optimal for longevity is a hugely controversial topic, and mainly not for lack of data but for lack of interpretation, i.e. something we could well try to do on LessWrong. So it’d be cool to see more posts like this.
>By the way, can you clarify what your take is on the premise of the question?
I lean towards “little attention, and it is not justified”, but I’m really just feeling around in the dark here... and thus my bit of frustration at jumping right past the step of determining whether this is actually the case.
I can imagine plausible arguments for each of the options you give (and more), and I’m not entirely convinced by any of them.
Are you aware of SENS? There is massive overlap between them and the rationality community here in the Bay Area. They are, however, surprisingly underfunded and receive relatively little attention on sites like this compared with, say, AI alignment. So I see your point.
I’m well aware of them, but this comment section is the first time I’ve heard there’s a non-trivial overlap! Are you saying many active rationalists are SENS supporters?
It is one of the most common charities donated to by effective altruists here. But what I’m also saying is that many of the people working at SENS have had some level of exposure to the LessWrong/rationalist community.
Hmm, that’s interesting. I need to find those people.