I’m not arguing that you’re wrong; I’m just saying that you seem to have assumed it was true without really setting out to prove it or line up convincing evidence. It just struck me that you seemed to be asking “why” before answering “if”.
I’m also not sure that the answers to your questions in this comment are necessarily as revealing as they might seem at first glance. For example, more of the low-hanging fruit might be picked WRT mortality... not as much left to be revealed. Maybe mortality is mostly about making ourselves do the right thing and akrasia-type stuff, which gets discussed plenty.
It might be that you’re right, but if I were you I’d want to determine that first.
I have indeed spent a certain amount of time figuring out whether it’s the case, and the answer I came to was “yep, definitely”. I’ve edited the question to make it clearer. I didn’t lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or simply point to counterexamples (as Elizabeth did, and, in a somewhat stretched sense, Ben Pace).
>low-hanging fruit might be picked WRT mortality
I’m doubtful, but I can certainly see a strong argument for this! However, my point is that, like existential risk, it is a serious enough problem to be worth focusing on even after the low-hanging fruit has been picked.
>Maybe mortality is mostly about making ourselves do the right thing and akrasia-type stuff
Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you’re lucky.
>I assumed anyone arguing in good faith would either accept the premise based on their own experience, or simply point to counterexamples
Well, I’m not arguing in bad faith. In fact, I’m almost not arguing at all! If your premise is correct, I think it’s a very good question to ask!
To the extent that I am arguing, it’s with the assumption behind the premise. To me, it does not seem readily apparent that rationalists are less concerned with mortality than they are with AI risk. At least, not so readily apparent that it can just be glossed over.
>I’m doubtful, but I can certainly see a strong argument for this!
To be clear, here I’m not actually making the low-hanging fruit argument. I’m just pointing out one of the things that came to mind that make your premise less than readily apparent to me. Another thing I thought about is that hardly anyone outside of the rationalist community thinks, or has ever thought, about AI risk. Most people probably don’t even acknowledge that AI risk is a thing. Mortality, by contrast, has been thought about by everyone, forever. It’s almost as if concern about mortality risk belongs to a different reference class than concern about AI risk.
To summarize my objection to glossing over the premise of your question: the relative amounts of rationalist activity surrounding mortality and AI risk are not, to me, sufficiently indicative of relative concern to justify skipping that step. If you are correct, I think it’s very important, but it’s not obvious to me that you are correct, which is why I think it’s worth actually making that argument rather than glossing over it.
I spend maybe two minutes per day making sure my doors are locked, and maybe an hour per day picking out clothes, getting dressed, washing my face, doing my hair, etc. I don’t think that means I’m less concerned about the physical security of my home than about my physical appearance!
>Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you’re lucky.
Yeah, I’m talking about exercise and “eating healthy” and all the stuff everyone knows they should do but many don’t, because it’s unpleasant and hard.
Anyway, I also think it’s likely that the questions I’d want answered are so adjacent to the question you want answered that a good answer to any of them will largely answer all of them.
Technically, probably yes, but the specific position of “it is something we can and should do something about right now” is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures tentatively flirting with it. So to me these are two very comparable positions: both very unconventional, but also very obvious if you reason from first principles and some basic background knowledge. Maybe that’s why I sound a bit frustrated or negative: it feels like the people who clearly should be able to reach this conclusion, for some reason, don’t. And that’s why I’m asking this question in the first place: to understand why they don’t, or what I’m missing, or whatever else is going on.
By the way, can you clarify your take on the premise of the question? I’m still not sure whether you think:
- Rationalists are paying comparatively little attention to mortality, and it is justified
- Rationalists are paying comparatively little attention to mortality, and it is not justified
- Rationalists are paying comparatively a lot of attention to mortality, and I’m just not looking in the right places
- Something else
>Yeah, I’m talking about exercise and “eating healthy” and all the stuff everyone knows they should do but many don’t, because it’s unpleasant and hard.
OK, in that case the akrasia etc. debates are very relevant. But even so, not everybody knows. Maybe the facts that you should exercise and watch what you eat are themselves relatively uncontroversial (although I still remember the dark days when EY himself was arguing on Facebook that “calories in / calories out” is bullshit). But exactly what kinds of diet and exercise are optimal for longevity is a hugely controversial topic, and mainly not for lack of data but for lack of interpretation, i.e. something we could well try to do on LessWrong. So it’d be cool to see more posts like this.
>By the way, can you clarify your take on the premise of the question?
I lean towards “little attention, and it is not justified”, but I’m really just feeling around in the dark here... hence my bit of frustration at jumping right past the step of determining whether this is actually the case.
I can imagine plausible arguments for each of the options you give (and more), and I’m not entirely convinced by any of them.