I think your spreadsheet’s calculation is not quite right.
Your column “Cum Prob Death (Natural)” is computed correctly. For each marginal increment, you take the probability of natural death at that specific age (“Prob Death (Natural)”), and discount it by the probability of survival until that specific age.
However, you don’t discount like this when computing “Cum Prob Death (AGI).” So it includes probability mass from timelines where you’ve already died before year Y, and treats you as “dying from AGI in year Y” in some of these timelines.
Once this is corrected, the ratio you compute goes down a bit, though it’s not dramatically different. (See my modified sheet here.)
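For concreteness, here is a minimal sketch of the discounting described above, in Python with purely illustrative probability numbers; the inputs, variable names, and loop are my own and aren't taken from either spreadsheet.

```python
# A minimal sketch (hypothetical numbers) of the survival-discounting described above.
# p_natural[y] and p_agi[y] are the marginal probabilities of dying from each
# cause in year y, conditional on being alive at the start of that year.

p_natural = [0.001, 0.0012, 0.0014, 0.0016]  # illustrative values only
p_agi     = [0.002, 0.0030, 0.0040, 0.0050]  # illustrative values only

alive = 1.0          # probability of having survived to the start of the year
cum_natural = 0.0    # cumulative probability of dying naturally by year y
cum_agi = 0.0        # cumulative probability of dying from AGI by year y

for pn, pa in zip(p_natural, p_agi):
    # Discount each marginal probability by the chance of still being alive,
    # so probability mass from timelines where you already died isn't counted twice.
    cum_natural += alive * pn
    cum_agi     += alive * pa
    alive       *= (1 - pn - pa)

print(cum_natural, cum_agi, cum_agi / cum_natural)
```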
More importantly, I don’t think this statistic should have any motivating force.
Dying young is unlikely, even granting your assumptions about AGI.
Conditional on the unlikely event of dying young, you are (of course) more likely to have died in one of the ways young people tend to die when they do.
So if you die young, your cause of death is unusually likely to be “AGI,” or “randomly hit by a bus,” as opposed to, say, “Alzheimer’s disease.” But why does this matter?
The same reasoning could be used to produce a surprising-looking statistic about dying-by-bus vs. dying-by-Alzheimer’s, but that statistic should not motivate you to care more about buses and less about Alzheimer’s. Likewise, your statistic should not motivate you to care more about dying-by-AGI.
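To see why such a statistic is unsurprising, here is a toy calculation with made-up numbers (none of these figures come from the spreadsheet):

```python
# Made-up numbers purely to illustrate the conditioning effect.
p_die_young     = 0.03    # P(die before ~2052), unconditionally small
p_bus_and_young = 0.006   # P(die young AND the cause is a bus-style accident)
p_alz_and_young = 0.0003  # P(die young AND the cause is Alzheimer's)

# Conditional on the unlikely event of dying young, accident-like causes dominate...
print(p_bus_and_young / p_die_young)   # 0.2
print(p_alz_and_young / p_die_young)   # 0.01
# ...even though Alzheimer's is far more likely as a lifetime cause of death.
# The conditional ratio looks dramatic without saying anything about how much
# each risk should matter to you overall.
```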
Another phrasing: your calculation would be appropriate (e.g. for computing expected utility) if you placed no value on your life after 2052, while placing constant value on your life from now until 2052. But these are (presumably) not your true preferences.
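Spelled out (my formalization, not something from the post): comparing cumulative death probabilities up to 2052 corresponds to an expected utility of roughly the form

$$\mathrm{EU} \;=\; \sum_{y=\text{now}}^{\infty} v_y \, P(\text{alive in year } y), \qquad v_y = \begin{cases} 1 & \text{now} \le y \le 2052,\\ 0 & y > 2052,\end{cases}$$

i.e. every pre-2052 year of life valued equally and every post-2052 year valued at zero.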
Re cumulative probability calculations, I just copied the non-cumulative probabilities column from Ajeya Cotra’s spreadsheet, where she defines it as the difference between successive cumulative probabilities (I haven’t dug deeply enough to know whether she calculates cumulative probabilities correctly). Either way, it makes fairly little difference, given how small the numbers are.
Re your second point, I basically agree that you should not work on AI Safety from a personal expected utility standpoint, as I address in the caveats. My main crux for this is just that the marginal impact of any one person is minuscule. Though I do think that dying young is significantly worse than dying old, just in terms of QALY loss: if I avoid dying of Alzheimer's, something will kill me soon after, but if I avoid dying in a bus accident today, I probably have a good 60 years left. I haven't run the numbers, but I expect that AI risk notably reduces the life expectancy of a young person today.
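A rough back-of-the-envelope version of that "haven't run the numbers" estimate might look like the sketch below; all inputs (the flat marginal probabilities, the crude remaining-life proxy) are placeholders I've made up, not figures from the post or either spreadsheet.

```python
# Rough sketch: expected life-years lost to AI risk for a young person today.
# All numbers are placeholders, not taken from the post or either spreadsheet.

years = range(2025, 2055)
p_agi = 0.004   # illustrative flat marginal probability of dying from AGI each year
p_nat = 0.001   # illustrative flat marginal probability of natural death each year

def remaining_life(year, assumed_death_year=2100):
    # Crude proxy for remaining life expectancy if you survive to `year`.
    return max(0, assumed_death_year - year)

alive = 1.0
expected_years_lost = 0.0
for y in years:
    # Weight each year's AGI-death probability by the chance of reaching that year
    # alive, then by the life-years that death would cost.
    expected_years_lost += alive * p_agi * remaining_life(y)
    alive *= (1 - p_agi - p_nat)

print(f"Expected life-years lost to AI risk (toy inputs): {expected_years_lost:.1f}")
```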
My goal was just to demonstrate that AI Safety is a real and pressing problem for people alive today, and that discussion around longtermism elides this, in a way that I think is misleading and harmful. And I think ‘most likely reason for me to die young’ is an emotionally visceral way to demonstrate that. The underlying point is just kind of obvious if you buy the claims in the reports, and so my goal here is not to give a logical argument for it, just to try driving that point home in a different way.
Note that if AI risk doesn’t kill you, but you survive to see AGI plus a few years, then you probably get to live however long you want, at much higher quality, so the QALY loss from AI risk in this scenario is not bounded by the no-AGI figure.
So then it is a long-termist cause, isn’t it? It’s something that some people (long-termists) want to collaborate on, because it’s worth the effort, and that some people don’t. I mean, there can be other reasons to work on it, like wanting your grandchildren to exist, but still.
I think the point was that it’s a cause you don’t have to be a longtermist in order to care about. Saying it’s a “longtermist cause” can be interpreted either as saying that there are strong reasons for caring about it if you’re a longtermist, or that there are not strong reasons for caring about it if you’re not a longtermist. OP is disagreeing with the second of these (i.e. OP thinks there are strong reasons for caring about AI risk completely apart from longtermism).
The whole point of EA is to be effective by analyzing the likely effects of actions. It’s in the name. OP writes:
Is this enough to justify working on AI X-risk from a purely selfish perspective?
Probably not—in the same way that it’s not selfish to work on climate change. The effect any one person can have on the issue is tiny, even if the magnitude that it affects any individual is fairly high.
But this does help it appeal to my deontological/virtue ethics side [...]
I don’t think one shouldn’t follow one’s virtue ethics, but I note that deontology and virtue ethics, on a consequentialist view, are good for situations where you don’t have clear models of things or the ability to compare possible actions. E.g. you’re supposed not to murder people because you should know perfectly well that people who conclude they should murder are empirically mistaken; so you should recognize that you don’t actually have a clear analysis of the situation. So as I said, there are lots of reasons, such as virtue ethics, to want to work on AI risk. But the OP explicitly mentioned “longtermist cause” in the context of introducing AI risk as an EA cause, and in terms of the consequentialist reasoning, longtermism is highly relevant! If you cared about your friends and family in addition to yourself, but didn’t care about your hypothetical future great-grandchildren and didn’t believe that your friends and family have a major stake in the long future, then it still wouldn’t be appealing to work on, right?
If by “virtue ethics” the OP means “because I also care about other people”, to me that seems like a consequentialist thing, and it might be useful for the OP to know that their behavior is actually consequentialist!
To be clear, I work on AI Safety for consequentialist reasons, and am aware that it seems overwhelmingly sensible from a longtermist perspective. I was trying to make the point that it also makes sense from a bunch of other perspectives, including perspectives that feed into my motivation system better. It would still be worth working on even if this weren’t the case, but I think it’s a point worth making.