I’m pretty concerned; I’m trying to prevent the AI catastrophe that will likely kill me from happening.
On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly. Put another way, I think it makes sense to focus on preventing existential risks, but not as a means of preventing one’s own death.
One optimistic explanation is that rationalists care about AI risk because it’s an altruistic pursuit rather than a selfish one. That’s one possible answer to OP’s question.
I decide both my actions and, to varying extents, the actions of people like me.
On a gut level, I also refuse to live in a world where people like me do nothing about AI risk for the reason you give (low expected individual impact), because that feels cowardly. (To be clear, this is a rebuke of that reason, not of you.)
A high enough P(death from AI) screens off the benefits of many other interventions. If I thought myself 90% likely to die to AI before age 50, then I wouldn’t care much about living to 90 instead of 80.
On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly.
I think this depends a lot on (1) time discounting and (2) whether you think there will be anything like impact certificates / rewards for helping in the future. That is, it may be the case that increasing the chance of a positive singularity by one in a million is worth more than your natural lifespan in EV terms (while, of course, mattering very little under most discount rates). And if you think the existence of Earth is currently worth something like 2 quadrillion dollars (annual world GDP * 20), you can increase the probability of survival by a millionth, and you’ll be compensated something like a thousandth of the value you provided, then you’re looking at about $2M in present value.
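The arithmetic above can be checked in a few lines. All figures here are the comment’s own stipulated assumptions (a ~$100T annual world GDP, a one-in-a-million probability shift, a one-in-a-thousand compensation rate), not established facts:

```python
# Sketch of the expected-value argument, using the comment's assumed numbers.
world_gdp = 100e12                  # assumed annual world GDP, ~$100 trillion
earth_value = world_gdp * 20        # "worth like 2 quadrillion dollars"
delta_p = 1e-6                      # assumed increase in survival probability
reward_fraction = 1e-3              # assumed share of created value you capture

value_provided = earth_value * delta_p            # $2 billion of expected value
present_value = value_provided * reward_fraction  # $2 million compensated

print(f"value provided: ${value_provided:,.0f}")   # → value provided: $2,000,000,000
print(f"present value:  ${present_value:,.0f}")    # → present value:  $2,000,000
```

Note how sensitive the conclusion is: each of the three assumed factors scales the answer linearly, so a 10x disagreement about any one of them moves the result by 10x.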