Speculation here, but if we grant your premise, then the answer to your question might be something like:
Rationalists largely come from engineering backgrounds. Rightly or wrongly, AI is mostly framed in an engineering context, while mortality is mostly framed as a problem for biologists and medical doctors.
That being said, I think it's really important to suss out whether the premise of your question is correct. If it is, and if the signals we are getting about AI risk organizations having almost too much cash are accurate, then we should be directing some portion of our funding to organizations like SENS instead of AI risk.