One way of testing this is to see whether people are willing to discuss existential threats that cannot be solved by giving them money. Such comments do exist (see, for example, Stephen Hawking’s comments about the danger of aliens). It is, however, interesting to note that he has made similar remarks about the threat of AI (see e.g. here). I’m not sure whether such evaluations are relevant.
Also, I don’t think it follows that people like Yudkowsky and Hellman necessarily decide to study the existential risks they do because they have a higher-than-average estimate of the threats in question. They may just have internalized the threats more. Most humans simply don’t internalize existential risks in a way that alters their actions, even if they are willing to acknowledge high probabilities of catastrophe.
An attitude of “faster” might help a little to deal with the threat from aliens.
Our actions can probably affect the issue—at least a little—so money might help.
Hawking’s comments are pretty transparently more about publicity than fundraising, though.
I’d prefer that humanity choose to cooperate with aliens if we are in the stronger position. But I agree that we shouldn’t expect them to do the same, and that this does argue for the general importance of developing technology faster. (On the other hand, intelligent life seems to be really rare, so trying to outrace others might be a bad idea if there isn’t much else out there, or if the reason there is so little is some future filtration event.)