There are maybe two or three people in the entire world who spend only the bare minimum possible on themselves and contribute everything else to a rationally effective charity. They have an excuse for not signing up. No one else does.
I guess I agree that only the specified people can be said to have made consistently rational decisions when it comes to allocating money between benefiting themselves and benefiting others (at least among those who know something about the issues). I don’t think this implies that all but these people should sign up for cryonics. General point: [Your actions cannot be described as motivated by a coherent utility function unless you do A] does not imply [you ought to do A].
Simple example: Tom cares about the welfare of others as much as his own, but biases lead him to consistently act as if he cared about his welfare 1,000 times as much as the welfare of others. Tom could overcome these biases, but he has not in the past. In a moment when he is unaffected by these biases, Tom sacrifices his life to save the lives of 900 other people.
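To make the arithmetic explicit (my own gloss on the example, counting each life as one unit of welfare):

\[
\underbrace{900}_{\text{lives saved}} \;<\; \underbrace{1{,}000}_{\text{biased weight on his own life}}
\qquad \text{vs.} \qquad
\underbrace{900}_{\text{lives saved}} \;>\; \underbrace{1}_{\text{equal weight on his own life}}
\]

So the sacrifice is inconsistent with the utility function that best fits Tom's past behavior, yet it would be odd to conclude that he ought not to have made it, which is all the general point requires.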
[All that said, I take your point that it may be rational for you to advocate signing up for cryonics, since cryonics money and charity money may not be substitutes.]
I’m not exactly an SIAI true believer, but I think they might be right. Here are some questions I’ve thought about that might help you out. I think it would help others out if you told us exactly where you’d be interested in getting off the boat.
How much of your energy are you willing to spend on benefiting others, if the expected benefits to others will be very great? (The amount needn’t be great for you to support SIAI.)
Are you willing to pursue a diversified altruistic strategy if it saves fewer expected lives (as diversification almost always does for donors giving less than $1 million or so)?
Do you think mitigating x-risk is more important than giving to down-to-earth charities (GiveWell style)? (This will largely turn on how you feel about supporting causes with key probabilities that are tough to estimate, and how you feel about low-probability, high-expected-utility prospects.)
Do you think that trying to negotiate a positive singularity is the best way to mitigate x-risk?
Is any known organization likely to do better than SIAI, on the margin, at negotiating a positive singularity (i.e., at decreasing x-risk)?
Are you likely to find an organization that beats SIAI in the future?
Judging from your post, you seem most skeptical about putting your efforts into causes whose probability of success is very difficult to estimate, and perhaps low.