Isn’t the central one “you want to spend money to make a better long term future more likely, e.g. by donating it to fund AI safety work now”?
Fair enough if you think the marginal value of money is negligible, but that isn’t exactly obvious.
That’s another main possibility. I don’t buy the reasoning in general, though: integrity is just super valuable. (Separately, I’m aware of projects that are very important and neglected (legibly so) without being funded, so I don’t overall believe that there are a bunch of people strategically capitulating to anti-integrity systems in order to fund key projects.) Anyway, my main interest here is to say that there are real, large-scale, ongoing problems with the social world which increase X-risk; it would be good for some people to think clearly about that; and it’s not good to be satisfied with false, vague, or superficial stories about what’s happening.