Thanks for the reply. When I wrote “Many people would have more useful things to say about this than I do”, you were one of the people I was thinking of.
> AI Impacts wants to think about AI sentience and OP cannot fund orgs that do that kind of work
Related to this, I think GW/OP has always been too unwilling to fund weird causes, though it has generally gotten better over time: originally recommending US charities over global poverty because global poverty seemed too weird; taking years to remove its recommendations for US charities that were ~100x less effective than its global poverty recs; then taking years more to start funding animal welfare and x-risk; and still not funding weirder stuff like wild animal welfare and AI sentience. I’ve criticized them for this in the past, but I liked that they were moving in the right direction. Now I get the sense that they’ve recently gotten worse on AI safety (and on weird causes in general).