My statement [..] was in support of “better to help SI grow and improve rather than start a new, similar AI risk reduction organization”, not in support of “SI is capable of mitigating x-risk given money.”
Ah, OK. I misunderstood that; thanks for the clarification.
For what it’s worth, I think the case for “support SI >> start a new organization on a similar model” is pretty compelling.
And, yes, the “How to Purchase AI Risk Reduction” series is an excellent step toward making SI’s current and planned activities, and how they relate to your mission, more concrete and transparent. Yay you!