My statement “SI has successfully concentrated lots of attention, donor support, and human capital [and also] has learned many lessons [and] has lots of experience with [these unusual, complicated] issues” was in support of “better to help SI grow and improve rather than start a new, similar AI risk reduction organization”, not in support of “SI is capable of mitigating x-risk given money.”
However, if I didn’t also think SI was capable of reducing x-risk given money, then I would leave SI and go do something else, and indeed will do so in the future if I come to believe that SI is no longer capable of reducing x-risk given money. How to Purchase AI Risk Reduction is a list of things that (1) SI is currently doing to reduce AI risk, or that (2) SI could do almost immediately (to reduce AI risk) if it had sufficient funding.
My statement [..] was in support of “better to help SI grow and improve rather than start a new, similar AI risk reduction organization”, not in support of “SI is capable of mitigating x-risk given money.”
Ah, OK. I misunderstood that; thanks for the clarification. For what it’s worth, I think the case for “support SI >> start a new organization on a similar model” is pretty compelling.
And, yes, the “How to Purchase AI Risk Reduction” series is an excellent step in the direction of making SI’s current and planned activities, and how they relate to your mission, more concrete and transparent. Yay you!
Thank you for understanding. :)