I think the main problem is that society-at-large doesn’t significantly value AI safety research, and hence that the funding is severely constrained. I’d be surprised if the consideration you describe in the last paragraph plays a significant role.
I think it’s more of a side effect of the FTX disaster: people are no longer willing to donate to EA, which means AI safety was hit particularly hard.
I suspect a (potentially much) bigger factor than ‘people are no longer willing to donate to EA’ is OpenPhil’s reluctance to spend more, and faster, on AI risk mitigation. I don’t know how much this has to do with FTX; it might have more to do with differences of opinion on timelines, conservatism, incompetence (especially when it comes to scaling up grantmaking capacity), or other less transparent internal factors.
(To be clear, I think OpenPhil is still doing much, much better than the vast majority of actors, but I’d bet that by the end of the decade their not having moved faster on AI risk mitigation will look like a huge missed opportunity.)