So why not that funding model? Mostly a money thing.
*seems like mostly a funding deployment issue, probably due to some structural problems, AFAICT, though I don't have any great inside info (within the traditional AI safety funding space; the rest of the world seems much less on the ball than the traditional AI safety funding space).
I don’t understand what you mean. Do you mean there is lots of potential funding for AI alignment in e.g. governments, but that funding is only going to university researchers?
No, I mean that EA + AI safety funders probably do have a lot of money earmarked for AI risk mitigation, but they don’t seem able/willing to deploy it fast enough (at least according to my timelines, though probably also according to many of theirs).
Governments mostly just don’t seem to be on the ball at all w.r.t. AI, even despite recent progress (e.g. the AI safety summits, the establishment of AISIs, etc.).