FWIW, I am also very worried about this and it feels pretty plausible to me. I don’t have any great reassurances, besides me thinking about this a lot and trying somewhat hard to counteract it in my own grant evaluations, but I only do a small minority of grant evaluations on the LTFF these days.
I do want to clarify that I think it’s unlikely that AI Safety is a front for advancing AI capabilities. I think the framing that’s more plausibly true is that AI Safety is a memespace that has undergone regulatory capture by capability companies and people in the EA network to primarily build out their own influence over the world.
Their worldviews are of course heavily influenced by concerns about the future of humanity and how it will interact with AI, but in a way that primarily leverages symmetric weapons and involves little accountability or public reasoning about their risk models. Those risk models seem substantially skewed by the fact that people are making billions of dollars off of advances in AI capabilities, and are substantially worried that people they don’t like will get to control AI.
I do also think this is just one framing, and there are a lot of other things going on.