If I had a magic policy wand, I would probably wish for something like Anthropic’s RSPs as an early warning system, combined with tons of micro-grants to anyone willing to work with current SOTA models in an empirically guided way.
Do you have a way to do that that doesn’t route through compute governance?
I don’t necessarily disagree with these things (I don’t have a super strong opinion), but the thing that seems very likely to me is that we need more time to make lots of bets and see research play out. The point of pauses, and of compute governance, is to get time for those bets to play out. (I think it’s a plausibly reasonable position that “shut it all down” would be counterproductive, but the other things you listed frustration with seem completely compatible with everything you said.)
The PauseAI people have been trying to pause since GPT-2. It’s not “buying time” if you freeze research at some state where it’s impossible to make progress. It’s also not “buying time” if you ban open-sourcing models (like Llama 4) that are obviously not existentially dangerous and have been a huge boon for research.
Obviously, once we have genuinely dangerous models (e.g. capable of building nuclear weapons undetected) they will need to be restricted, but the actual limits being proposed are arbitrary and way too low.
Limits need to be based on contact with reality, which means engineers making informed decisions, not politicians making arbitrary ones.