The PauseAI people have been trying to pause since GPT-2. It's not "buying time" if you freeze research at a state where it's impossible to make progress. It's also not "buying time" if you ban open-sourcing models (like Llama 4) that are obviously not existentially dangerous and have been a huge boon for research.
Obviously, once we have genuinely dangerous models (e.g. ones capable of helping build nuclear weapons undetected), they will need to be restricted. But the limits actually being proposed are arbitrary and far too low.
Limits need to be grounded in contact with reality, which means engineers making informed decisions, not politicians making arbitrary ones.