Quite the opposite, I think superintelligence is going to be substantially different from other complex systems, so regulating GPT-5 as if it were superintelligence is a non-starter for me.
What sort of things do you expect to work better?
What we really need at the moment are smart people deeply engaging with the details of how current models work. Among the large labs, Anthropic has probably done the best (e.g. Golden Gate Claude). But I also think there’s a ton of value coming from people like Janus who are genuinely curious about how these models behave.
If I had a magic policy wand I would probably wish for something like Anthropic’s RSPs as an early warning system, combined with tons of micro grants to anyone willing to work with current SOTA models in an empirically guided way. Given that the Transformer architecture seems inherently myopic/harmless, I also think we should open source much more than we have (certainly up to and including GPT-5).
The fact that we don’t know how to solve alignment means that we don’t know where a solution will come from, so we should be making as many bets as possible (especially while the technology is still passively safe).
I’m much happier that someone is building e.g. ChaosGPT now rather than in 3-5 years, when we will have wide-scale deployment of potentially lethal robots in every home/street in America.
Do you have a way to do that that doesn’t route through compute governance?
I don’t necessarily disagree with these things (I don’t have a super strong opinion), but what seems very likely to me is that we need more time to make lots of bets and see research play out. The point of pauses, and of compute governance, is to buy time for those bets to play out. (I think it’s a plausibly reasonable position that “shut it all down” would be counterproductive, but the other things you expressed frustration with seem completely compatible with everything you said.)
The PauseAI people have been trying to pause since GPT-2. It’s not “buying time” if you freeze research at some state where it’s impossible to make progress. It’s also not “buying time” if you ban open-sourcing models (like Llama 4) that are obviously not existentially dangerous and have been a huge boon for research.
Obviously, once we have genuinely dangerous models (e.g. ones capable of building nuclear weapons undetected), they will need to be restricted, but the actual limits being proposed are arbitrary and far too low.
Limits need to be based on contact with reality, which means engineers making informed decisions, not politicians making arbitrary ones.