Concrete suggestion: OpenAI should allow the Safety Advisory Group Chair and the head of the Preparedness Team to have “veto power” on model development and deployment decisions.
Quite possibly a good idea, but I think it’s less obvious than it seems at first glance: Remember that a position’s having veto power will tend to have a large impact on selection for that position.
The comparison isn’t [x with veto power] vs [x without veto power]. It’s [x with veto power] vs [y without veto power]. If y would tend to have deeper understanding, more independence, or more caution than x, it’s not obvious that giving the position veto power helps. Better to have someone who’ll spot problems but has to rely on persuasion than someone who can veto but spots no problems.