“One problem is that due to algorithmic improvements, any FLOP threshold we set now is going to be less effective at reducing risk to acceptable levels in the future.”
And this goes doubly so if we explicitly incentivize low-FLOP models: once models are meaningfully FLOP-limited by law, FLOP-optimization will become a major priority for AI researchers.
This reminds me of Goodhart’s Law, which states “when a measure becomes a target, it ceases to be a good measure.”
I.e., if FLOPs are supposed to be a measure of an AI’s danger, and we then limit/target FLOPs in order to limit AGI danger, then that targeting itself interferes with, or outright nullifies, the effectiveness of FLOPs as a measure of danger.
It is (unfortunately) self-defeating. At a minimum, you need to regularly re-evaluate the connection between FLOPs and danger: it will be a moving target. Is our regulatory system up to that task?
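To make the “moving target” concrete, here is a toy calculation (a minimal sketch; the cap, doubling time, and time horizon are purely illustrative assumptions, not empirical estimates): if algorithmic efficiency doubles every D months, then a fixed FLOP cap buys an ever-larger amount of “effective compute,” i.e., capability that at year zero would have required far more raw FLOPs.

```python
# Toy model: how a fixed FLOP threshold "drifts" as algorithms improve.
# All numbers below are illustrative assumptions, not empirical estimates.

FLOP_CAP = 1e25            # hypothetical regulatory threshold (training FLOPs)
DOUBLING_MONTHS = 12       # assumed doubling time of algorithmic efficiency

def effective_compute(months_elapsed: float) -> float:
    """Raw FLOPs under the cap, scaled by assumed algorithmic-efficiency gains."""
    return FLOP_CAP * 2 ** (months_elapsed / DOUBLING_MONTHS)

for year in range(6):
    months = 12 * year
    print(f"year {year}: cap-compliant model ~ {effective_compute(months):.2e} "
          f"'year-0 equivalent' FLOPs")
```

Under these assumptions, a model trained right at the cap five years out behaves like a year-zero model trained with ~32x the compute, even though the regulation never changed.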