This is the threshold at which the government has the ability to say no, and it is deliberately set well before catastrophe.
There are disadvantages to giving the government “the ability to say no” to models used by thousands of people. There are disadvantages even in a frame where AI-takeover is the only thing you care about!
For instance, if you give the government too expansive a mandate, such that it must approve many models “well before the threshold”, then it will have thousands of requests thrown at it regularly, and it could either (1) try to scrutinize each one, and become an invasive thorn everyone despises and which will be eliminated as soon as possible, because 99.99% of what it concerns itself with will have (evidently to everyone) nothing to do with x-risk, or (2) become a rubber-stamp factory that just lets all these thousands of requests through. (It could even do both simultaneously, like the FDA, which lets through probably useless things while prohibiting safe and useful things! This is likely; the government is not going to deliberate over which models are good like an intelligent person; it’s going to just follow procedure.)
(I don’t think LLMs of the scale you’re concerned with pose an AI-takeover risk. I have yet to read a takeover story about LLMs—of any size whatsoever—which makes sense to me. If there’s a story you think makes sense, by all means please give a link.)
But—I think a frame where AI takeover is the only thing you care about is manifestly the wrong frame for someone concerned with policy. Like—if you care about just one thing in a startup, and ignore other harms, you go out of business; but if you care about just one thing in policy, and ignore other things… you… can just pass a policy, have it in place, and then cause a disaster, because law doesn’t give you feedback. The feedback loop is about 10,000% worse, and so you need to be about 10,000% more paranoid that your ostensibly good actions are actually good.
And I don’t see evidence you’re seeking out possible harms of your proposed actions. Your website doesn’t talk about them; you don’t talk about possible bad effects and how you’d mitigate them—other than, as Quintin points out, in a basically factually incorrect manner.