Even with its flaws, this study is sufficient evidence to justify enacting temporary regulation while we work to develop more robust evaluations.
Note that if I thought regulations would be temporary, or had a chance of loosening over time after evals found that the risks from models at compute size X would not be catastrophic, I would be much less worried about all the things I’m worried about re: open source, power, and banning open source.
But I just don’t think that most regulations will be temporary. A large number of people want to move compute limits down over time. Some orgs (like PauseAI or anything Leahy touches) want much lower limits than are implied by the EO. And of course regulators in general are extremely risk averse, and the trend is almost always for regulations to increase.
If the AI safety movement could credibly promise in some way that they would actively push for laws whose limits rise over time in the default case, I’d be less worried. But given (1) the conflict on this issue within AI safety itself and (2) the default way that regulations work, I cannot make myself believe that “temporary regulation” is ever going to happen.
Oh, I don’t think it actually would end up being temporary, because I expect with high probability that the empirical results of more robust evaluations would confirm that open-source AI is indeed dangerous. I meant temporary in the sense that the initial restrictions might either (a) have a time limit or (b) be subject to re-evaluation at a specified point.