So regulatory capture is a thing that can happen. But I don’t think I got a complete picture of why you find oversight of dominant companies scary. You mentioned two possible mechanisms: rubber-stamping things, and enforcing the sharing of data. It’s not clear to me that either of these is obviously contrary to the goal of slowing things down. Take sharing of data (I’m imagining you mean with smaller competitors, as in competition regulation): data isn’t really useful alone—you need compute and technical capability to use it. More likely would be forced sharing of the models themselves, but that isn’t the granting of an ongoing capability, although a shared model could still be misused. And mandated data sharing seems less likely under regulatory capture anyway. As for the rubber-stamping: maybe sometimes something would get stamped that shouldn’t have been, but surely some stamping process is better than none? It at least slows things down. I don’t think wrongly receiving a stamp makes an AI system more likely to go haywire—if it was going to, it would anyway. AI labs don’t just think, “hm, this model doesn’t have any stamp, let me check its safety.” Maybe you think companies will do less self-regulation if external regulation happens? I don’t think that’s true.