Interesting idea. I think a problem not brought up is that of https://en.wikipedia.org/wiki/Regulatory_capture . One of the reasons the FDA is so effective (at impeding progress) is that its stated mission is different from the interests of its actual supporters and constituents. If you see it through the lens of “ensuring monopoly and profit for established members of the drug industry”, it’s amazingly good at what it does.
The concept applied to AI oversight gets a lot scarier. It won’t take long for AI creators (assisted by their fledgling tool AIs) to take over the regulator so it’s actually helping them (and probably slowing down their competitors, which might or might not be valuable to us, the victims), either by giving them cover for not-obviously-dangerous-so-now-officially-permitted activities, or by enforced sharing of data, or by other things that sound wise for a regulatory agency but are contrary to OUR goal of slowing things down.
So regulatory capture is a thing that can happen. But I don’t think I got a complete picture of how oversight captured by dominant companies would be scary. You mentioned two possible mechanisms: rubber-stamping things, and enforced sharing of data. It’s not clear to me that either of these is obviously contrary to the goal of slowing things down. Take sharing of data (I’m imagining you mean sharing with smaller competitors, as in competition regulation): data isn’t really useful on its own; you need compute and technical capability to use it. More likely would be forced sharing of the models themselves, but that isn’t the granting of an ongoing capability, although it could still be misused. And mandating sharing of data is less likely under regulatory capture anyway. As for the rubber stamping: maybe sometimes something would be stamped that shouldn’t have been, but surely some stamping process is better than none? It at least slows things down. I don’t think wrongly receiving a stamp makes an AI system more likely to go haywire; if it was going to, it would anyway. AI labs don’t just think, “hm, this model doesn’t have a stamp, let me check its safety.” Maybe you think companies will do less self-regulation if external regulation happens? I don’t think that’s true.