I’ve been following Zvi’s coverage of SB 1047, and I don’t think I have a good understanding of why large companies like OpenAI and Meta are so strongly against it. I don’t understand why they would spend so much political capital on it, even getting Nancy Pelosi to oppose it. I can’t find a good source describing their opposition. Obviously we can’t trust what the companies themselves are saying, because, as Zvi has pointed out, they’re mostly distorting the facts or making bogus claims.
But beyond the public PR, there probably still are actual reasons why they oppose it so strongly. My assumption is that:
- SB 1047 will force large AI companies to steer model development and “safety certification” toward whatever sounds good to non-technical readers of media headlines, so model development would acquire a large PR-driven but technically useless component.
- SB 1047 will be a compliance nightmare: large companies will have to add significant friction to model development and “safety” work, for little real-world benefit over what they think they would be doing anyway.
Anyone have a source for a good analysis of why large companies are actually against this?
My expectation is that it’s the same reason there was so much outcry about the completely toothless Executive Order. To quote myself:
The EO might be a bit more important than it seems at a glance. My sense is that the main thing it does isn’t its object-level demands, but the fact that it introduces concepts like “AI models” and “model weights” and “compute thresholds” and “datacenters suitable for training runs” and so on into the framework of the legislation.
That it doesn’t do much with these variables is a secondary matter. What’s important is that it defines them at all, which should considerably lower the bar for defining further functions on these variables, i.e., new laws and regulations.
I think our circles may be greatly underappreciating this factor, accustomed as we are to thinking in such terms. But to me, at least, it seems a bit unreal to see actual government documents talking about “foundation models” in terms of how many “floating-point operations” are used to “train” them.
Coming up with a new fire safety standard for restaurants and passing it is relatively easy if you already have a lot of legislation talking about “restaurants” — if “a restaurant” is a familiar concept to the politicians, if there’s extant infrastructure for tracking the existence of “restaurants” nationwide, etc. It is much harder if your new standard needs to front-load the explanation of what the hell a “restaurant” even is.
By analogy, it’s not unlike academic publishing, where any conclusion (even an extremely obvious one) that isn’t yet part of some paper can’t be referred to.
Similarly, SB 1047 would’ve introduced a foundational framework on which other regulations could’ve later been built. Keeping the-government-as-a-system unable to even comprehend the potential harms AGI Labs could unleash is a goal in itself.
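To make “functions on these variables” concrete: once “training compute in FLOPs” is a defined legal quantity, a threshold rule becomes about as simple as the sketch below. The 10^26 FLOP figure is the reporting threshold the Executive Order actually uses; the model size and token count are made-up illustrative numbers, and the 6 × parameters × tokens estimate is just the standard back-of-the-envelope approximation for dense transformers, not anything specified in the EO or the bill.

```python
# Minimal sketch: a compute-threshold rule, expressible only because
# "training FLOPs" is now a defined quantity in the regulatory framework.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Standard back-of-the-envelope estimate for dense transformers:
    total training compute ~= 6 * parameters * training tokens."""
    return 6 * n_params * n_tokens

EO_REPORTING_THRESHOLD = 1e26  # FLOPs; the Executive Order's reporting threshold

# Illustrative (made-up) model: ~70B parameters trained on ~15T tokens.
flops = training_flops(n_params=7e10, n_tokens=1.5e13)
print(f"~{flops:.1e} FLOPs; must report: {flops > EO_REPORTING_THRESHOLD}")
# ~6.3e+24 FLOPs; must report: False
```

The point isn’t the arithmetic, which is trivial; it’s that writing this rule into law is only cheap once the legislature already has definitions for its inputs.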