My expectation is that it’s the same reason there was so much outcry about the completely toothless Executive Order. To quote myself:
The EO might be a bit more important than it seems at a glance. My sense is that the main thing it does isn’t its object-level demands, but the fact that it introduces concepts like “AI models” and “model weights” and “compute thresholds” and “datacenters suitable for training runs” and so on into the legislative framework.
That it doesn’t do much with these variables is a secondary matter. What’s important is that it defines them at all, which should considerably lower the bar for defining further functions on these variables, i.e., new laws and regulations.
I think our circles may be greatly underappreciating this factor, accustomed as we are to thinking in such terms. But to me, at least, it seems a bit unreal to see actual government documents talking about “foundation models” in terms of how many “floating-point operations” are used to “train” them.
Coming up with a new fire safety standard for restaurants and passing it is relatively easy if you already have a lot of legislation talking about “restaurants” — if “a restaurant” is a familiar concept to the politicians, if there’s extant infrastructure for tracking the existence of “restaurants” nationwide, etc. It is much harder if your new standard needs to front-load the explanation of what the hell a “restaurant” even is.
By analogy, it’s not unlike academic publishing, where any conclusion (even an extremely obvious one) that isn’t yet part of some published paper can’t be cited.
Similarly, SB 1047 would’ve introduced a foundational framework on which other regulations could’ve later been built. Keeping the-government-as-a-system unable to even comprehend the potential harms AGI Labs could unleash is a goal in itself.