To what extent do you think the $100M threshold will weaken the bill “in practice”? I feel like “severely weakened” might overstate the degree of weakening; I would probably say “mildly weakened.”
I think the logic along the lines of “the frontier models are going to be the ones where the dangerous capabilities are discovered first, so maybe it seems fine (for now) to exclude non-frontier models” makes some amount of sense.
In the long run, this approach fails because you might be able to reach dangerous capabilities with <$100M. But in the short run, it feels like the bill covers the most relevant actors (Microsoft, Meta, Google, OpenAI, Anthropic).
Maybe I always thought the point of the bill was to cover frontier AI systems (which are still covered) rather than any system that could have hazardous capabilities, so I see the $100M threshold as more of a “compromise consistent with the spirit of the bill” than a “substantial weakening of the bill.” What do you think?