I think one alternative here that isn't just "trust AI companies" is "wait until we have a good Danger Eval, and then pass another piece of legislation that specifically focuses on that, rather than hoping that the bureaucratic/political process shakes out with a good set of SSP industry standards."
I don't know that that's the right call, but I don't think it's a crazy position from a safety perspective.