It feels to me like one of the biggest changes has been something like “governments seem much more concerned about AI risks than they were last year, and this shift happened somewhat suddenly and unexpectedly”.
A more subjective take is something like “the major labs do not seem to be pushing for policies that would meaningfully curb race dynamics. Instead, they seem to be rallying around voluntary commitments: running dangerous-capability evaluations and applying safeguards, where lab leadership determines whether those safeguards are sufficient.” (I think the steelman of this is “but this could help us get binding legislation in which, e.g., governments or third-party auditors end up evaluating safety cases,” but in the absence of any public calls for this, my default assumption is that labs would oppose such a scheme.)
It’s somewhat interesting to me that no one mentioned these (though in fairness the sample size is pretty low). I wonder if part of this reflects the fact that Constellation is geographically/culturally in the Bay Area (whereas the major centers of “government governance” are DC and London), and also that Constellation has (I think?) maintained more of a “work with labs, maintain good relationships with labs, and focus on plans that could inform labs” vibe.
I agree that that’s the most important change, and that there’s reason to think people in Constellation/the Bay Area in general might systematically under-attend to policy developments. But I think the most likely explanation for the responses concentrating on other things is that I explicitly asked about technical developments I missed because I wasn’t in the Bay. The respondents also generally have the additional context that I work in policy and live in DC, so responses centered on policy change would have been off-target.