Perhaps this isn’t in scope, but if I were designing a reading list on “lab governance”, I would try to include at least 1-2 perspectives that highlight the limitations of lab governance, criticisms of focusing too much on lab governance, etc.
Specific examples might include criticisms of RSPs, Kelsey’s coverage of the OpenAI NDA stuff, alleged instances of labs or lab CEOs misleading the public/policymakers, and perspectives from folks like Tegmark and Leahy (who generally see a lot of lab governance as safety-washing and probably have less trust in lab CEOs than the median AIS person).
(Perhaps such perspectives get covered in other units, but part of me still feels like it’s pretty important for a lab governance reading list to include some of these more “fundamental” critiques of lab governance, especially insofar as, broadly speaking, I think a lot of AIS folks were more optimistic about lab governance 1-3 years ago than they are now.)
Isn’t much of that criticism also a form of lab governance? I’ve always understood the field of “lab governance” as something like “analysing and suggesting improvements for practices, policies, and organisational structures in AI labs”. By that definition, many critiques of RSPs would count as lab governance, as could the coverage of OpenAI’s NDAs. But arguments of the sort “labs aren’t responsive to outside analyses/suggestions, dooming such analyses/suggestions” would indeed be criticisms of lab governance as a field or activity.
(ETA: Actually, I suppose there’s no reason why a piece of X research cannot critique X (the field it’s a part of). So my whole comment may be superfluous. But eh, maybe it’s worth pointing out that the stuff you propose adding can also be seen as a natural part of the field.)
Yeah, I think there’s a useful distinction between two different kinds of “critiques”:
Critique #1: I have reviewed the preparedness framework and I think the threshold for “high-risk” in the model autonomy category is too high. Here’s an alternative threshold.
Critique #2: The entire RSP/PF effort is not going to work because [they’re too vague; labs don’t want to make them more specific; they’re being used for safety-washing; labs will break or weaken the RSPs; race dynamics will force labs to break RSPs; labs cannot be trusted to make or follow RSPs that are sufficiently strong, specific, and verifiable].
I feel like critique #1 falls more neatly into “this counts as lab governance,” whereas IMO critique #2 falls more into “this is a critique of lab governance.” In practice the lines blur. For example, I think last year there was a lot more “critique #1”-style stuff, and then over time, as the list of specific object-level critiques grew, we started to see more support for things in the “critique #2” bucket.