Thanks. Briefly:
I’m not sure what the theory of change for listing such questions is.
In the context of policy advocacy, I think it’s sometimes fine/good for labs to say somewhat different things publicly vs privately. Like, if I were in charge of a lab and believed (1) the EU AI Act will almost certainly pass and (2) it has some major bugs that make my life harder without safety benefits, I might publicly say “I support (the goals of) the EU AI Act” and privately put some effort into removing those bugs, which is technically lobbying to weaken the Act.
(^I’m not claiming that particular labs did ~this rather than actually lobbying against the Act. I just think it’s messy and regulation isn’t a one-dimensional thing that you’re for or against.)
Edit: this comment was misleading and partially replied to a strawman. I agree it would be good for the labs and their leaders to publicly say some things about recommended regulation (beyond what they already do) and their lobbying. I’m nervous about trying to litigate rumors for reasons I haven’t explained.
Edit 2: based on https://corporateeurope.org/en/2023/11/byte-byte, https://time.com/6288245/openai-eu-lobbying-ai-act/, and background information, I believe that OpenAI, Microsoft, Google, and Meta privately lobbied to make the EU AI Act worse—especially by lobbying against rules for foundation models—and that this is inconsistent with OpenAI’s and Altman’s public statements.
Right now, I think one of the most credible ways for a lab to show its commitment to safety is through its engagement with governments.
I didn’t mean to imply that a lab should automatically be considered “bad” if its public advocacy and its private advocacy differ.
However, when assessing how “responsible” various actors are, I think investigating questions relating to their public comms, engagement with government, policy proposals, lobbying efforts, etc. would be valuable.
If Lab A had slightly better internal governance but Lab B had better effects on “government governance”, I would say that Lab B is more “responsible” on net.