@Zac Hatfield-Dodds do you have any thoughts on official comms from Anthropic and Anthropic’s policy team?
For example, I’m curious if you have thoughts on this anecdote– Jack Clark was asked an open-ended question by Senator Cory Booker and he told policymakers that his top policy priority was getting the government to deploy AI successfully. There was no mention of AGI, existential risks, misalignment risks, or anything along those lines, even though it would’ve been (IMO) entirely appropriate for him to bring such concerns up in response to such an open-ended question.
I was left thinking that either Jack does not care much about misalignment risks or he was not being particularly honest/transparent with policymakers. Both of these raise some concerns for me.
(Noting that I hold Anthropic’s comms and policy teams to higher standards than individual employees. I don’t have particularly strong takes on what Anthropic employees should be doing in their personal capacity– like in general I’m pretty in favor of transparency, but I get it, it’s hard and there’s a lot that you have to do. Whereas the comms and policy teams are explicitly hired/paid/empowered to do comms and policy, so I feel like it’s fair to have higher expectations of them.)
Jack Clark, speaking at the Hill & Valley Forum (source linked below):
very powerful systems [] may have national security uses or misuses. And for that I think we need to come up with tests that make sure that we don’t put technologies into the market which could—unwittingly to us—advantage someone or allow some nonstate actor to commit something harmful. Beyond that I think we can mostly rely on existing regulations and law and existing testing procedures . . . and we don’t need to create some entirely new infrastructure.
At Anthropic we discover that the more ways we find to use this technology the more ways we find it could help us. And you also need a testing and measurement regime that closely looks at whether the technology is working—and if it’s not how you fix it from a technological level, and if it continues to not work whether you need some additional regulation—but . . . I think the greatest risk is us [viz. America] not using it [viz. AI]. Private industry is making itself faster and smarter by experimenting with this technology . . . and I think if we fail to do that at the level of the nation, some other entrepreneurial nation will succeed here.
Source: Hill & Valley Forum on AI Security (May 2024):
https://www.youtube.com/live/RqxE3ub7wWA?t=13338s
https://www.youtube.com/live/RqxE3ub7wWA?t=13551