I feel like there are two things going on here:
1. Anthropic considers itself the expert on AI safety and security and believes that it can develop better SSPs than the California government.
2. Anthropic thinks that the California government is too political and does not have the expertise to effectively regulate frontier labs.
But what they propose in return seems at odds with their stated purpose and view of the future. If AGI is 2-3 years away, then the relevant governmental bodies need to be building the administrative machinery for AI safety now rather than in 2-3 years' time, when it will take another 2-3 years to stand up those organizations.
The idea that Anthropic, OpenAI, or DeepMind should get to decide, on their own, the appropriate safety and security measures for frontier models seems unrealistic. Those measures are going to end up being a set of regulations created by a government body, and Anthropic is probably better off participating in that process than trying to oppose it from the start.
I feel like some of this comes from an unrealistic view of the future, in which they don't seem to understand that as AGI approaches, they become in certain respects less influential and important, not more. As AI ceases to be a niche concern, other power structures in society will exert more influence over its operation and distribution.