Anthropic AI made the right call

I’ve seen a number of people criticize Anthropic for releasing Claude 3 Opus, with arguments along the lines of:
Anthropic said they weren’t going to push the frontier, but this release is clearly better than GPT-4 in some ways! They’re betraying their mission statement!
I think that criticism takes too narrow a view. Consider the position of investors in AI startups. If OpenAI has a monopoly on the clearly-best version of a world-changing technology, that gives them a lot of pricing power on a large market. However, if there are several groups with comparable products, investors don’t know who the winner will be, and investment gets split between them. Not only that, but if they stay peers, then there will be more competition in the future, meaning less pricing power and less profitability.
The comparison isn’t just “GPT-4 exists” vs “GPT-4 and Claude Opus exist”—it’s more like “investors give X billion dollars to OpenAI” vs “investors give X/3 billion dollars to OpenAI and Anthropic”.
Now, you could argue that “having more peer-level companies makes an agreement to stop development less likely,” but no such agreement was happening anyway, so any pause would have to be driven by government action. If Anthropic were based in a country that previously had no notable AI companies, that might be a reasonable argument, but it isn’t.
If you’re concerned about social problems from widespread deployment of LLMs, maybe you should be unhappy about more good LLMs and more competition. But if you’re concerned about ASI, especially if you’re only concerned about future developments and not LLM hacks like BabyAGI, I think you should be happy about Anthropic releasing Claude 3 Opus.