They’ve recently been hiring for a product team, in order to get more red-teaming of models and eventually have more independent revenue streams.
I think Anthropic believes that this is the most promising route to making AGI turn out well for humanity, so it’s worth taking the risk of being part of the competition and perhaps contributing to accelerating capabilities.
On a reread, I noticed that I don’t actually know what Anthropic’s strategy is. This is actually a question about a couple of things.
The first is what endpoint they’re targeting—“solve and implement alignment” is the ultimate goal, of course, but one can coherently imagine targeting something else, as with Encultured, which is explicitly not targeting “solve alignment” but a much smaller subset of what they expect will be a larger ecosystem adding up to a “solution”.
The second is what strategy they’re currently following in pursuit of that endpoint.
Some details of the strategy can be extracted from the premises it implicitly relies on, but it would be great to hear from Anthropic directly what the current strategy is, stated in a way that either rules out substantial chunks of action-space or requires very specific actions. (I think that in a very meaningful sense, a strategy is a special case of a prediction, which must constrain your expectations about your future actions.)