Anthropic is, ostensibly, an organization focused on safe and controllable AI, which makes this arms race concerning. We've already seen OpenAI go down this route; it seems to be the easy one to take. This press release sure sounds like capabilities work, not alignment or safety.
Over the past month, and reinforced every time I read something like this, I've come to firmly believe that political containment is a more realistic strategy, with a much greater chance of success, than a pure focus on alignment. Even compared with December 2022, things are accelerating dramatically: it took only a few weeks between the release of GPT-4 and the development of AutoGPT, which is crudely agentic. Capabilities research starts with a talent pool orders of magnitude larger than alignment's, and as money pours into the field at ever-growing rates (toward capabilities, of course, because that's where the money is), it will be very hard for alignment folks (whom I deeply respect) to keep pace. I believe this year is the crucial moment for persuading the general public that AI needs to be contained, and for doing so effectively, because if we use poor strategies that backfire, we may have missed our best chance.