They are working on it partly because this gives Anthropic access to state-of-the-art models to do alignment research on, but I think to an even greater extent they are doing it because it gives them a seat at the table with the other AI capabilities orgs and makes their work seem legitimate to those orgs, which lets them both help shape how AI develops and exert influence over those other orgs.
...Am I crazy, or is this discussion weirdly missing the third option of “They’re doing it because they want to build a God-AI and ‘beat the other orgs to the punch’”? That is completely distinct from signaling competence to other AGI orgs or getting yourself a “seat at the table”, and it seems odd to categorize the majority of Anthropic’s accelerating as such.