To say the obvious thing: I think if Anthropic isn't able to make even roughly meaningful predictions about AI welfare, then their core current public research agendas have failed?