I agree with all of this. It’s what I meant by “it’s up to all of us.”
It will be a signal of how things are going if in a year we still have only vague policies, or if there has been real progress in operationalizing the safety levels, detection, what the right reactions are, etc.
That’s fair, I think I misread you.
I guess our biggest differences are (i) I don’t think the takeaway depends so strongly on whether AI developers are trying to do the right thing; either way it’s up to all of us, and (ii) I think it’s already worth talking about ways in which Anthropic’s RSP is good or bad or could be better, and so I disagree with “there’s probably not much to say at this point.”