RSPs offer a potential middle ground between (a) those who think AI could be extremely dangerous and seek things like moratoriums on AI development, and (b) those who think that it’s too early to worry about capabilities with catastrophic potential. RSPs are pragmatic and threat-model driven: rather than arguing over the likelihood of future dangers, we can...
I think “extreme” was subjective and imprecise wording on my part, and I appreciate you catching this. I’ve edited the sentence to say “Instead, ARC implies that the moratorium folks are unrealistic, and tries to say they operate on an extreme end of the spectrum, on the opposite side of those who believe it’s too soon to worry about catastrophes whatsoever.”