A few other examples off the top of my head:
ARC graph on RSPs with the “safe zone” part
Anthropic calling ASL-4 accidental risks “speculative”
The recent TIME article saying there's no trade-off between progress and safety
More generally, having talked to many people in the AI policy/safety community, I can say it's a very common pattern. On the eve of the FLI open letter, one of the most senior people in the AI governance & policy x-risk community was arguing that writing this letter was stupid and that it would make future policy efforts much more difficult, etc.