Agreed. I think of this as sending a signal that at least a limited concern for safety matters. I’m sure we’ll see a bunch of papers with sections addressing this that won’t be great, but over time it stands some chance of normalizing consideration of the safety and ethics of ML work in the field, so that safety work becomes more widely accepted as valuable. So even without much guidance or strong evaluative criteria, this seems like a small win to me: at worst, some papers end up with extra fluff sections their authors wrote to pretend to care about safety rather than ignoring it completely.