I don’t think the onus should be on the reader to infer x-risk motivations. In academic ML, it’s the author’s job to explain why the reader should care about the paper. I don’t see why this should be different in safety. If it’s hard to do that in the paper itself, you can always e.g. write a blog post explaining safety relevance (as mentioned by aogara, people are already doing this, which is great!).
There are often many different ways in which a paper might be intended to be useful for x-risks (and ways in which it might not be). Often the motivation for a paper (even in the groups mentioned above) may be some combination of it being an interesting ML problem, interests of the particular student, and various possible thoughts around AI safety. It’s hard to try to disentangle this from the outside by reading between the lines.
On the other hand, there are plenty of reasons to believe that authors can be deluded about the promise of their own research and its theory of impact. What I personally get most out of posts like this is the third-party perspective, which I can compare against my own.