Just wanted to provide some context for this post.
Jay got in touch with SI about a month ago looking to get involved with our research, with the goal of becoming a Research Fellow. I have spent the last month corresponding with him and helping him get up to speed with our research agenda. To demonstrate his research chops, Jay is working on a publication from the “Forthcoming and Desired Articles on AI Risk” list. I asked him to post a ~1000-word preview/outline as a first step in the process, so that he could get some feedback from the community and get an idea of whether he’s on the right track.
SI is always on the lookout for people who are willing and able to contribute to our research efforts. Working on one of our desired publications is a great way to get started. If you are interested in doing something similar, please get in touch with me!
From the post:

“If the community reacts positively (based on karma and comments) we’ll support the potential contributors’ effort to complete the paper.”

I don’t think you should put much weight on the reaction from LW, given that much more polished papers often get low karma. For example, both my “Responses to Catastrophic AGI Risk: A Survey” and my and Stuart’s “How We’re Predicting AI — or Failing to” currently sit at only 11 upvotes with rather few comments. If even finished papers get so little of a reaction, I would expect many drafts that genuinely deserved a great reception to get little to no response.
Kaj,
Thank you. I had noticed that as well. It seems the LW group is focused on a much longer time horizon.