(Just a comment on some of the above, not all)
Agreed, and thanks for pointing out that each of these resources differs in content, not just presentation, in addition to being aimed at different audiences. This seems important and isn't highlighted in the post.
That leads to the question of what to do about it. One major difficulty is the ongoing debate over how much researchers need to be thinking in the frame of x-risk to make useful progress in alignment, which seems like a pretty important crux. Another is what ML researchers actually think after consuming different kinds of content: Thomas has some hypotheses in the paragraph beginning "I'd guess...", but we don't have data on this, and I can think of alternative hypotheses, so this also seems quite cruxy.