+1 to this. I feel like an important question to ask is "how much did this change your mind?" I would probably swap the agree/disagree question for this one?
I think the qualitative comments bear this out as well:
dislike of a focus on existential risks or an emphasis on fears, a desire to be “realistic” and not “speculative”
This seems to suggest that people like AGI Safety arguments that don't really cover AGI Safety concerns! I.e., the problem researchers have isn't so much with the presentation as with the content itself.
(Just a comment on some of the above, not all)

Agreed, and thanks for pointing out here that each of these resources differs in content, not just presentation, in addition to being aimed at different audiences. This seems important and isn't highlighted in the post.
We then get into what we want to do about that, where one of the major tricky things is the ongoing debate over "how much researchers need to be thinking in the frame of x-risk to make useful progress in alignment", which seems like a pretty important crux. Another is "what do ML researchers think after consuming different kinds of content": Thomas has some hypotheses in the paragraph starting "I'd guess...", but we don't actually have data on this, and I can think of alternative hypotheses, which also seems quite cruxy.