May be useful to include in the review alongside some of the comments, or with a postmortem and analysis by Ben (or someone else).

I don’t think the discussion stands well on its own, but it may be helpful for:

- people familiar with AI alignment who want to better understand some of the human factors behind ‘the field isn’t coordinating or converging on safety’;
- people new to AI alignment who want to use the views of leaders in the field to help them orient.