Yeah, that article was originally an attempt to “essay-ify” an earlier draft of this very dialogue. But I don’t think the essay version succeeded at communicating the idea very well.
The dialogue is at least better, I think, if you have the relevant context (“MIRI is a math research group that works on AI safety and likes silly analogies”) and know what the dialogue is trying to do (“better pinpoint the way MIRI thinks of our current understanding of AGI alignment, and the way MIRI thinks of its research as relevant to improving our understanding, without trying to argue for those models”).