I didn’t see the post in that light at all. I think it gave a short, interesting, and relevant example of the dynamics of intellectual innovation in “intelligence research” (Jeff) and how this could help predict and explain the impact of current research (MIRI/FHI). I do agree the post is about “tribalism” and not about the truth; however, that seems to have been the OP’s explicit intention, and it is a worthwhile topic. It would be naive and unwise to overlook these sorts of societal considerations if your goal is to make AI development safer.