My impression is that Yudkowsky has harmed public epistemics in his podcast appearances: he makes his points forcefully, but with rather poor spoken communication skills for novice audiences.
I recommend reading the YouTube comments on his recorded podcasts, rather than e.g. Twitter commentary from people with a pre-existing adversarial stance toward him (or toward AI risk questions writ large).
I’m not commenting on those who are obviously just grinding an axe; I’m commenting on the stance toward “doomers” from otherwise reasonable people. From my limited survey, the x-risk-concern brand isn’t looking good, and that isn’t mostly a result of the amazing rhetorical skills of the e/acc community ;)
> I recommend reading the YouTube comments on his recorded podcasts, rather than e.g. Twitter commentary from people with a pre-existing adversarial stance toward him (or toward AI risk questions writ large).
Good suggestion, thanks; I’ll do that.