This post raised some interesting points, and stimulated a bunch of interesting discussion in the comments. I updated a little bit away from foom-like scenarios and towards slow-takeoff scenarios. Thanks.
For that, I’d like to upvote this post.
On the other hand:
I think direct/non-polite/uncompromising argumentation against other arguments, models, or beliefs is (usually) fine and good.
And I think it’s especially important to counter-argue possible inaccuracies in key models that lots of people have about AI/ML/alignment.
However, in many places, the post reads like a personal attack on a person (Yudkowsky), rather than just on models/beliefs he has promulgated.
I think that style of discourse runs the risk of:

- politicizing the topic under discussion, and thereby making it harder for people to think clearly about it
- creating a shitty culture where people are liable to get personally attacked for participating in that discourse
For that, I’d like to downvote this post.
(I ended up neither up- nor down-voting.)