Of course there has been plenty of ‘obvious output of this kind from the rest of the “AI safety” field’. People have hardly been quiet about convergent instrumental goals. So what is going on here?
I read this line (and the paragraphs that follow it) as Eliezer talking smack about all other AI safety researchers. As observed by Paul here:
> Eliezer frequently talks smack about how the real world is surprising to fools like Paul.
I liked some of Eliezer’s earlier, more thoughtful writing better.