You’re right, but.
The whole story goes like this: Eliezer founded this forum to advance the art of human rationality, so that people would stop raising silly objections to AI safety like “intelligence would surely bring about morality” and the like.
The focus of LW is human rationality and that of MIRI is AI safety, but as far as I can tell, we still haven’t found any valid objection to the orthogonality thesis. On the contrary, the safety of autonomous agents is gaining traction and recognition as an issue.
I do agree that if we found a strong objection we should change perspective, but we still haven’t, and indeed we are seeing more and more worrisome examples.