I think one of the reasons is that the LW/SIAI crowd thinks all other people are below their standards.
“Below their standards” is a bad way to describe this situation; it suggests some kind of presumption of social superiority, while the actual problem is just that the things almost all researchers write on this topic are presumably not helpful. They are either considering a different problem (e.g. practical ways of keeping real near-future robots from killing the wrong people, where it’s perfectly reasonable to say that the philosophy of consequentialism is useless, since there is no practical way to apply it; or applied ethics, where we ask how humans should act), or contemplating the confusingness of the problem without making useful progress (a lot of philosophy).
This property doesn’t depend on whether we are making progress ourselves, so it’s perfectly possible (and to a large extent true) that SIAI isn’t producing progress that meets the standard of usefulness either.
A point where SIAI makes visible and useful progress is in communicating the difficulty of the problem: the very fact that most of what purports to be progress on FAI is actually not.
“A point where SIAI makes visible and useful progress is in communicating the difficulty of the problem...”
This is, in fact, the main goal of my book on the subject, except that I’ll do it in more detail and spend more time citing specific examples from the literature that are wrong. Eliezer has done some of this, but there’s lots more to do.