The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has largely ignored it, not because it is beyond dispute, but because few people have bothered to criticize it.
I don’t think Yudkowsky has been ignored through lack of criticism. It’s more that he heads a rival project that doesn’t seem too interested in collaboration with other teams, and instead spits out negative PR about them—e.g.:
*AI: A Modern Approach* seems to take the matter seriously.