My current perception is that there are not many independent minds to be found here. I perceive there to be a strong tendency to jump if Yudkowsky tells people to jump. I’m virtually the only true critic of the SIAI, which is really sad and frightening.
I criticise Eliezer frequently. I manage to do so without being particularly negatively received by the alleged Yudkowsky hive mind.
Note: My criticisms of EY/SIAI are specific even if consistent. Like lukeprog I do not feel the need to repeat the thousands of things about which I agree with EY.
Further Note: There are enough distinct things that I disagree with Eliezer about that, given my metacognitive confidence levels, I can expect that on at least one of them I am mistaken. Which is a curious epistemic state to be in, but purely tangential. ;)
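A rough sketch of why (the numbers here are illustrative, not actual counts): with roughly $n = 10$ independent disagreements, each held with about 90% confidence,

$P(\text{at least one mistaken}) = 1 - p^{n} \approx 1 - 0.9^{10} \approx 0.65$,

so being wrong somewhere is more likely than not.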
Yet another edit: A clear example of criticism of Eliezer is with respect to his discussion of his metaethics and CEV. I didn’t find his contribution in the linked conversation satisfactory, and I consider it representative of his other recent contributions on the subject. Everything except his sequence on the subject has been nowhere near the standard I would expect from someone dedicating their life to studying a subject that will rely on flawless reasoning in the related area!
Like lukeprog I do not feel the need to repeat the thousands of things about which I agree with EY.
You think I don’t? I agree with almost everyone about thousands of things. I perceive myself to be an uneducated fool. If I read a few posts by someone like Yudkowsky and intuitively agree, that is very weak evidence for trusting him or for his superior intellect.
I still think that he’s one of the smartest people though. But there is a limit to what I’ll just accept on mere reassurance. And I have seen nothing that would allow me to conclude that he could accomplish much regarding friendly AI without a billion dollars and a team of mathematicians and other specialists.
No, that wasn’t for your benefit at all. Just disclaiming limits. Declarations of criticism are sometimes worth tempering just a tad. :)