But XiXiDu doesn’t understand SIAI’s actual object-level claims, let alone the arguments that link them, and he writes to AI researchers in a style that comes across as crankish.
Agreed—some of his questions were cringe-inducing, but overall, I appreciated that series of posts because it’s interesting to hear what a broad range of AI researchers have to say about the topic; some of the answers were insightful and well-argued.
I agree that sounding crankish could be a problem, but I don’t think Xixidu was presenting himself as writing in LW/SIAI’s name. Crankiness from some lesswrongers tarring the reputation of Eliezer’s writings is hard to avoid anyway. The main problem is that there’s no clear way to refer to Eliezer’s writings: “The Sequences” is obscure and covers too much material, some of which isn’t Eliezer’s; “Overcoming Bias” worked at the time; and “Less Wrong” is a name that wasn’t even in use when most of the core Sequences were written, and it now mostly refers to the community.