I suspect Lumifer’s getting downvoted for four reasons:
(1) A lot of his/her responses attack the weakest (or least clear) point in the original argument, even if it’s peripheral to the central argument, without acknowledging any updating on his/her part in response to the main argument. This results in the conversation spinning off in a lot of unrelated directions simultaneously. Steel-manning is a better strategy, because it also makes it clearer whether there’s a misunderstanding about what’s at issue.
(2) Lumifer is expressing consistently high confidence that appears disproportionate to his/her level of expertise and familiarity with the issues being discussed. In particular, s/he's unfamiliar with even the cursory summaries of Sequence points that can be found on the wiki. (This is more surprising, and less easy to justify, given how much karma s/he's accumulated.)
(3) Lumifer’s tone comes off as cute and smirky and dismissive, even when the issues being debated are of enormous human importance and the claims being raised are at best not obviously correct, at worst obviously not correct.
(4) Lumifer is expressing unpopular views on LW without arguing for them. (In my experience, unpopular views receive polarizing numbers of votes on LW: They get disproportionately many up-votes if well-argued, disproportionately many down-votes if merely asserted. The most up-voted post in the history of LW is an extensive critique of MIRI.)
I didn’t downvote Lumifer’s “My prior that they were capable of building an actually dangerous AI cannot be distinguished from zero :-D”, but I think all four of those characteristics hold even for this relatively innocuous (and almost certainly correct) post. The response is glib and dismissive of the legitimate worry you raised; it reflects a lack of understanding of why this concern is serious (hence it also lacks any relevant counter-argument; you already recognized that the people you were talking about weren’t going to succeed in building AI); and it changes the topic without demonstrating any updating in response to the previous argument.