(2) My mistake was not in asserting something false.
It was. What you asserted, depending on interpretation, is either ill-formed or false. A counterexample to your claim is that a Paperclip AI won’t, in any meaningful sense, love humanity.
(3) My transgression was using the emotionally loaded word “love”.
The use of an emotionally loaded word isn’t inappropriate in itself, unless it is in the particular case. Here, your attribution of emotion was false, and so the affective aura accompanying the statement was inappropriate. I hypothesized that emotional thinking was one of the sources of your belief in the truth of the statement you made, so describing your words as “affective rhetoric” was meant to communicate that diagnosis (by analogy with “empty rhetoric”). I actually edited that phrase down from an earlier “affective silliness”, which directly referenced the fact that you had made a mistake, but I changed it to be less offensive.
Vladimir Nesov’s comment was, I think, quite rational: he asserted that I probably needed to learn more about the subject, and he provided some links.
The ‘probably’ was more of a weasel word, referring to the fact that I’m not sure whether you actually want to spend time learning all that stuff, rather than to any particular uncertainty about whether the answer to your question can be found there.
(1) I’m disappointed that even as rationalists, while you were able to recognize that I had committed some transgression, you were not able to identify it precisely, and power (authority, shaming) was used instead of truth to broadly punish me.
The problem is that the inferential distance is too great, so it’s easier to refer the newcomer to the archive, where the answer to what went wrong can be learned systematically, than to try to explain the problems on her own terms.
I read “affective rhetoric” as “effective rhetoric”. (Oops.) Yes, “affective rhetoric” is a much more appropriate comment than “effective rhetoric”. Since it seems like a good place for a neophyte to begin, I will address your comment about the paperclip AI in the welcome thread where Anna Salamon replied.
Anna Salamon replied on the Welcome thread, starting with:
This is in response to a comment of brynema’s elsewhere; if we want LW discussions to thrive even in cases where the discussions require non-trivial prerequisites, my guess is that we should get in the habit of taking “already discussed exhaustively” questions to the welcome thread.