Tone matters, but would you really destroy the Earth because Eliezer was mean?
The issue seems to be not whether Eliezer personally behaves in an unpleasant manner, but rather how Eliezer’s responses influence predictions of just how much difference he will be able to make on the problem at hand. Jordan’s implied reasoning is different from the one you suggest.
BTW, when people start saying, not, “You offended me, personally” but “I’m worried about how other people will react”, I usually take that as a cue to give up.
This too is not the key point in question. Your reaction here can be expected to correlate with the reactions you would make in other situations, not necessarily related to PR. In particular, it says something about your potential responses to ego-damaging information. I am sure you can see how that would be relevant to estimates of the value (positive or negative) you will contribute. I note again that ‘influences’ does not mean ‘drastically influences’.
Note that the reply in the parent comment relates somewhat more to other contexts than to this one.
Maybe because most people who talk to you know that they are not typical and also know that others wouldn’t be as polite given your style of argumentation?
People who know they aren’t typical should probably realize that their simulation of typical minds will often be wrong. A corollary to Vinge’s Law, perhaps?
That’s still hard to see: that you could go from thinking SIAI represents the most efficient form of altruistic giving to thinking it no longer does, because Eliezer was rude to someone on LW. He has manifestly managed to persuade quite a few people of the importance of AI risk despite these disadvantages. I think you’d have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.
Rudeness isn’t the issue* so much as what the rudeness substitutes for. A terrible response in a place where a good response should be possible is information, and it should influence evaluations.
* Except, I suppose, insofar as public behaviour has PR implications, but Jordan’s statement doesn’t need to rely on that.
I think you’d have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.
Obviously. But not every update needs to be an overwhelming one. Again, the argument you refute here is not one that Jordan made.
EDIT: I have just seen Jordan’s reply; he did mean both of these points, and he also considered the part that I had included only as a footnote.
Yes, thank you for clarifying that.