Can you tell us more about how you’ve seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn’t a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I’m not looking, so would like to know more about that.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer’s status to compensate for what people perceive as inappropriate status grubbing on his part.
Most of the smart people who I know (including myself) perceive him as exhibiting a high degree of overconfidence in the validity of his views about the world.
This leads some of them to conceptualize him as a laughingstock, as somebody who’s totally oblivious, and to feel that the idea that we should be thinking about artificial intelligence is equally worthy of ridicule. I personally am quite uncomfortable with these attitudes, agreeing with Holden Karnofsky’s comment:
“I believe that there are enormous risks and upsides associated with artificial intelligence. Managing these deserves serious discussion, and it’s a shame that many laugh off such discussion.”
I’m somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.
Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.
Yes, I think that you’re right. I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction. There are other qualitatively similar (but more mild) things that Eliezer has said that have been more widely disseminated.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him
Ditto.
I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer’s AI writings as a form of entertainment, and don’t take them even slightly seriously. This is partly because of the Absurdity Heuristic, but I think it’s also because of Eliezer’s writing style, and statements like the one in the initial post.
I personally fall somewhere between these people and, say, someone who has spent a summer at the SIAI on the ‘taking Eliezer seriously’ scale—I think he (and the others) probably have a point, and I at least know that they intend to be taken seriously, but I’ve never gotten round to doing anything about it.
I’m somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.
Oh, definitely. I have no real-life friends who are interested enough in these topics to know who Yudkowsky is (except, possibly, for what little they hear from me, and I try to keep the proselytizing to acceptable levels). So it’s just me and the internet.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer’s status to compensate for what people perceive as inappropriate status grubbing on his part.
I have seen some ridicule of Yudkowsky (on the internet), but my impression had been that it wasn’t a reaction to his tone, but rather that people were using the absurdity heuristic (cryonics and AGI are crazy talk) or reacting to surface-level status markers (Yudkowsky doesn’t have a PhD). That is to say, it didn’t seem like the kind of ridicule that could be avoided by managing one’s tone. I don’t usually read ridicule in detail, so it makes sense that I’d be mistaken about that.
I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction.
If it hasn’t happened yet, that’s at least some evidence it won’t happen. Do you have reason to imagine a scenario which makes things very much worse than they already are based on such an effect, which means we must take care to tiptoe around these possibilities without allowing even one to happen? Because if not, we should probably worry about the things that already go wrong more than the things that might go wrong but haven’t yet.
Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status
Existential risk reduction too! Charities are mostly used for signalling purposes—and to display affiliations and interests. Those caught up in causes use them for social networking with like-minded individuals—to signal how much they care, to signal how much spare time and energy they have—and so on. The actual cause is usually not irrelevant—but it is not particularly central either. It doesn’t make much sense to expect individuals to be actually attempting to SAVE THE WORLD! This is much more likely to be a signalling phenomenon, making use of a superstimulus for viral purposes.
I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer’s AI writings as a form of entertainment, and don’t take them even slightly seriously.
Why do they find them entertaining?
As XiXiDu says—pretty much the same reason they find Isaac Asimov entertaining.
I said the same before. It’s mainly good science fiction. I’m trying to find out if there’s more to it though.
Just saying this as evidence that there is a lot of doubt even within the LW community.
If it hasn’t happened yet, that’s at least some evidence it won’t happen.
I’m confused—I feel like I already addressed most of your remarks in the comment that you’re responding to?