During graduate school I’ve met many smart people who I wish would take existential risk more seriously. Most such people who have heard of Eliezer do not find his claims credible. My understanding is that the reason for this is that Eliezer has made some claims which they perceive to be falling under the above rubric, and the strength of their negative reaction to these has tarnished their mental image of all of Eliezer’s claims.
Can you tell us more about how you’ve seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn’t a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I’m not looking, so would like to know more about that.
Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.
Negative reactions to Yudkowsky from various people (academics concerned with x-risk), just within the past few weeks:
I also have an extreme distaste for Eliezer Yudkowsky, and so I have a hard time forcing myself to cooperate with any organization that he is included in, but that is a personal matter.
You know, maybe I’m not all that interested in any sort of relationship with SIAI after all if this, and Yudkowsky, are the best you have to offer.
...
There are certainly many reasons to doubt the belief system of a cult based around the haphazard musings of a high school dropout, who has never written a single computer program but professes to be an expert on AI. As you point out none of the real AI experts are crying chicken little, and only a handful of AI researchers, cognitive scientists or philosophers take the FAI idea seriously.
…
Wow, that’s an incredibly arrogant put-down by Eliezer... SIAI won’t win many friends if he puts things like that...
...
...he seems to have lost his mind and written out of strong feelings. I disagree with him on most of these matters.
…
Questions of priority—and the relative intensity of suffering between members of different species—need to be distinguished from the question of whether other sentient beings have moral status at all. I guess that was what shocked me about Eliezer’s bald assertion that frogs have no moral status. After all, humans may be less sentient than frogs compared to our posthuman successors. So it’s unsettling to think that posthumans might give simple-minded humans the same level of moral consideration that Eliezer accords frogs.
I was told that the quotes above state some ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that some of these people might not have been honest, or clueful. Otherwise I’ll unnecessarily end up perpetuating possible ad hominem attacks.
utterly false, wrote my first one at age 5 or 6, in BASIC on a ZX-81 with 4K of RAM
The fact that a lot of these reactions are based on false info is worth noting. It doesn’t defeat any arguments directly, but it does show that the naive model, in which everything happens because of the direct perception of actions I directly control, is false.
I don’t like to, but if necessary I can provide the identity of the people who stated the above. They all work directly to reduce x-risks. I won’t do so in public, however.
The identity of these people is not the issue. The percentage of people in a given category who have negative reactions for a given reason, negative reactions for other reasons, and positive reactions would be useful, but not a bunch of soldier-arguments filtered in some unknown way.
I know. I just wanted to highlight that there are negative reactions, including some not-so-negative critiques. If you look further, you’ll probably find more. I haven’t saved everything I’ve seen over the years; I just wanted to show that it’s not as though nobody has a problem with EY. And on every occasion I actually defended him, by the way.
The context is also difficult to provide, as some of it is from private e-mails. The first one, though, is from here, and after thinking about it I can also provide the name, since he said this to Michael Anissimov anyway. It is from Sean Hays:
Sean A Hays PhD
Post Doctoral Fellow, Center for Nanotechnology in Society at ASU
Research Associate, ASU-NAF-Slate Magazine “Future Tense” Initiative
Program Director, IEET Securing the Future Program
But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I’m not looking, so would like to know more about that.
Yudkowsky-hatred isn’t the risk; Yudkowsky-mild-contempt is. People engage with things they hate, and sometimes that brings respect and attention to both parties (by polarizing a crowd that would otherwise be indifferent). But you never want to be exposed to mild contempt.
I can think of some examples of conversations about Eliezer that would fit the category, but it is hard to translate them to text. The important part of the reaction was non-verbal. Cryonics was one topic, and the problem there wasn’t that it lacked credibility but that it was uncool. Another topic is the old “thinks he can know something about Friendly AIs when he hasn’t even made an AI yet” theme. Again, I’ve seen that reaction conveyed through mannerisms that in no way translate to text. You can convey that people aren’t socially relevant without anything so crude as actually saying so.
Can you tell us more about how you’ve seen people react to Yudkowsky? That these negative reactions are significant is crucial to your proposal, but I have rarely seen negative reactions to Yudkowsky (and never in person) so my first availability-heuristic-naive reaction is to think it isn’t a problem. But I realize my experience may be atypical and there could be an abundance of avoidable Yudkowsky-hatred where I’m not looking, so would like to know more about that.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer’s status, to compensate for what people perceive as inappropriate status-grubbing on his part.
Most of the smart people who I know (including myself) perceive him as exhibiting a high degree of overconfidence in the validity of his views about the world.
This leads some of them to conceptualize him as a laughingstock, as somebody who’s totally oblivious, and to feel that the idea that we should be thinking about artificial intelligence is equally worthy of ridicule. I personally am quite uncomfortable with these attitudes, agreeing with Holden Karnofsky’s comment:
“I believe that there are enormous risks and upsides associated with artificial intelligence. Managing these deserves serious discussion, and it’s a shame that many laugh off such discussion.”
I’m somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.
Did that objectionable Yudkowsky-meteorite comment get widely disseminated? YouTube says the video has only 500 views, and I imagine most of those are from Yudkowsky-sympathizing Less Wrong readers.
Yes, I think that you’re right. I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction. There are other qualitatively similar (but milder) things that Eliezer has said that have been more widely disseminated.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him
Ditto.
I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer’s AI writings as a form of entertainment, and don’t take them even slightly seriously. This is partly because of the Absurdity Heuristic, but I think it’s also because of Eliezer’s writing style, and statements like the one in the initial post.
I personally fall somewhere between these people and, say, someone who has spent a summer at the SIAI on the ‘taking Eliezer seriously’ scale—I think he (and the others) probably have a point, and I at least know that they intend to be taken seriously, but I’ve never gotten round to doing anything about it.
I’m somewhat surprised that you appear not to have noticed this sort of thing independently. Maybe we hang out in rather different crowds.
Oh, definitely. I have no real-life friends who are interested enough in these topics to know who Yudkowsky is (except, possibly, for what little they hear from me, and I try to keep the proselytizing to acceptable levels). So it’s just me and the internet.
I haven’t seen examples of Yudkowsky-hatred. But I have regularly seen people ridicule him. Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status, I interpret this ridicule as functioning to lower Eliezer’s status, to compensate for what people perceive as inappropriate status-grubbing on his part.
I have seen some ridicule of Yudkowsky (on the internet), but my impression had been that it wasn’t a reaction to his tone; rather, people were using the absurdity heuristic (cryonics and AGI are crazy talk) or reacting to surface-level status markers (Yudkowsky doesn’t have a PhD). That is to say, it didn’t seem like the kind of ridicule that was avoidable by managing one’s tone. I don’t usually read ridicule in detail, so it makes sense I’d be mistaken about that.
I just picked it out as a very concrete example of a statement that could provoke a substantial negative reaction.
If it hasn’t happened yet, that’s at least some evidence it won’t happen. Do you have reason to imagine a scenario in which such an effect makes things very much worse than they already are, such that we must take care to tiptoe around these possibilities without allowing even one to happen? Because if not, we should probably worry more about the things that already go wrong than about the things that might go wrong but haven’t yet.
Recalling Hanson’s view that a lot of human behavior is really signaling and vying for status
Existential risk reduction too! Charities are mostly used for signalling purposes—and to display affiliations and interests. Those caught up in causes use them for social networking with like-minded individuals—to signal how much they care, to signal how much spare time and energy they have—and so on. The actual cause is usually not irrelevant—but it is not particularly central either. It doesn’t make much sense to expect individuals to be actually attempting to SAVE THE WORLD! This is much more likely to be a signalling phenomenon, making use of a superstimulus for viral purposes.
I was told that the quotes above state some ad hominem falsehoods regarding Eliezer. I think it is appropriate to edit the message to show that some of these people might not have been honest, or clueful. Otherwise I’ll unnecessarily end up perpetuating possible ad hominem attacks.
utterly false, wrote my first one at age 5 or 6, in BASIC on a ZX-81 with 4K of RAM
The fact that a lot of these reactions are based on false info is worth noting. It doesn’t defeat any arguments directly, but it does show that the naive model, in which everything happens because of the direct perception of actions I directly control, is false.
That sounds like a pretty rare device! Most ZX81 models had either 1K or 16K of RAM. 32 KB and 64 KB expansion packs were eventually released too.
Sent you a PM on who said that.
Is it likely that someone who’s doing interesting work that’s publicly available wouldn’t attract some hostility?
That N negative reactions about issue S exist only means that issue S is sufficiently popular.
Not if the polling is of folk in a position to have had contact with S, or is representative.
I don’t like to, but if necessary I can provide the identity of the people who stated the above. They all work directly to reduce x-risks. I won’t do so in public, however.
The identity of these people is not the issue. The percentage of people in a given category who have negative reactions for a given reason, negative reactions for other reasons, and positive reactions would be useful, but not a bunch of soldier-arguments filtered in some unknown way.
I know. I just wanted to highlight that there are negative reactions, including some not-so-negative critiques. If you look further, you’ll probably find more. I haven’t saved everything I’ve seen over the years; I just wanted to show that it’s not as though nobody has a problem with EY. And on every occasion I actually defended him, by the way.
The context is also difficult to provide, as some of it is from private e-mails. The first one, though, is from here, and after thinking about it I can also provide the name, since he said this to Michael Anissimov anyway. It is from Sean Hays:
You have a ‘nasty things people say about Eliezer’ quotes file?
The last one was from David Pearce.
Sure, but XiXiDu’s quotes bear no such framing.
This seems a rather minor objection.
But frogs are CUTE!
And existential risks are boring, and only interest Sci-Fi nerds.
Yudkowsky-hatred isn’t the risk; Yudkowsky-mild-contempt is. People engage with things they hate, and sometimes that brings respect and attention to both parties (by polarizing a crowd that would otherwise be indifferent). But you never want to be exposed to mild contempt.
I can think of some examples of conversations about Eliezer that would fit the category, but it is hard to translate them to text. The important part of the reaction was non-verbal. Cryonics was one topic, and the problem there wasn’t that it lacked credibility but that it was uncool. Another topic is the old “thinks he can know something about Friendly AIs when he hasn’t even made an AI yet” theme. Again, I’ve seen that reaction conveyed through mannerisms that in no way translate to text. You can convey that people aren’t socially relevant without anything so crude as actually saying so.
[insert the obvious bad pun here]
I know; I couldn’t think of a worthy witticism to lampshade it, so I let it slide. :P
I know of a lot of very smart people (ok, less than 10, but still, more than 1) who essentially read Eliezer’s AI writings as a form of entertainment, and don’t take them even slightly seriously. This is partly because of the Absurdity Heuristic, but I think it’s also because of Eliezer’s writing style, and statements like the one in the initial post.
I personally fall somewhere between these people and, say, someone who has spent a summer at the SIAI on the ‘taking Eliezer seriously’ scale—I think he (and the others) probably have a point, and I at least know that they intend to be taken seriously, but I’ve never gotten round to doing anything about it.
Why do they find them entertaining?
As XiXiDu says—pretty much the same reason they find Isaac Asimov entertaining.
I said the same before. It’s mainly good science fiction. I’m trying to find out if there’s more to it though.
Just saying this as evidence that there is a lot of doubt even within the LW community.
If it hasn’t happened yet, that’s at least some evidence it won’t happen. Do you have reason to imagine a scenario in which such an effect makes things very much worse than they already are, such that we must take care to tiptoe around these possibilities without allowing even one to happen? Because if not, we should probably worry more about the things that already go wrong than about the things that might go wrong but haven’t yet.
I’m confused—I feel like I already addressed most of your remarks in the comment that you’re responding to?