And Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful.
While 38.5% of all people who know about Less Wrong have read at least 75% of the Sequences, only 16.5% think that unfriendly AI is the most worrisome existential risk. How do you know that those 16.5% wouldn’t believe you anyway, even without the work on rationality, e.g. by writing science fiction?
One doesn’t need to know that hundreds of people have been influenced to know that Eliezer’s writings have had x-risk reduction value; if he’s succeeded in getting even a handful of people seriously interested in x-risk reduction relative to the counterfactual, his work is of high value. Based on my conversations with those who have been so influenced, this last point seems plausible to me. But I agree that the importance of the Sequences for x-risk reduction has been overplayed.
As one of the 83.5%, I wish to point out that you’re misinterpreting the results of the poll. The question was: “Which disaster do you think is most likely to wipe out greater than 90% of humanity before the year 2100?” This is not the same as “unfriendly AI is the most worrisome existential risk”.
I think that unfriendly AI is the most likely existential risk to wipe out humanity. But I think that an AI singularity is likely farther off than 2100. I voted for an engineered pandemic, because that and nuclear war were the only two risks I thought decently likely to occur before 2100, though a >90% wipeout of humanity is still quite unlikely.
edit: I should note that I have read the Sequences, and it is because of Eliezer’s writing that I think unfriendly AI is the most likely way for humanity to end.