I don’t see why people can’t just bite the bullet about it and accept their intuitions are wrong like they do a myriad other highly counterintuitive things in math and science.
I don’t see why people can’t just bite the bullet about it and accept their intuitions are wrong...
I think that it is not clear enough how they are wrong in this case. That is why I wrote the OP: to hint at the possibility that the problem is not risk from AI in and of itself, but rather risk aversion and the discounting of low-probability events.
What do you think is the underlying reason for the disagreement of organisations like GiveWell or people like John Baez, Robin Hanson, Greg Egan, Douglas Hofstadter etc.?
Eliezer Yudkowsky wrote:
Where should you go in life? I don’t know exactly, but I think I’ll go ahead and say “not environmentalism”. There’s just no way that the product of scope, marginal impact, and John Baez’s comparative advantage is going to end up being maximal at that point. (...) Maybe if there were ten people working on environmentalism and millions of people working on Friendly AI, I could see sending the next marginal dollar to environmentalism. But with millions of people working on environmentalism, and major existential risks that are completely ignored…
Why don’t they accept this line of reasoning? There must be a reason other than doubt about the existence of existential risks, because all of them agree that existential risks do exist.
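To make the disputed decision rule concrete: under straight expected value maximisation, a tiny probability of an astronomically large payoff dominates a near-certain modest one, while someone who discounts probabilities below some threshold (the "discounting of low-probability events" mentioned above) ranks them the other way round. A minimal sketch in Python; the numbers, the cutoff, and the function names are purely illustrative assumptions, not anyone's actual estimates:

```python
# Illustrative only: toy numbers, not anyone's actual probability estimates.

def expected_value(p, payoff):
    """Straight expected value: probability times payoff."""
    return p * payoff

def discounted_value(p, payoff, cutoff=1e-6):
    """Like expected_value, but probabilities below the cutoff are treated
    as zero, mimicking the 'discounting of low-probability events' above."""
    return 0.0 if p < cutoff else p * payoff

# Option A: near-certain, modest benefit (e.g. a conventional charity).
# Option B: tiny probability of an astronomically large benefit.
options = {
    "A (sure, modest)":        (0.9, 1_000),
    "B (tiny p, huge payoff)": (1e-9, 1e15),
}

for name, (p, payoff) in options.items():
    print(f"{name}: EV = {expected_value(p, payoff):,.0f}, "
          f"discounted = {discounted_value(p, payoff):,.0f}")

# Straight EV ranks B far above A (1,000,000 vs 900); with the cutoff, B drops
# to 0 and A wins. The disagreement can then be about the decision rule,
# not about whether existential risks exist at all.
```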
Because they are irrational, or haven’t been exposed to it?
If I remember correctly, even Eliezer himself had a hard time biting the bullet on the St. Petersburg’d version. Actually, come to think of it, I’m not sure if he ever did...
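For reference, the St. Petersburg game alluded to here: a fair coin is flipped until it comes up heads, and a head on the n-th toss pays 2^n, so the expected payoff is the divergent sum (1/2)·2 + (1/4)·4 + ... = 1 + 1 + ..., while almost every individual play pays out only a few units. A small illustrative sketch of that gap between the divergent expectation and typical outcomes (the simulation is only an illustration, not anything from the thread):

```python
import random

def st_petersburg_play():
    """One play: flip a fair coin until heads; the payoff doubles each toss."""
    payoff = 2
    while random.random() < 0.5:   # tails: flip again, payoff doubles
        payoff *= 2
    return payoff

# Theoretical expected value: sum over n of (1/2**n) * 2**n = 1 + 1 + 1 + ...,
# which diverges. Yet almost every individual play pays only a few units,
# which is why "just maximise expected value" is such a hard bullet to bite.
random.seed(0)
samples = sorted(st_petersburg_play() for _ in range(100_000))
print("median payoff over 100k plays:", samples[len(samples) // 2])
print("mean payoff over 100k plays:  ", sum(samples) / len(samples))
```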
Because they are irrational, or haven’t been exposed to it?
They all have been exposed to it. John Baez, GiveWell, Robin Hanson, Katja Grace, Greg Egan, Douglas Hofstadter and many others. John Baez has interviewed Eliezer Yudkowsky (part 1, 2, 3). Greg Egan wrote a book where he disses the SIAI. GiveWell interviewed the SIAI. Katja Grace has been a visiting fellow. Robin Hanson started Overcoming Bias with Eliezer. And Douglas Hofstadter talked at the Singularity Summit. None of them believes that risks from AI are terribly important. And there are many other people. And those are just the few that even care to comment on it.
Are they all irrational? If so, how can we fix that?
No idea. Colour me confused.