Reposting my agreement from the EA Forum! (Personally, I feel it would be nice for EA/LessWrong crossposts to have fully synced comments, so that it's all one big community discussion. Anyways --)
Definitely agree with this. Consider, for instance, how markets seemed to react strangely and too slowly to the emergence of the Covid-19 pandemic, and then consider how much more familiar and predictable the idea of a viral pandemic is than the idea of unaligned AI:
The coronavirus was x-risk on easy mode: a risk (global influenza pandemic) warned of for many decades in advance, in highly specific detail, by respected & high-status people like Bill Gates, which was easy to understand with well-known historical precedents, fitting into standard human conceptions of risk, which could be planned & prepared for effectively at small expense, and whose absolute progress human by human could be recorded in real-time… If the worst-case AI x-risk happened, it would be hard for every reason that corona was easy. When we speak of “fast takeoffs”, I increasingly think we should clarify that apparently, a “fast takeoff” in terms of human coordination means any takeoff faster than ‘several decades’ will get inside our decision loops. -- Gwern
Those investors who limit themselves to what seems normal and reasonable in light of human history are unprepared for the age of miracle and wonder in which they now find themselves. The twentieth century was great and terrible, and the twenty-first century promises to be far greater and more terrible. …The limits of a George Soros or a Julian Robertson, much less of an LTCM, can be attributed to a failure of the imagination about the possible trajectories for our world, especially regarding the radically divergent alternatives of total collapse and good globalization.
That quote is from Peter Thiel's "Optimistic Thought Experiment" essay about investing under anthropic shadow, which I analyzed in a Forum post. Thiel, too, thinks that a "failure of imagination" is at work here, similar to what Gwern describes.