It would be interesting if you went into more detail on how long-termists should allocate their resources at some point; what proportion of resources should go into which scenarios, etc. (I know that you’ve written a bit on such themes.)
That was also probably my main question when listening to this interview.
I also found it interesting to hear that statement you quoted now that The Precipice has been released, and now that there are two more books on the horizon (by MacAskill and Sandberg) that I believe are meant to be broadly on longtermism but not specifically on AI. The Precipice has 8 chapters, with roughly a quarter of 1 chapter specifically on AI, and a bunch of other scenarios discussed, so it seems quite close to what Hanson was discussing. Perhaps at least parts of the longtermist community have shifted (or were already shifting?) more towards the sort of allocation of attention/resources that Hanson was envisioning.
I share the view that research on the supposed “crying wolf effect” would be quite interesting. I think its results have direct implications for longtermist/EA/x-risk strategy and communication.