How about a book that has a whole bunch of other scenarios, one of which is AI risk, which takes one chapter out of 20, and 19 other chapters on other scenarios?
It would be interesting if, at some point, you went into more detail on how longtermists should allocate their resources; e.g., what proportion of resources should go to which scenarios. (I know that you’ve written a bit on such themes.)
Unrelatedly, it would be interesting to see some research on the supposed “crying wolf effect”, perhaps with regard to other risks. I’m not sure that effect is as strong as one might think at first glance.
It would be interesting if, at some point, you went into more detail on how longtermists should allocate their resources; e.g., what proportion of resources should go to which scenarios. (I know that you’ve written a bit on such themes.)
That was also probably my main question when listening to this interview.
I also found it interesting to hear that statement you quoted now that The Precipice has been released, and now that there are two more books on the horizon (by MacAskill and Sandberg) that I believe are meant to be broadly on longtermism but not specifically on AI. The Precipice has 8 chapters, with roughly a quarter of one chapter specifically on AI, and a bunch of other scenarios discussed, so it seems quite close to what Hanson was describing. Perhaps at least parts of the longtermist community have shifted (or were already shifting?) more towards the sort of allocation of attention/resources that Hanson was envisioning.
I share the view that research on the supposed “crying wolf effect” would be quite interesting. I think its results would have direct implications for longtermist/EA/x-risk strategy and communication.