Note that Hanson currently thinks the chances of AI doom are < 1%, while Yudkowsky thinks that they are > 99%.
It is good to note that the optimistic version of Hanson's scenario would be considered doom by many (including Yudkowsky). Yudkowsky's definition of doom/utopia is not the same as Hanson's.
This is important in many discussions. Many non-doomers have definitions of utopia that others consider dystopian. E.g. "AI will replace humans to create a very interesting future where the AIs will conquer the stars" — some think this is positive, while others think this is doom because there are no humans left.
Curious if OP or anyone else has a source for the <1% claim? (Partially interested in order to tell exactly what kind of “doom” this is anti-predicting.)
This was also my impression.
Here is a summary of the Hanson position (written by Hanson himself). He is very clear about humanity being replaced by AI.
https://www.overcomingbias.com/p/to-imagine-ai-imagine-no-ai