Thanks for pointing this out. I find it particularly epistemically suspicious that all the quoted people (who currently predict a grim future for humanity in the next 100 years if AGI isn’t built and deployed widely) could hardly have had median AGI timelines shorter than 30 years five years ago, with significant probability weight on 70+ year timelines, yet they didn’t voice this as a significant concern, let alone an existential risk, at that time. And I don’t think anything that has happened in the last five years should have made them so much more pessimistic. Trump had already won the US election, Brexit had already been voted for, the erosion of the epistemic and civic commons and democratic backsliding were already underway, and the climate situation was basically as obviously grave as it is today. Civilisational unpreparedness for a major pandemic was also obvious, as was the fact that nothing was being done to fix it.
This suggests to me that the view these people have currently converged on, “x-risk from building AGI is high, but the risk of civilisational collapse in the next 100 years without AGI is even higher”, is a trick of the mind that probably would not have occurred if AGI timelines hadn’t shrunk so dramatically for everyone.
I’m also aware that Will MacAskill argued for a 30% chance of stagnation and civilisational collapse this century in “What We Owe The Future” last year, which could have prompted various people to update. But I was mostly unconvinced by that argument, and I wonder whether the argument itself was an example of the same psychological reaction (or a sort of counter-reaction) to massively shrinking timelines.
I would further hypothesize that these inferences are the result of the brain’s attempt to make the fear and excitement about AGI coherent. If the person is not a longtermist, they typically reach for the idea that AGI will be a massive upside for the people currently living (and I think Altman is in this camp, despite being quoted here). But for longtermists, such as Alexander, this “doesn’t work” as an explanation of their intuitive excitement about AGI, so they reach for the idea of “massive risk without AGI” instead.
I should say that I don’t imagine these hypotheticals in a void: I feel something like “sub-excitement” (or proper excitement, which I deliberately suppress) about AGI myself, and I was also close to being convinced by MacAskill’s and Alexander’s arguments.