I would further hypothesize that these inferences are the result of the brain’s attempt to make the fear and excitement about AGI coherent. If the person is not a longtermist, they typically reach for the idea that AGI will be a massive upside for people currently living (and I think Altman is in this camp, despite being quoted here). But for longtermists, such as Alexander, this “doesn’t work” as an explanation for intuitive excitement about AGI, so they reach for the idea of “massive risk without AGI” instead.
I should say that I don’t imagine these hypotheticals in a void: I feel something like “sub-excitement” (or proper excitement, which I deliberately suppress) about AGI myself, and I was also close to being convinced by the arguments of MacAskill and Alexander.