Rationalists think that the AI murdering us all is the bigger risk, and that automation gives humans more wealth and free time and is therefore good.
I don’t think this is really accurate. It’s not about “bigger.” For humanity, surviving and thriving depend on us overcoming or avoiding all the x-risks. Rather, many rationalists expect everyone-not-dying to be the first ASI safety gate we have to get through, temporally and ontologically (if we’re worrying about unemployment, we’re alive to do so).