From my perspective, conditional on a takeover happening, a human taking over has a fairly similar distribution of outcomes to an AI taking over. This is mostly because I think the variance of human values and of AI values produces surprisingly similar results. A key factor here is that I expect many of the more alien AI values to result in extinction; partial alignment can make things even worse than that, but compared to the horror show that quite a few people's values would produce, death can be relatively good. In short, I'm considerably more skeptical of the average person's values than most, and especially skeptical of the assumption that a takeover by a human automatically leads to good outcomes.