I like your point that humans aren’t aligned, and while I’m more optimistic about human alignment than you are, I agree that the level of human alignment currently is not enough to make a superintelligence safe if it only had human levels of motivation/reliability.
The most obvious natural experiments about what humans do when they have a lot of power with no checks-and-balances are autocracies. While there are occasional examples (such as Singapore) of autocracies that didn't work out too badly for the governed, they're sadly few and far between. The obvious question then is whether "humans who become autocrats" are a representative random sample of all humans, or if there's a strong selection bias here. It seems entirely plausible that there are at least some selection effects in the process of becoming an autocrat. A couple of percent of all humans are sociopaths, so if there were a sufficiently strong selection bias (two orders of magnitude or more), then this might, for example, be a natural experiment about the alignment properties of a set of humans consisting mostly of sociopaths, in which case it usually going badly would be unsurprising.
The thing that concerns me is the aphorism "Power corrupts, and absolute power corrupts absolutely". There does seem to be a strong correlation between how long someone has had a lot of power and an increasing likelihood of them using it badly. That's one of the reasons for term limits in positions like president: humans seem to pretty instinctively distrust a leader after they've been in a position of a lot of power with few checks-and-balances for roughly a decade. The histories of autocracies tend to reflect them getting worse over time, on decade time-scales. So I don't think the problem here is just from sociopaths. I think the proportion of humans who wouldn't eventually be corrupted by a lot of power with no checks-and-balances may be fairly low, comparable to the proportion of honest senior politicians, say.
How much of this argument applies to ASI agents powered by LLMs “distilled” from humans is unclear — it’s much more obviously applicable to uploads of humans that then get upgraded to super-human capabilities.
IMO, there are fairly strong arguments that there is a pretty bad selection effect here: people who aim to get into power are generally more Machiavellian/sociopathic than other people. And at least part of the problem is that the parts of your brain that care about other people get damaged when you gain power, which is obviously not good.
But still, I agree with you that an ASI that can entirely run society while only being as aligned to us as humans are to very distant humans likely ends up in a very bad state for us, possibly enough to be an S-risk or X-risk. (I currently see S-risk as more probable than X-risk for ASI if we only had human-level alignment to others.)