If an unaligned AI by itself can do near-world-ending damage, an identically powerful AI that is instead alignable to a specific person can do the same damage.
If you mean that as the simplified version of my claim, I don’t agree that it is equivalent.
Your starting point, a powerful AI that can do damage by itself, is wrong. My starting point is groups of people whom we would not currently consider sources of risk, who become very dangerous as novel weaponry and changes in the relations of economic production unlock both the means and the motive to kill very large numbers of people.
And (as I’ve tried to clarify in my other responses) the comparison of this scenario to misaligned AI cases is not the point; the point is the threat from both sides of the alignment question.