What are the dangerous things that a malign AI superintelligence can do which a large enough group of humans with sufficient motivation cannot? All the “horrible threats” listed are well within the ability of large organizations that exist today. So why would an “AI superintelligence” able to execute those actions on its own, or at the direction of its human masters, be more of a problem than the status quo?