Once the economy is fully automated we end up in a Paul Christiano-style scenario where everything that happens in the world is incomprehensible to humans without a large amount of AI help. But ultimately the AI, having been in control for so long, is able to subvert all the systems that human experts use to monitor what is actually going on. The stuff they see on screens is fake, just as Stuxnet fed false readings to the Iranian technicians at Natanz.
This concedes the entire argument that we should regulate uses, not intelligence per se. In your story a singleton AI uses a bunch of end-effectors (robot factories, killer drones, virus-manufacturing facilities) to cause the end of humanity.
If there isn’t a singleton AI (i.e. my good AI will stop your bad AI), or if we actually maintain human control of dangerous end-effectors, then you can never get to the “and then the AI kills us all” step.
Certainly you can argue that the AI will be so good at persuasion/deception that there’s no way to maintain human control. Or that there’s no way to identify dangerous end-effectors in advance. Or that AIs will inevitably all cooperate against humanity (due to some galaxy-brained take about how AIs can engage in acausal bargaining by revealing their source code while humans can’t). But none of these things follow automatically from the mere existence somewhere of a set of numbers on a computer that happens to surpass humanity’s intelligence. Under any plausible scenario without Foom, the level at which AGI becomes dangerous just by existing is well above the threshold of human-level intelligence.