Thinking more on this, I haven’t seen anyone actually arguing that an AGI will attempt or succeed in killing all humans within months of passing the self-improvement threshold that gives a path to super-intelligence. I don’t follow the debates that closely, so I might be wrong, but I’d enjoy links to things you’re actually arguing against here.
My take on the foom/fast-takeoff scenarios is that they're premised on the idea that, if there is such a threshold, exactly one AI gets to the level where it can improve itself beyond the containment of any box and discovers mechanisms to increase its planning and predictive power by orders of magnitude in a short time. Because survival is instrumentally valuable for most goals, it won't share these mechanisms, but will increase its own power or create subsidiary AIs that it knows how to control (even if we don't). It will likely sabotage or subsume all other attempts to do so.
At some point beyond that, it will realize that humans are more of a threat than a help. Or at least most of us are. Whether it kills us, enslaves us, or just protects itself and then ignores us, and whether it takes days, months, or centuries for the final outcome to obtain, the tipping point will have happened quickly, and from then on the loss of human technological control of the future is inevitable.
Unstated and unexamined is whether this is actually a bad thing. I don't know which moral theories count a hyper-intelligent AI as a moral patient, but I can certainly imagine that in some of them it would be a valid and justifiable utility monster.