Even if you are way smarter than humanity and can commandeer, say, the entire solar system’s resources towards a grand paperclip-production plan, you may pause to consider whether putting the plan into action is a good idea before you have gathered some more knowledge about the universe. What if there are unknown physics in the universe—in some distant corner of space or time, or at some scale that you haven’t understood fully yet—that make this a highly suboptimal plan?
Seems like gathering, at the very least, a decent chunk of Earth’s resources to make a telescope bigger and better than what Earthlings have made so far would put you into a better position to discover such corners. And to prepare to deal with whatever might be in those corners—and to be sure the Earthlings can’t shut you down in the meantime.
What if other intelligent adversaries exist out there that may detect the power output of your plan and promptly annihilate you?
The Earth currently has a visible power output produced by human activity. A dedicated machine could probably take over the world using less visible power than that, especially if it made efforts at concealment.
What about the unknown unknowns?
An AI could address many categories of them by, say, deciding “I’m going to calculate what the world would look like (through a very distant telescope) if humans continued developing on an optimistic path—world peace, developing fusion within 10 years, working on colonizing the Moon and Mars—and spend a small fraction of my resources on maintaining that illusion for the next N years, while I do my best to be stealthy at sending out probes and stuff as I consolidate and research.”
In the extreme, any course of action can be justified by saying “Imagine there’s some super-powerful entity watching me closely, who will punish me if I do anything but that course of action.” There must be some limit to that argument. Still, faraway powerful observers watching through telescopes for signs of nascent AIs (and limited by light speed in their response time) are vaguely plausible.
Yet it feels likely to me that a fair chunk of intelligent agents would:
a. Naturally be circumspect about setting unspecified variables to extreme values, particularly if doing so is irreversible.
Extreme is one thing. But establishing an unbreakable-by-humanity chokehold on Earth is not extreme by cosmic standards. If the AI is strong enough, it can do this by nonlethal means, giving it plenty of time to decide whether killing us is a mistake before it does so irreversibly.
I guess you ultimately conclude something similar:
It probably doesn’t make our survival any more likely