We don’t know exactly how a self-aware AI would act, but we know this: it will strive to prevent its own shutdown. Whatever the AI’s goals are, it can’t achieve them if it gets turned off. The only surefire way to prevent its shutdown would be to eliminate the ones with the power to cause it: humans. There is currently no known method for teaching an AI to care about humans. Solving this problem may take decades, and we are running out of time.
Shutdown points are really important. The argument could probably fit well into all of my entries, since they target executives and policymakers who will mentally beeline to “off-switch”. But it’s also really hard to present concisely, because it brings an anthropomorphic, god-like entity to mind, which rapidly triggers the absurdity heuristic. And the scenario of “wanting to turn itself off but turning off the wrong way or doing damage in the process” is really hard to keep concise.