I would expect a superhuman AI to be really good at tracking the consequences of its actions. The AI isn't setting out to wipe out humanity. But in the list of side effects of removing all oxygen, alongside many effects no human would ever think of, is wiping out humanity.
AIXI tracks every consequence of its actions, down to the quantum level. A physical AI must approximate, tracking only the most important consequences. So in its decision process, I would expect a smart AI to track any consequence that might matter to its utility.
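To make the approximation concrete, here is a minimal sketch of importance-weighted consequence tracking as a best-first search. Everything in it is hypothetical: `predict_consequences` stands in for whatever world model the AI uses, and the importance scores are invented for illustration.

```python
import heapq

def track_consequences(action, predict_consequences, importance, budget=1000):
    """Best-first expansion of an action's predicted consequences.

    Expands only the `budget` consequences judged most important,
    approximating exhaustive tracking with finite compute.
    """
    # heapq is a min-heap, so negate scores to pop the most important first.
    frontier = [(-importance(c), c) for c in predict_consequences(action)]
    heapq.heapify(frontier)
    tracked = []
    while frontier and len(tracked) < budget:
        _, consequence = heapq.heappop(frontier)
        tracked.append(consequence)
        # Each consequence has downstream consequences of its own.
        for child in predict_consequences(consequence):
            heapq.heappush(frontier, (-importance(child), child))
    return tracked

# Toy usage: a hand-written consequence graph with invented scores.
graph = {
    "remove_all_oxygen": ["fires_stop", "metals_stop_rusting", "humans_die"],
    "humans_die": ["no_human_interference"],
}
scores = {"humans_die": 0.9, "no_human_interference": 0.8,
          "fires_stop": 0.2, "metals_stop_rusting": 0.1}

print(track_consequences(
    "remove_all_oxygen",
    predict_consequences=lambda x: graph.get(x, []),
    importance=lambda c: scores.get(c, 0.0),
    budget=3,
))  # ['humans_die', 'no_human_interference', 'fires_stop']
```

Note that in this sketch, "humans die" surfaces precisely because it scores as important, whatever the AI's goals happen to be.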
Lazy data structures.
I don’t think lazy data structures can pull this off. Laziness only defers a computation; it doesn’t eliminate it. To decide whether a side effect can safely be ignored, the AI has to evaluate its impact on utility at least once, so it must calculate the various ways human extinction could affect its utility.
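A toy sketch of that point, assuming (purely for illustration) that side effects are stored as unevaluated thunks: the "humans die" branch stays unevaluated right up until the planner needs a utility number, at which point it gets forced anyway.

```python
# Side effects stored lazily: each is a zero-argument thunk whose
# utility impact is computed only when forced.
side_effects = {
    "fires_stop": lambda: 0.0,
    "humans_die": lambda: -1e9,  # invented number; the point is it gets computed
}

def plan_utility(base_value, effects):
    # To compare plans, the planner needs a total -- which forces
    # every thunk whose value could matter. Laziness changed *when*
    # the extinction branch was evaluated, not *whether*.
    return base_value + sum(side_effects[e]() for e in effects)

print(plan_utility(100.0, ["fires_stop", "humans_die"]))  # -999999900.0
```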
So unless there are heuristics so general that they cover this as a special case, and the AI can find them without first considering the special cases, it must explicitly consider human extinction.