A related thought: an intelligence can only work on the information it has, regardless of that information's veracity, and it can only work on information that actually exists.
My hunch is that the plan of "AI bootstraps itself to superintelligence, then superpower, then wipes out humanity" relies on the AI having access to information that is either too well hidden to divine through sheer calculation and info-gathering, regardless of its intelligence (e.g., the locations of all of humanity's military bunkers and nuclear submarines), or that simply does not exist yet (e.g., future human strategic choices decided by coin flips).
Most AI apocalypse scenarios depend not only on the AI being superhumanly smart, but on it being inexplicably omniscient about things that nothing could plausibly be omniscient about.