Another option is that the only way an AI could survive halt risks is by becoming crazy or by using a very strange optimisation method of problem solving. In this case it may be here, but we could not recognise it, because its behavior is absurd from any rational point of view. I came to this idea when I explored the question of whether UFOs may be an alien AI with a broken goal system. (I estimate it to be less than 1 per cent likely to be true, because both premises are improbable: that UFOs are something real, and that an alien AI exists but is crazy.) I wrote about it in my controversial manuscript “Unknown unknowns as existential risks”, p.90.
https://www.scribd.com/doc/18221425/Unknown-unknowns-as-existential-risk-was-UFO-as-Global-Risk