In my second point I meant the original people who created the AI. Not all of them will be killed during its creation and during the AI's halt. Many will survive and, from our point of view, will be rather strong posthumans. A single surviving instance of them is enough to start an intelligence wave.
Another option is that the AI may create nanobots capable of self-replicating in space, but not of interstellar travel. Even so, they would jump randomly from one comet to another and, in roughly a billion years, colonise the whole Galaxy. We could search for such relics in space. They may be rather benign from a risk point of view, much like mechanical plants.
Another option is that the only way an AI could survive halt risks is by either going crazy or using a very strange optimisation method of problem solving. In that case it may already be here, but we could not recognise it, because its behaviour is absurd from any rational point of view. I came to this idea when I explored whether UFOs might be an alien AI with a broken goal system. (I estimate this to be less than 1 per cent likely, because both premises are improbable: that UFOs are something real, and that an alien AI exists but is crazy.) I wrote about it in my controversial manuscript “Unknown unknowns as existential risks”, p. 90.
https://www.scribd.com/doc/18221425/Unknown-unknowns-as-existential-risk-was-UFO-as-Global-Risk