Not necessarily all instances; just enough instances that our observations are not incredibly unlikely. I wouldn't be too surprised if, out of a sample of 100,000 AIs, none of them managed to produce a successful vNP before crashing. In addition to the previous points, the vNP would have to leave the solar system fast enough to escape the AI's "crash radius" of destruction.
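As a toy check on how strong this filter would have to be (the per-AI success probability p below is purely an assumed parameter, not something from the discussion): if each AI independently has probability p of launching a vNP before crashing, the chance that none of 100,000 AIs succeed is (1 - p)^100000, which stays high only while p is well below 1/100,000.

```python
# Toy filter-strength check: P(no successful vNP from N independent AIs),
# for a few assumed per-AI success probabilities p.
N = 100_000
for p in (1e-7, 1e-6, 1e-5, 1e-4):
    p_none = (1 - p) ** N
    print(f"p = {p:.0e}: P(no vNP out of {N:,} AIs) = {p_none:.3g}")
```

With p = 10^-6 the all-fail outcome still has probability about 0.90, but by p = 10^-4 it drops to roughly 10^-5, so the argument needs the per-AI chance of a successful vNP to be genuinely tiny.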
Regarding your second point: if it turns out that most organic races can't produce a stable AI, then I doubt an insane AI would be able to make a sane intelligence. Even if it had the knowledge to, its own unstable value system could give the vNP a really unstable value system too.
It might be the case that the space of self-modifying unstable AIs has attractor zones that cause unstable AIs of different designs to converge on similar behaviors, none of which produce a vNP before crashing.
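Purely as an analogy for what an attractor zone means here (a generic dynamical-systems toy, not a model of actual AI self-modification): repeatedly applying a contracting update pulls very different starting points toward the same fixed behavior.

```python
import math

# Toy attractor demo: iterating x -> cos(x) converges to the same
# fixed point (~0.739) from any starting "design".
for x0 in (0.0, 1.5, -3.0, 10.0):
    x = x0
    for _ in range(100):
        x = math.cos(x)  # one "self-modification" step
    print(f"start {x0:5.1f} -> settles at {x:.6f}")
```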
Your last point is an interesting idea though.
In my second point I meant the original people who created the AI. Not all of them will be killed during its creation or during the AI's halt. Many will survive, and they will be rather strong posthumans from our point of view. Just one of them is enough to start an intelligence wave.
Another option is that the AI may create nanobots capable of self-replicating in space, but not of interstellar travel. Even so, they would jump randomly from one comet to another and, in roughly a billion years, colonise the whole Galaxy. We could search for such relics in space. They may be rather benign from a risk point of view, rather like mechanical plants.
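A rough sanity check of that billion-year figure (all numbers below are my own illustrative assumptions): because the nanobots replicate, the colonised region spreads as an expanding front, so the crossing time is roughly the galactic radius divided by the effective drift speed.

```python
# Back-of-envelope for comet-hopping replicators; every figure here is an
# assumed illustrative value, not a measured one.
KM_PER_LY = 9.461e12          # kilometres in one light-year
SEC_PER_YR = 3.156e7          # seconds in one year

galaxy_radius_ly = 50_000     # assumed distance the front must cover
drift_speed_kms = 20          # assumed comet-to-comet drift speed, km/s

front_speed_ly_per_yr = drift_speed_kms * SEC_PER_YR / KM_PER_LY
crossing_time_yr = galaxy_radius_ly / front_speed_ly_per_yr
print(f"front speed ~ {front_speed_ly_per_yr:.1e} ly/yr")
print(f"galactic crossing time ~ {crossing_time_yr:.1e} yr")
```

At a ~20 km/s drift speed this comes out near 7.5 x 10^8 years, the same order as the billion-year estimate; slower drift or long replication pauses at each comet would stretch it.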
Another option is that the only way an AI could survive halting risks is either by becoming crazy or by using a very strange optimisation method of problem solving. In that case it may already be here, but we could not recognise it, because its behaviour is absurd from any rational point of view. I came to this idea when I explored whether UFOs might be alien AI with a broken goal system. (I estimate this as less than 1 per cent likely to be true, because both premises are unlikely: that UFOs are something real, and that alien AI exists but is crazy; if each premise had, say, a 10 per cent chance, their conjunction would already be down to 1 per cent.) I wrote about it in my controversial manuscript "Unknown unknowns as existential risks", p. 90.
https://www.scribd.com/doc/18221425/Unknown-unknowns-as-existential-risk-was-UFO-as-Global-Risk