While an AI may halt for many reasons, which I listed in my map of AI failure levels (http://lesswrong.com/lw/mgf/a_map_agi_failures_modes_and_levels/), that by itself does not explain the Fermi paradox.
Because before crashing, an AI may create non-evolving von Neumann probes. They will be safe against philosophical crises and will continue “to eat” the universe. If many AIs crashed in the universe before us, surely one of them created self-replicating space probes. Why don’t we see them?
That’s a good point. Possible solutions:
The AI just doesn’t create them in the first place. Most utility functions don’t need non-evolving von Neumann probes, and instead the AI itself leads the expansion.
The AI crashes before creating von Neumann probes. There are lots of destructive technologies an AI could get to before being able to build such probes. An unstable AI that isn’t in the attractor zone of self-correcting fooms would probably become more and more unstable with each modification, meaning that the more powerful it becomes, the more likely it is to destroy itself (a toy sketch of this dynamic appears below). Von Neumann probes may simply be far beyond this point.
Any von Neumann probes that could successfully colonize the universe would have to be intelligent enough that they risk falling into the same trap as their parent AI.
It would only take one exception, but the second and third possibilities are probably strong enough to handle it. A successful von Neumann probe would be a really advanced technology, while an increasingly insane AI might get ahold of destructive nanotech, nukes, and all kinds of other things before then.
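To illustrate why the second possibility could carry most of the weight: if each self-modification adds a little instability, and the chance of self-destruction at each step grows with the accumulated instability, then the fraction of unstable AIs that survive long enough to reach probe-building capability falls off very fast. A minimal Monte Carlo sketch, where the number of steps and the growth rate are purely illustrative assumptions:

```python
import random

def survives_to_probes(steps_to_probes=100, instability_growth=0.001):
    """Toy model: an unstable AI accumulates instability with each
    self-modification; crash probability at step k is k * instability_growth."""
    for k in range(1, steps_to_probes + 1):
        if random.random() < min(1.0, k * instability_growth):
            return False  # crashed before reaching probe-building capability
    return True

trials = 100_000
survivors = sum(survives_to_probes() for _ in range(trials))
print(f"fraction of unstable AIs reaching probe capability: {survivors / trials:.4f}")
# With these assumed numbers, well under 1% make it that far.
```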
Any real solution of the Fermi paradox must work in ALL instances. If there were 100,000 AIs in our past light cone, it seems implausible that all of them fell into the same trap before creating von Neumann probes.
Most of them will have a stable form of intelligence, like local “humans”, who will be able to navigate starships even after the AI fails. So it would be like old-school star navigation without AI. We would return to a world where strong AI is impossible and space is colonised by humanoid colonists. Nice plot, but where are they?
Another solution to the Fermi paradox is that most new AIs fall victim to a superAI predator which sends virus-like messages via some kind of space radio. The message is complex enough that only an AI could find and read it.
Not necessarily all instances, just enough instances that our observations are not incredibly unlikely. I wouldn’t be too surprised if, out of a sample of 100,000 AIs, none of them managed to produce successful von Neumann probes before crashing. In addition to the previous points, the probes would have to leave their solar system fast enough to avoid the AI’s “crash radius” of destruction.
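To put rough numbers on this: if each of N crashed AIs independently had probability p of getting at least one successful probe out before dying, the chance that none of them did is (1 - p)^N ≈ exp(-N·p). A short sketch with purely hypothetical values of p, showing how small p has to be for the “no probes anywhere” outcome to be plausible:

```python
import math

N = 100_000  # assumed number of crashed AIs in our past light cone

# P(no AI out of N launches a successful probe) = (1 - p)**N ~ exp(-N * p)
for p in (1e-3, 1e-5, 1e-7):  # hypothetical per-AI probe-success probabilities
    print(f"p = {p:.0e}: P(no probes anywhere) ~ {math.exp(-N * p):.3g}")

# p = 1e-03: ~4e-44  -> "only one exception" is essentially guaranteed
# p = 1e-05: ~0.37
# p = 1e-07: ~0.99   -> the crash-before-probes story needs p roughly this small
```

So the disagreement reduces to whether the crash dynamics and the “crash radius” really do push the per-AI chance of launching a probe down to about 1/N or below.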
Regarding your second point, if it turns out that most organic races can’t produce a stable AI, then I doubt an insane AI would be able to make a sane intelligence. Even if it had the knowledge to, its own unstable value system could cause the probes to have really unstable value systems too.
It might be the case that the space of self-modifying unstable AIs has attractor zones that cause unstable AIs of different designs to converge on similar behaviors, none of which produce von Neumann probes before crashing.
Your last point is an interesting idea though.
In my second point I meant the original people who created the AI. Not all of them will be killed during its creation and the AI halt. Many will survive and will be rather strong posthumans from our point of view. Just one instance of them is enough to start an intelligence wave.
Another option is that an AI may create nanobots capable of self-replicating in space, but not of interstellar travel. They would nevertheless jump randomly from one comet to another and in roughly 1 billion years would colonise the whole Galaxy. We could search for such relics in space. They may be rather benign from a risk point of view, just like mechanical plants.
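A quick check on the ~1 billion year figure, under stated assumptions (a galactic radius of about 50,000 light years and a comet-to-comet drift speed of about 30 km/s, comparable to typical stellar velocities; both are rough assumed values). Because the nanobots replicate at every comet they reach, the colonised region spreads as an expanding front, so the crossing time scales as distance divided by drift speed rather than as the far longer random-walk time of a single non-replicating hopper:

```python
LY_KM = 9.46e12   # kilometres per light year
YEAR_S = 3.15e7   # seconds per year

galaxy_radius_ly = 50_000   # assumed distance the replication front must cover
drift_speed_kms = 30        # assumed comet-to-comet drift speed

crossing_time_yr = galaxy_radius_ly * LY_KM / drift_speed_kms / YEAR_S
print(f"front crossing time ~ {crossing_time_yr:.1e} years")  # ~5e8 years, same order as 1 Gyr
```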
Another option is that the only way an AI could survive halt risks is either by becoming crazy or by using a very strange optimisation method of problem solving. In this case it may be here, but we could not recognise it because its behavior is absurd from any rational point of view. I came to this idea when I explored the idea that UFOs may be alien AI with a broken goal system. (I estimate it to be less than 1 per cent likely, because both premises are unlikely: that UFOs are something real, and that alien AI exists but is crazy.) I wrote about it in my controversial manuscript “Unknown unknowns as existential risks”, p. 90.
https://www.scribd.com/doc/18221425/Unknown-unknowns-as-existential-risk-was-UFO-as-Global-Risk