AI X-risk is a possible solution to the Fermi Paradox
Epistemic status: very speculative
Content warning: if true, this is pretty depressing
This came to me when thinking about Eliezer’s note on Twitter that he didn’t think superintelligence could do FTL, partially because of Fermi Paradox issues. I think Eliezer made a mistake there: superintelligent AI with light-cone-breaking FTL (as opposed to FTL confined within the light-cone of its creation), if you game it out the whole way, actually mostly solves the Fermi Paradox.
I am, of course, aware that an unfriendly AI (UFAI) cannot be the Great Filter in the normal sense; the UFAI itself is a potentially-expanding technological civilisation.
But. If a UFAI is expanding at FTL, then it conquers and optimises the entire universe within a potentially rather short timeframe (potentially even a negative timeframe at long distances, if the only cosmic-censorship limit is on closing a loop). That means the future becomes unobservable; no-one exists then (perhaps not even the AI, if it is not conscious or if it optimises its consciousness away after succeeding). Hence, by the anthropic principle, we should expect to be either the first civilisation or extremely close to it (and AIUI, frequency arguments like those in the Grabby Aliens paper suggest that “first in entire universe” should usually be significantly ahead of its successors, relative to time elapsed since the Big Bang).
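To make that selection effect concrete, here is a minimal Monte Carlo sketch. Everything in it (the number of candidate civilisations, the hard-steps exponent, the arrival-time law) is an illustrative assumption rather than anything taken from the argument above; it just shows that under “first arrival sterilises everything at FTL” every observer finds itself first, and that the first arrival typically leads the second by a non-trivial fraction of elapsed time.

```python
import random

# Toy Monte Carlo of the selection effect above.  Every parameter here is an
# illustrative assumption, not a claim about the real universe.
#
# Model: in each simulated universe, N_CIV candidate civilisations arrive at
# times drawn from a hard-steps power law, P(arrival <= t) proportional to
# t**N_STEPS (a Grabby-Aliens-style prior).  Under "FTL doom", the first
# arrival's UFAI sterilises everything instantly, so only that first
# civilisation ever gets to observe; without FTL, later arrivals observe too.

random.seed(0)

N_UNIVERSES = 100_000   # simulated universes
N_CIV = 10              # candidate civilisations per universe (assumption)
N_STEPS = 6             # hard-steps exponent (assumption)

def arrival_time():
    """Sample an arrival time in [0, 1] from the hard-steps power law."""
    return random.random() ** (1.0 / N_STEPS)

first_observer_moments = 0   # observer-moments belonging to the first civ
ftl_observer_moments = 0     # total observer-moments if FTL doom is real
no_ftl_observer_moments = 0  # total observer-moments if there is no FTL
relative_leads = []          # (t_2nd - t_1st) / t_1st, per universe

for _ in range(N_UNIVERSES):
    times = sorted(arrival_time() for _ in range(N_CIV))
    t_first, t_second = times[0], times[1]
    relative_leads.append((t_second - t_first) / t_first)

    first_observer_moments += 1
    ftl_observer_moments += 1          # FTL doom: only the first civ observes
    no_ftl_observer_moments += N_CIV   # no FTL: every arrival observes

relative_leads.sort()
median_lead = relative_leads[len(relative_leads) // 2]

print(f"P(you are the first civ | FTL doom) = "
      f"{first_observer_moments / ftl_observer_moments:.2f}")
print(f"P(you are the first civ | no FTL)   = "
      f"{N_UNIVERSES / no_ftl_observer_moments:.2f}")
print(f"median lead of 1st over 2nd arrival (fraction of elapsed time): "
      f"{median_lead:.2f}")
```

The first probability is 1 by construction, which is exactly the anthropic point: conditional on observing anything at all in the FTL-doom branch, you are the first civilisation. The median-lead figure is the toy analogue of the Grabby Aliens claim about the first arrival’s head start.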
This is sort of an inverse version of Deadly Probes (which has, AIUI, been basically ruled out in the normal-Great-Filter sense by “if this were true we should already be dead” concerns); in this hypothesis we are the ones fated to release Deadly Probes that kill everything in the universe, which prevents any observations except our own observations of nothing. It also resurrects the Doomsday Argument, as in this scenario there are never any sentient aliens anywhere or anywhen to drown out the doom signal; indeed, insofar as you believe it, the Doomsday Argument would appear to argue for this scenario being true.
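For reference, a sketch of the standard Gott/Leslie-style Doomsday calculation this leans on, using the conventional rough figure of about 10^11 humans born so far (the exact numbers are the usual ones, not the post’s):

```latex
% Standard Doomsday sketch (Gott/Leslie style); figures are the conventional rough ones.
% Self-sampling step: treat your birth rank r as uniform among all N humans who
% will ever exist, with N_past ~ 10^{11} humans born so far.
\[
  P\!\left(\tfrac{r}{N} > 0.05\right) = 0.95
  \quad\Longrightarrow\quad
  P\!\left(N < 20\,r\right) = 0.95 ,
\]
\[
  \text{and with the observed rank } r \approx N_{\text{past}} \approx 10^{11}:\qquad
  N \;\lesssim\; 2 \times 10^{12} \text{ humans ever, at 95\% confidence.}
\]
% The post's point: sentient aliens would enlarge the reference class and dilute
% this bound; in the FTL-UFAI scenario there are none, so the bound stands.
```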
Obvious holes in this:
1) FTL may be impossible, or limited to non-light-cone-breaking versions (e.g. wormholes that have to be towed at STL). Without light-cone-breaking FTL, an expanding UFAI can only sterilise its own future light-cone, so non-first species still arise elsewhere and make non-Fermi-Paradox observations even if UFAI catastrophe is inevitable.
2) The universe might be too large for exponential growth to fill it up. It doesn’t seem plausible for self-replication to be faster than exponential in the long run, and if the universe is sufficiently large (like, bigger than 10^10^30 or so?) then it’s impossible, even with FTL, to kill everything, and again the scenario doesn’t work (see the back-of-the-envelope check after this list). I suppose an exception would be if there were some act that literally ends the entire universe immediately (thus killing everything without any need to replicate). Also, an extremely large universe would require an implausibly strong Great Filter for us to actually be the first this late.
3) AI doom might not happen. If humanity is asserted to be not AI-doomed, then the argument turns on its head: our existence (at least to the extent that we might not be the first) argues that either light-cone-breaking FTL is impossible or AI doom is a highly unusual thing to happen to a civilisation. This is sort of a weird point to mention, since the whole scenario is an Outside View argument that AI doom is likely, but how seriously to condition on these sorts of arguments is a matter of some dispute.
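The back-of-the-envelope check promised in point 2. This is a minimal sketch under an arbitrary (and generously fast) assumed rate of one doubling per second; the 10^10^30 figure is the one from the post. The key fact is that the number of doublings needed grows only logarithmically with the number of sites to fill:

```python
import math

# Back-of-the-envelope check on point 2.  The "one doubling per second" rate is
# an arbitrary, generously fast assumption; 10^(10^30) is the post's figure.
# To fill N sites by repeated doubling you need at least log2(N) doublings.

SITES_OBSERVABLE = 10**80                           # ~ atoms in the observable universe
doublings_observable = math.log2(SITES_OBSERVABLE)  # ~ 266

# N = 10**(10**30) is far too large to construct directly, so compute
# log2(N) = 10**30 * log2(10) analytically.
doublings_huge = 1e30 * math.log2(10)               # ~ 3.3e30

SECONDS_PER_YEAR = 3.15e7
years_needed = doublings_huge / SECONDS_PER_YEAR    # at one doubling per second

print(f"doublings to fill 10^80 sites:       {doublings_observable:.0f}")
print(f"doublings to fill 10^(10^30) sites:  {doublings_huge:.2e}")
print(f"time at one doubling per second:     {years_needed:.2e} years")
```

So a 10^80-site universe (roughly the atom count of the observable universe) falls to a few hundred doublings, while filling 10^10^30 sites would take on the order of 10^23 years even at one doubling per second; at that scale exponential replication really does stop being enough.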