Considerations similar to Kenzi’s have led me to think that if we want to beat potential filters, we should be accelerating work on autonomous self-replicating space-based robotics. Once we do that, we will have beaten the Fermi odds. I’m not saying that it’s all smooth sailing from there, but it does guarantee that something from our civilization will survive in a potentially “showy” way, so that our civilization will not be a “great silence” victim.
The argument is as follows: any near-future great filter for humankind is probably self-produced, arising from some development path we can call QEP (quiet extinction path). Call the path to self-replicating autonomous robots FRP (Fermi robot path). Since completing FRP would not produce a great filter, QEP ≠ FRP; FRP is an independent path running parallel to QEP, so in effect the two development paths are in a race. We can't implement a policy of slowing QEP down, because we are unable to identify QEP uniquely. But since we know that QEP ≠ FRP, and that completing FRP beats Fermi's silence, our best strategy is to accelerate FRP and invest substantial resources in robotics that will ultimately produce Fermi probes. Speed matters because FRP must complete before QEP runs its course, and we have very poor information about QEP's timeline and nature.
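To make the race intuition concrete, here is a minimal Monte Carlo sketch; the distributions, medians, and the assumption that the two timelines are independent are all invented for illustration, not estimates of anything.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical QEP completion times (years from now): a wide lognormal,
# reflecting the claim that we have very bad information about QEP.
qep_years = rng.lognormal(mean=np.log(80), sigma=1.0, size=N)

def p_frp_first(frp_median_years):
    """Probability that FRP completes before QEP, for a given FRP median timeline."""
    frp_years = rng.lognormal(mean=np.log(frp_median_years), sigma=0.5, size=N)
    return (frp_years < qep_years).mean()

# Accelerating FRP (shrinking its median timeline) raises the chance it wins the race.
for median in (120, 60, 30):
    print(f"FRP median {median:>3} yr -> P(FRP completes first) ~ {p_frp_first(median):.2f}")
```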
I agree with most of Kenzi’s argument, which I had not heard before.
One concern that comes to mind is that a singleton is, by definition, an entity that can halt evolution at all lower levels. An AI that builds gray-goo nanotech and consumes every living thing on Earth destroys all of those levels; it doesn't need to expand into the observable universe to qualify as a singleton.
More generally, I can see many ways an AI might destroy a civilization without then departing on a quest across the universe once it has finished the job, and the great filter argument rules out none of them.
It's also possible that FAI necessarily requires the ability to form human-like moral relationships, not only with humans but also with nature. Such an FAI might not treat the universe as its cosmic endowment, and any von Neumann probes it sent out might remain inconspicuous.
Like the great filter argument, this consideration would reduce the probability of "rogue singletons" given the Fermi observation (and it also counts against oracles, since human morality is unreliable).
Which means that if we buy this [great filter derivation] argument, we should put a lot more weight on the category of ‘everything else’, and especially the bits of it that come before AI. To the extent that known risks like biotechnology and ecological destruction don’t seem plausible, we should more fear unknown unknowns that we aren’t even preparing for.
True in principle. I do think the known risks don't cut it: some of them might be fairly deadly, but even in aggregate they don't look nearly deadly enough to contribute much to the great filter. Given the uncertainties in the great filter analysis, that conclusion mostly feeds back into the analysis itself, increasing my probability that the filter is in fact behind us.
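For a sense of the gap, here is a back-of-the-envelope sketch with entirely made-up per-risk extinction probabilities and an independence assumption; the point is only that plausible-looking numbers for known risks aggregate to something far weaker than a great filter needs to be.

```python
# Illustrative, invented extinction probabilities for known risks; not estimates.
known_risks = {
    "nuclear war": 0.05,
    "engineered pandemic": 0.10,
    "ecological collapse": 0.05,
    "nanotech accident": 0.05,
}

# Assume (unrealistically) that the risks are independent.
p_survive = 1.0
for p in known_risks.values():
    p_survive *= 1.0 - p

print(f"Aggregate extinction probability: {1 - p_survive:.2f}")  # ~0.23
# A late great filter has to stop very nearly every civilization that reaches
# our stage, i.e. something like a 0.999+ failure rate, not ~0.23.
```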
Your SIA doomsday argument (as Michael Vassar pointed out in the comments) has interesting interactions with the simulation hypothesis: since we don't know whether we're in a simulation, the Bayesian update in step 3 can't be performed as confidently as you state. Given this, "we really can't see a plausible great filter coming up early enough to prevent us from hitting superintelligence" is itself also evidence that this environment is a simulation.
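For concreteness, here is a toy version of that update as I read it, with invented observer counts: SIA weights each filter hypothesis by the expected number of observers in our epistemic situation, and an unknown population of simulated observers at our apparent stage washes that weighting out.

```python
def sia_posterior(prior_late, observers_early, observers_late):
    """Posterior probability of a late filter under SIA: prior times the
    expected number of observers in our situation, renormalized."""
    w_early = (1 - prior_late) * observers_early
    w_late = prior_late * observers_late
    return w_late / (w_early + w_late)

# No simulations: a late filter implies ~1000x more observers at our stage
# (numbers invented), so SIA pushes hard toward the filter being ahead of us.
print(sia_posterior(0.5, observers_early=1, observers_late=1000))  # ~0.999

# If most observers like us could be simulations, both hypotheses carry a
# large, similar count of simulated observers and the update loses its force.
print(sia_posterior(0.5, observers_early=1 + 10_000, observers_late=1000 + 10_000))  # ~0.52
```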
What do you think of Kenzi’s views?