If I understand Eliezer’s view, it’s that we can’t be extremely confident of whether artificial superintelligence or perilously advanced nanotechnology will come first, but (a) there aren’t many obvious research projects likely to improve our chances against grey goo, whereas (b) there are numerous obvious research projects likely to improve our chances against unFriendly AI, and (c) inventing Friendly AI would solve both the grey goo problem and the uFAI problem.
Cheer up, the main threat from nanotech may be from brute-forced AI going FOOM and killing everyone long before nanotech is sophisticated enough to reproduce in open-air environments.
The question is what to do about nanotech disaster. As near as I can figure out, the main path into [safety] would be a sufficiently fast upload of humans followed by running them at a high enough speed to solve FAI before everything goes blooey.
But that’s already assuming pretty sophisticated nanotech. I’m not sure what to do about moderately strong nanotech. I’ve never really heard of anything good to do about nanotech. It’s one reason I’m not directing much attention there.
Considering … please wait … tttrrrrrr … prima facie, Grey Goo scenarios may seem more likely simply because they make better “Great Filter” candidates: a near-arbitrary Foomy would spread out in all directions at relativistic speeds, whereas with self-replicators there is no overarching agenty will to accelerate them out across space (through the insulating layer of sparse interstellar material).
So if we approached x-risks through the prism of their consequences (extinction, hence no discernible aliens) and then reasoned our way back to our present predicament, we would note that within AI power hierarchies (AGI and up) there are few distinct long-term dan-ranks (most such ranks would only be intermediary steps while the AI falls “upwards”), whereas it is much more conceivable that there are self-replicators which can e.g. transform enough carbon into carbon copies (of themselves) to render a planet uninhabitable, but which lack the oomph (and the agency) to do the same to their light cone.
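(A toy back-of-the-envelope sketch of that planet-scale claim; every number below is an assumption pulled out of a hat, not a prediction. The point is just that with exponential doubling, only a few dozen doublings separate a seed mass from a biosphere’s worth of carbon, so the timescale is dominated by the doubling time rather than by the size of the target.)

```python
import math

# Toy Fermi estimate, not a prediction: all parameters are assumptions.
# How many doublings would exponentially replicating nanobots need to turn
# a biosphere's worth of carbon into copies of themselves?
seed_mass_kg = 1.0              # assumed initial mass of replicators
biosphere_carbon_kg = 5.5e14    # rough order of magnitude for Earth's biospheric carbon
doubling_time_hours = 10.0      # assumed; open-air replication could be far slower

doublings = math.log2(biosphere_carbon_kg / seed_mass_kg)
total_days = doublings * doubling_time_hours / 24

print(f"doublings needed: {doublings:.0f}")            # ~49
print(f"time at assumed rate: {total_days:.0f} days")  # ~20
```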
Then I thought that Grey Goo may yet be more of a setback, a restart, not the ultimate planetary tombstone. Once everything got transformed into resident von Neumann machines, evolution amongst those copies would probably occur at some point, until eventually there may be new macroorganisms organized from self-replicating building blocks, which may again show significant agency and turn their gaze towards the stars.
Then again (round and round it goes), Grey Goo would still remain the better transient Great Filter candidate (and thus more likely than uFAI when viewed through the Great Filter spectroscope), simply because of the time scales involved. Assuming the Great Filter is in fact an actual absence of highly evolved civilizations in our neighborhood (as opposed to just hiding or other shenanigans), Grey Goo biosphere-resets may stall the Kardashev climb sufficiently to explain us not having witnessed other civs yet. Also, Grey Goo transformations may burn up all the local negentropy (nanobots don’t work for free), precluding future evolution.
Anyways, I agree that FAI would be the most realistic long-term guardian against accidental nanogoo (ironically, also uFAI).
My own suspicion is that the bulk of the Great Filter is behind us. We’ve awoken into a fairly old universe. (Young in terms of total lifespan, but old in terms of maximally life-sustaining years.) If intelligent agents evolve easily but die out fast, we should expect to see a young universe.
We can also consider the possibility of stronger anthropic effects. Suppose intelligent species always succeed in building AGIs that propagate outward at approximately the speed of light, converting all life-sustaining energy into objects or agents outside our anthropic reference class. Then any particular intelligent species Z will observe a Fermi paradox no matter how common or rare intelligent species are, because if any other high-technology species had arisen first in Z’s past light cone it would have prevented the existence of anything Z-like. (However, the smaller the Past Filter is, the younger the universe that species in this scenario should expect to observe.)
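(A minimal Monte Carlo sketch of that parenthetical, under a deliberately crude assumption: collapse the light-cone geometry into “the first civilization to arise launches a near-light-speed wave that precludes all later observers.” Civilizations arise as a Poisson process, so the lone observer’s arrival time is exponentially distributed, and a weaker Past Filter, i.e. a higher arrival rate, means that observer wakes up in a younger universe. The rates below are made-up illustrative values.)

```python
import random

# Toy model, not cosmology: all rates are illustrative assumptions.
# Civilizations arise as a Poisson process in time; only the FIRST arrival
# ever observes anything, because its AGI wave sterilizes the future light cone.
def mean_observed_age_gyr(rate_per_gyr, trials=100_000):
    # First arrival time of a Poisson process is exponentially distributed.
    return sum(random.expovariate(rate_per_gyr) for _ in range(trials)) / trials

for rate in (0.1, 1.0, 10.0):   # arrivals per Gyr; higher rate = weaker Past Filter
    age = mean_observed_age_gyr(rate)
    print(f"rate {rate:>4}/Gyr -> mean observed universe age ~ {age:.1f} Gyr")
```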
So grey goo creates an actual Future Filter by killing its creators, but hyper-efficient hungry AGI creates an anthropic illusion of a Future Filter by devouring everything in its observable universe except the creator species. (And possibly devouring the creator species too; that’s unclear. Evolved alien values are less likely to eat the universe than artificial unFriendly-relative-to-alien-values values are, but perhaps not dramatically less likely; and unFriendly-relative-to-creator AI is almost certainly more common than Friendly-relative-to-creator AI.)
Once everything got transformed into resident von Neumann machines, evolution amongst those copies would probably occur at some point, until eventually there may be new macroorganisms organized from self-replicating building blocks, which may again show significant agency and turn their gaze towards the stars.
Probably won’t happen before the heat death of the universe. The scariest thing about nanodevices is that they don’t evolve. A universe ruled by nanodevices is plausibly even worse (relative to human values) than one ruled by uFAI like Clippy, because it’s vastly less interesting.
(Not because paperclips are better than nanites, but because there’s at least one sophisticated mind to be found.)