Other nano-risks aren’t necessarily extinction risks, though. And while I’m sort of worried that someone might secretly use nano to rewire the brains of important people, and later of everyone, into absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad), or something along those lines, it doesn’t seem obvious that there is anything effective we could spend money on now that would help protect us, unlike with asteroids. At least not at the levels of spending that asteroid danger prevention could usefully absorb.
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000.
You have to consider military nanotech. You have to consider nano-terrorism and the balance of attack versus defence, you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?), etc etc etc.
I am sure that there are at least 3 nano-risk scenarios documented on the internet that you haven’t even thought of, which instantly invalidates claiming a figure as low as, say, 1⁄5000 for the extinction risk before you have considered them.
This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000.
The question wasn’t whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers.
There doesn’t seem to be any good way to spend money so that all possible nano risks will be mitigated (other than lobbying to ban all nano research everywhere, and I’m far from convinced that the potential dangers of nano are greater than the benefits). I’m not even sure there is a good way to spend money on mitigation of any single nano risk.
The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use, we can’t do all that much useful research in that direction, and spending after we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent previously will not affect the result all that much.
you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?)
Yes, I did, that’s one of the most obvious ones.
It’s not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I’m not sure whether there is any need to separate isotopes in whatever machines pre-process materials in/for nano-assemblers, or whether those machines would lend themselves to being modified for that. Assuming they do, you’d need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don’t see how spending money now could help in any way.
This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.
I’m not sure the probability of a serious error in the best available argument against something can be considered a lower bound on the probability you should assign to it in general. In the case of the LHC, if there is a 1 in 20 chance of a mistake that doesn’t really change the conclusion much, a 1 in 100 chance of a mistake such that the real probability is 1 in 100,000, and a 1 in 10,000 chance of a mistake such that the real probability is 1 in 1,000, then 1 in a million could still be roughly the correct estimate.
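To spell out the arithmetic with those illustrative numbers (assuming the three mistake scenarios are disjoint and the remaining weight goes to “no serious mistake”):
\[
P(\text{disaster}) \approx 0.9399 \cdot 10^{-6} + 0.05 \cdot 10^{-6} + 0.01 \cdot 10^{-5} + 0.0001 \cdot 10^{-3} \approx 1.2 \times 10^{-6},
\]
which is still within a factor of two of the claimed 1 in a million.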
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000
The 1⁄5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go into finding the very large asteroids also help track the others, reducing the chance of human life lost even outside existential risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you’ve made a very good point above about the many non-gray-goo scenarios that make nanotech a severe potential existential risk. So I’ll agree that if one compares the probability of a nanotech existential risk scenario to the probability of a meteorite existential risk scenario, the nanotech scenario is more likely.
I find your point about the impact of nanotech on nuclear proliferation particularly disturbing. The potential for nanotech to greatly increase the efficiency of enriching uranium seems deeply worrisome, since enrichment is really the main practical limitation in building fission weapons.
Upvoted for updating. I agree that smaller asteroids are an important consideration on the space side; we expect about one Tunguska-scale event per century, I believe, and as far as I know each such event stands a ~5% chance of hitting a populated area. Averting that 5% chance of the next Tunguska hitting a populated area would be a good thing.
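Taken at face value, those two rough figures work out to roughly one populated-area Tunguska-scale strike every couple of thousand years in expectation:
\[
\frac{1 \text{ event}}{100 \text{ years}} \times 0.05 \approx \frac{1 \text{ populated-area strike}}{2{,}000 \text{ years}}.
\]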
Accidental grey goo doesn’t seem plausible, and purposeful destructive use of nanotech doesn’t necessarily fall in that category. We can have nanomachines that act as bioweapons, infecting people and killing them.
Are you disagreeing with something I said? I’m not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that’s necessary). Nanotech might be able to do things that a virus can’t, but that would be the sort of thing I mentioned. Anyway I don’t see how we could effectively spend money now to prevent either.
A lot of it seems to hinge on the probability you assign to those threats being developed in the next century.
I agree with this. I disagree that there are no clear non-goo extinction risks associated with nano, and gave an example of one.