Where are you getting your estimates of risk probability from? If by Nano you mean a nanotech gray goo scenario, then frankly that seems much less likely than 1⁄5000 in the next century. People who actually work with nanotech consider that sort of scenario extremely unlikely for a variety of reasons, including that there is too much variation in common chemical compounds to build nanotech devices that act as universal assimilators, and no clear way for such devices to obtain an efficient energy supply. One might argue that a very intelligent AI could solve those problems, but in that case you’re really talking about the AI problem, and nanotech becomes incidental to it.
I’m not sure what you mean by “bio”, but if you mean biological threats, then this seems unlikely to be an existential-level threat for the simple reason that it is very rare for a species to be wiped out by a pathogen. We might be able to make a deliberately dangerous pathogen, but that requires both motivation and expertise. The set of people with both the desire and the capability to construct such a thing is likely small, and will likely remain small for the indefinite future.
I assume “Bio, Nano, AI” to mean “any global existential threats brought on by human technology”, which is a big disjunction with plenty of unknown unknowns, and we already have one example (nuclear weapons) that could not have plausibly been predicted 50 years beforehand. Even if you discount the probabilities of hard AI takeoff or nanotech development, you’d have to have a lot of evidence in order to put such a small probability on any technological development of the next hundred years threatening global extinction.
As someone who does largely discount the threats mentioned (I believe that the operationally-significant probability for foom/grey goo is order 10^-3/10^-5, and the best-guess probability is order 10^-7/10^-7), I still endorse the logic above.
Er, maybe I was being unclear. Even if you discount a few specific scenarios, where do you get the strong evidence that no other technological existential risk with probability bigger than .001 will arise in the next hundred years, given that forecasters a century ago would have completely missed the existential risk from nuclear weapons?
I agree that cataloging near-earth objects is obviously worth a much bigger investment than it currently receives, but I think there is an even greater need for a well-funded group of scientists from various fields to consider these technological existential risks.
If I wanted to exterminate the human race using nanotechnology, there are two methods I would think about. First method, airborne replicators which use solar power for energy and atmospheric carbon dioxide for feedstock. Second method, nanofactories which produce large quantities of synthetic greenhouse gases. Under the first method, one should imagine a cloud of nanodust that just keeps growing until most of the CO2 is used up (at which point all plants die). Under the second method, the objective is to heat the earth until the oceans boil.
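To give a sense of scale for the first method, here is a rough back-of-envelope sketch (my own round numbers, not figures from the comment) of how many doublings an exponentially replicating cloud would need in order to convert the carbon in atmospheric CO2 into copies of itself:

```python
import math

# All figures below are rough, round-number assumptions, not authoritative data.
atmosphere_mass_kg = 5.1e18            # total mass of Earth's atmosphere
co2_mass_fraction = 6e-4               # ~400 ppm CO2 by volume is roughly 600 ppm by mass
carbon_fraction_of_co2 = 12.0 / 44.0   # carbon's share of a CO2 molecule by mass
replicator_mass_kg = 1e-15             # assume a femtogram-scale replicator

carbon_available_kg = atmosphere_mass_kg * co2_mass_fraction * carbon_fraction_of_co2
doublings = math.log2(carbon_available_kg / replicator_mass_kg)

print(f"carbon in atmospheric CO2: ~{carbon_available_kg:.1e} kg")   # ~8e14 kg
print(f"doublings needed to convert it all: ~{doublings:.0f}")       # ~100
```

Under these assumptions the answer is only on the order of a hundred doublings, which is why the scenario turns almost entirely on whether such a replicator can be built at all.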
For the airborne replicator, the obvious path is “diamondoid mechanosynthesis”, as described in papers by Drexler, Merkle, Freitas and others. This is the assembly of rigid nanostructures, composed mostly of carbon atoms, through precisely coordinated deposition of small reactive clusters of atoms. To assemble diamond in this way, one might want a supply of carbon chains, which remain sequestered in narrow-diameter buckytubes until they are wanted, with the buckytubes being positioned by rigid nanomechanisms, and the carbon chains being synthesized through the capture and “cracking” of CO2 much as in plants. The replicator would have a hard-vacuum interior in which the component assembly of its progeny would occur, and a sliding or telescoping mechanism allowing temporary expansion of this interior space. The replicator would therefore have at least two configurations: a contracted minimal one, and an expanded maximal one large enough to contain a new replicator assembled in the minimal configuration.
There are surely hundreds or thousands of challenging subproblems involved in the production of such a nanoscale doomsday device—power supply, environmental viability (you would want it to disperse but remain adrift), what to do with contaminants, to say nothing of the mechanisms and their control systems—but it would be a miracle if it were literally thermodynamically impossible to make such a thing. Cells do it, and yes, they are aqueous bags of floppy proteins rather than evacuated diamond mechanisms, but I would think that has more to do with the methods available to DNA-based evolution than with any physical impossibility of free-living rigid nanobots. The Royal Society report to which you link hardly examines this topic. It casually cites a few qualitative criticisms made by Smalley and others, and attaches some significance to a supposed change of heart by Drexler—but in fact Drexler simply changed his emphasis, from accident to abuse. There is no reason to expect free-living rogue replicators to emerge by accident from nanofactories, because such industrial assemblers will be tailored to operate under conditions very different from the world outside the factory. But there has been no concession that free-living nanomechanical replicators are simply impossible, and people like Freitas and Merkle, who continue to work on the details of mechanosynthesis, have many times expressed the worry that it looks alarmingly easy (relatively speaking) to design such devices.
As for my second method, you don’t even need free-living replicators, just mass production of the greenhouse-gas nanofactories, and a supply of appropriate ingredients.
I’m not sure if this counts as an existential threat, but I’m more concerned about a biowar wrecking civilization—enough engineered human and food diseases that civilization is unsustainable.
I can’t judge likelihood, but it’s at least a combination of plausible human motivations and technology. Your tech is plausible, but it’s hard to imagine anyone wanting not just to wipe out the human race, but also to do such damage to the biosphere.
There are a few people who’d like the human race to be gone (or at least who say they do), but as far as I know, they all want plants and animals to continue without being affected by people.
There are definitely people who would destroy the whole world if they could. Berserkers, true nihilists, people who hate life, people who simply have no empathy, dictators having a bad day. Even a few dolorous “negative utilitarians” exist who might do it as an act of mercy. But the other types are surely more numerous.
Massive overconfidence. You need to go closer to 50⁄50.
Where is your estimate coming from?
My estimate comes from the following: 1) Experts suggest that the possibility is very unlikely. For example, the Royal Society’s official report on the dangers of nanotech concluded that this sort of scenario was extremely unlikely. See report here (and good Bayesians should listen to subject-matter experts). 2) Every plausible form of nanotech yet investigated shows no capability of gray gooing. For example, consider DNA nanotechnology, an area in which we’ve had a fair bit of success with both computation and constructing machines. Yet these systems work only in a narrow range of pH values and temperatures, and often require specific specialized enzymes. Also, as with any organic nanotech, they will face competition and potentially predation from microorganisms. Inorganic nanotech faces other problems, such as less available energy and far fewer options for chemical construction; avoiding carbon already reduces the grey goo potential a lot.
But how did you translate “very unlikely” into “less than 1 in 5000”? Why not say 1%? Or 3%? Or 1 in 10^100?
I think that I need to do an article on why one shouldn’t be so keen to assign very low probabilities to events where the only evidence is extrapolative.
Still depends on the nature of the event (Russell’s teapot). There is no default level of certainty, no magical 50⁄50.
Sure, for cases where arbitrary complexity has been added, the “default level of certainty” is 2^-(Complexity).
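A minimal illustration of that heuristic, with purely made-up numbers: treat a scenario as a conjunction of k extra independent, roughly 50⁄50 details, and each added detail halves the prior.

```python
# Purely illustrative: a conjunction of k independent ~50/50 details gets a
# prior penalty of 2**-k -- the "2^-(Complexity)" heuristic mentioned above.
for k in (1, 5, 10, 13):
    print(f"{k:>2} extra binary details -> prior factor 2^-{k} = 1 in {2**k}")
# Thirteen such 50/50 assumptions already push the factor below 1 in 5000.
```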
Unfortunately, you often have to rule intuitively. How does complexity figure in the estimation of probability of gray goo? Useful heuristic, but no silver bullet.
I think that one has to differentiate between the perfect unbiased individual rationalist who uses heuristics but ultimately makes the final decision from first principles if necessary, and the semi-rationalist community, where individual members vary in degree of motivated cognition.
The latter works better with more rigid rules and less leeway for people to believe what they want. It’s a tradeoff: random errors induced by rough-and-ready estimates, versus systematic errors induced by wishful thinking of various forms.
Less than 1 in 5000 sounds about right to me. I’m much more worried about other nano-dangers (e.g. clandestine brainwashing) than grey goo.
Not only is there the problem of technological feasibility, but even if it’s possible there is the still larger problem of economic feasibility. Molecular von Neumann Machines, if possible, should be vastly more difficult to develop than the far more efficient static nano-assemblers that operate in a controlled environment (probably vacuum?) and are integrated into an economy with mixed nano- and macrotech, taking advantage of specialization, economies of scale, etc. Static nano-assemblers should already be ubiquitous long before molecular von Neumann Machines start to become feasible. So why develop the latter in the first place? For medical applications, specialized medical nanobots running on glucose and cheaply mass-produced in the static nano-assemblers should also beat them. They’d be useful in space and for sending to other planets, but there wouldn’t be all that much money in that, and sending a larger probe with nano-assemblers and assorted equipment would also do the job.
Since there would be no overwhelming incentive against outlawing the development of MvNMs, doing so would be feasible, and considering how easy it should be to scare people with the grey goo scenario in such a world, very likely.
That pretty much leaves secret development as some sort of weapon, which would make grey goo defense a military issue. Nano-assemblers should be much better at producing nano-hunters and nano-killers (or more assemblers, mining equipment, planes, rockets, bombs) than MvNMs are at producing more of themselves, and nano-hunters and nano-killers much better at finding and destroying them; there would also be the option of using macroscopic weapons against larger concentrations.
The original discussion was not concerned with the dangers of grey goo per se, but with any extinction risk associated with nanotech. Remember, the original question, the point of the discussion, was whether asteroids were irrelevant as an x-risk.
So whilst you make good points, it seems that we now have a lost-purpose debate rather than a purposeful collaborative discussion.
Other nano-risks aren’t necessarily extinction risks, though. And while I’m somewhat worried that someone might secretly use nano to rewire the brains of important people, and later of everyone, into absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad), or something along those lines, it doesn’t seem obvious that there is anything effective we could spend money on now that would help protect us, unlike with asteroids, at least at the levels of spending that asteroid danger prevention could usefully absorb.
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000.
You have to consider military nanotech. You have to consider nano-terrorism and the balance of attack versus defence, you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?), etc etc etc.
I am sure that there are at least 3 nano-risk scenarios documented on the internet that you haven’t even thought of, which instantly invalidates claiming a figure as low as, say, 1⁄5000 for the extinction risk before you have considered them.
This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.
The question wasn’t whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers.
There doesn’t seem to be any good way to spend money so that all possible nano risks are mitigated (other than lobbying to ban all nano research everywhere, and I’m far from convinced that the potential dangers of nano outweigh the benefits). I’m not even sure there is a good way to spend money on mitigation of any single nano risk.
The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use, we can’t do much useful research in that direction, and spending once we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent beforehand will not affect the result all that much.
Yes, I did think about the effect of nanotech on nuclear proliferation; that’s one of the most obvious ones. It’s not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I’m not sure whether there is any need to separate isotopes in whatever machines pre-process materials in or for nano-assemblers, or whether those machines would lend themselves to being modified for that. Assuming they do, you’d need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don’t see how spending money now could help in any way.
I’m not sure the probability of a serious error in the best available argument against something can be treated as a lower bound on the probability you should assign to it overall. In the case of the LHC: if there is a 1 in 20 chance of a mistake that doesn’t really change the conclusion much, a 1 in 100 chance of a mistake such that the real probability is 1 in 100,000, and a 1 in 10,000 chance of a mistake such that the real probability is 1 in 1,000, then 1 in a million could still be roughly the correct estimate.
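Spelling out the arithmetic in that reply with the same hypothetical numbers (a sketch of the mixture calculation, not an endorsement of any particular figures):

```python
# Mixture over "how badly might the safety argument be mistaken?", using the
# hypothetical numbers from the comment above.
# Each branch: (probability of that kind of mistake, disaster probability if so)
branches = [
    (1 / 20,     1e-6),   # mistake that doesn't really change the conclusion
    (1 / 100,    1e-5),   # mistake such that the real risk is 1 in 100,000
    (1 / 10_000, 1e-3),   # mistake such that the real risk is 1 in 1,000
]
p_argument_sound = 1 - sum(p for p, _ in branches)
p_disaster = p_argument_sound * 1e-6 + sum(p * risk for p, risk in branches)
print(f"overall disaster probability ~ {p_disaster:.1e}")   # ~1.2e-6
# Still of order one in a million: a >1/1000 chance of *some* error does not,
# by itself, force the overall estimate far above the argument's conclusion.
```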
The 1⁄5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go into finding the very large asteroids also help track the others, reducing the chance of lives lost even outside existential-risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you’ve made a very good point above about the many non-gray-goo scenarios that make nanotech a severe potential existential risk. So I’ll agree that if one compares the probability of a nanotech existential-risk scenario with that of a meteorite existential-risk scenario, the nanotech one is more likely.
Your point about the impact of nanotech on nuclear proliferation I find particularly disturbing. The potential for nanotech to greatly increase the efficiency of uranium enrichment seems deeply worrisome, since enrichment is really the main practical limitation on building fission weapons.
Upvoted for updating. I agree that smaller asteroids are an important consideration for space; we expect about one Tunguska event per century, I believe, and each stands a ~5% chance of hitting a populated area as far as I know. Averting that 5% chance of the next Tunguska hitting a populated area is a good thing.
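For reference, here is how per-century figures like these follow from average event rates; the recurrence intervals below are my own rough assumptions, chosen only to be consistent with the numbers quoted in this thread:

```python
import math

def prob_at_least_one(window_years: float, mean_interval_years: float) -> float:
    """Chance of at least one event in the window, assuming a Poisson process."""
    return 1 - math.exp(-window_years / mean_interval_years)

# Assumed mean recurrence intervals (rough orders of magnitude only):
p_big_impact = prob_at_least_one(100, 500_000)   # >1 km impactor roughly every 500,000 years
print(f">1 km impact within a century: ~{p_big_impact:.1e}")   # ~2e-4, i.e. about 1 in 5000

expected_tunguskas = 100 / 100                       # roughly one Tunguska-class event per century
expected_populated_hits = 0.05 * expected_tunguskas  # if ~5% of such events hit populated areas
print(f"expected populated-area Tunguska hits per century: ~{expected_populated_hits:.2f}")
```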
A lot of it seems to hinge on the probability you assign to those threats being developed in the next century.
Accidental grey goo doesn’t seem plausible, and purposeful destructive use of nanotech doesn’t necessarily fall in that category. We can have nanomachines that act as bioweapons, infecting people and killing them.
Are you disagreeing with something I said? I’m not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that’s necessary). Nanotech might be able to do things that a virus can’t, but that would be the sort of thing I mentioned. Anyway I don’t see how we could effectively spend money now to prevent either.
I agree with this. I disagree that there are no clear non-goo extinction risks associated with nano, and gave an example of one.