In general, the factors that govern the macroscopic strength of materials can often have surprisingly little to do with the strength of the bonds holding them together. A big part of a material’s tensile strength is down to whether it forms cracks and how those cracks propagate. I predict many LWers would enjoy reading The New Science of Strong Materials, which is an excellent introduction to materials science and its history. (Cellulose is mentioned, and the most frequent complaint about it as an engineering material is its tendency to absorb water.)
It’s actually not clear to me why Yudkowsky thinks that ridiculously high macroscopic physical strength is so important for establishing an independent nanotech economy. Is he imagining that trees will be out-competed by solar collectors rising up on stalks of diamond taller than the tallest tree trunk? But the trees themselves can be consumed for energy, and to achieve this, nanobots need only reach them on the ground. Once the forest has been eaten, a solar collector lying flat on the ground works just as well. One legitimate application for covalently bonded structures is operating at very high temperatures, which would cause ordinary proteins to denature. In those cases the actual strength of the individual bonds does matter more.
It’s actually not clear to me why Yudkowsky thinks that ridiculously high macroscopic physical strength is so important for establishing an independent nanotech economy.
Why do you think Yudkowsky thinks this? To me this whole conversation about material strength is a tangent from the claim that Drexlerian nanotech designed by a superintelligence could do various things way more impressive than biology.
To me this whole conversation about material strength is a tangent from the claim that Drexlerian nanotech designed by a superintelligence could do various things way more impressive than biology.
I think this interpretation is incomplete. Being able to build a material that’s much stronger than biological materials would be impressive in an absolute sense, but it doesn’t imply that you can easily kill everyone. Humans can already build strong materials, but that doesn’t mean we can presently build super-weapons in the sense Yudkowsky describes.
A technology being “way more impressive than biology” can either be interpreted weakly as “impressive because it does something interesting that biology can’t do” or more strongly as “impressive because it completely dominates biology on the relevant axes that allow you to easily kill everyone in the world.” I think the second interpretation is supported by his quote that,
It should not be very hard for a superintelligence to repurpose ribosomes to build better, more strongly bonded, more energy-dense tiny things that can then have a quite easy time killing everyone.
A single generation difference in military technology is an overwhelming advantage. The Lockheed Martin F-35 Lightning II cannot be missile-locked by an adversary beyond 20-30 miles. Conversely, it can see and weapons-lock an opposing 4th-generation fighter from more than 70 miles away and fire a beyond-visual-range missile that is almost impossible for a manned fighter to evade. In realistic scenarios with adequate preparation and competent deployment, a generation difference in aircraft can lead to 20:1 K/D ratios. 5th-generation fighters are much better than 4th-generation fighters, which are much better than 3rd-generation fighters, and so on; the same goes for tanks, ships, artillery, etc. This difference is primarily technological.
It is not at all unlikely that a machine superintelligence could rapidly design new materials, artificial organisms, and military technologies vastly better than those humans can construct today. These could indeed be said to form superweapons.
The idea that AI-designed nanomachines will outcompete bacteria and consume the world in a grey goo swarm may seem fanciful, but that is not evidence that it isn’t in the cards. Now, there are goodish technical arguments that bacteria are already at various thermodynamic limits, and as bhauth notes, Yudkowsky seems to underrate the ability of evolution-by-natural-selection to find highly optimized structures.
However, I don’t see this as enough evidence to rule out grey goo scenarios. Being at a Pareto optimum doesn’t mean you can’t be outcompeted. Evolution is much more efficient than it is sometimes given credit for, but it still seems to miss obvious improvements.
Of course, nanotech is likely a superweapon even without grey goo scenarios, so this is only a possible extreme. And finally, of course, a mechanical superintelligence possesses many advantages over biological humans, any of which may prove more relevant for a takeover scenario in the short term.
Yes! The New Science of Strong Materials is a marvelous book, highly recommended. It explains simply and in detail why most materials are at least an order of magnitude weaker than you’d expect if you calculated their strength theoretically from bond strengths: it’s all about how cracks or dislocations propagate.
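The order-of-magnitude gap described here can be sketched with Griffith’s crack criterion. A minimal back-of-envelope calculation, using illustrative textbook values for ordinary glass (Young’s modulus, surface energy, a ~1 micron surface microcrack) that I’m supplying as assumptions, not figures from the thread:

```python
import math

# Illustrative values for ordinary soda-lime glass (order-of-magnitude only).
E = 70e9        # Young's modulus (Pa)
gamma = 1.0     # surface energy (J/m^2)
a = 1e-6        # depth of a surface microcrack (m)

# Classic rule of thumb: theoretical strength from pulling bonds apart
# directly comes out at roughly E/10.
sigma_theoretical = E / 10

# Griffith's criterion: the tensile stress at which a crack of depth a
# becomes energetically favorable to propagate.
sigma_griffith = math.sqrt(2 * E * gamma / (math.pi * a))

print(f"theoretical strength ~ {sigma_theoretical / 1e9:.1f} GPa")
print(f"with a 1 micron crack ~ {sigma_griffith / 1e6:.0f} MPa")
print(f"ratio: ~{sigma_theoretical / sigma_griffith:.0f}x weaker")
```

With these numbers a single micron-scale flaw drops the predicted strength from ~7 GPa to ~200 MPa, i.e. over an order of magnitude, which is exactly the crack-propagation point the book makes.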
However, Eliezer is talking about nanotech. With nanotech, by adding the right microstructure at the right scale, you can make a composite material that actually approaches the theoretical strength (as evolution did for spider silk), and at that point bond strength does become the limiting factor.
On why this matters: physical strength is pretty important for things like combat, or challenging pieces of engineering like flight or reaching orbit. Nano-engineered carbon-carbon composites with a good fraction of the naively calculated strength of (and a lot more toughness than) diamond would be very impressive in military or aerospace applications. You’d have to ask Eliezer, but I suspect the point he’s trying to make is that if a human soldier were fighting a nanotech-engineered AI infantry bot made out of these sorts of materials, the bot would win easily.
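To make the aerospace point concrete, a quick sketch comparing "breaking length" (the strength-to-weight figure that matters for flight and orbit) across materials. All strength and density figures below are rough handbook-style values I’m supplying for illustration; "diamond (theoretical)" uses a naive bond-strength estimate of the kind discussed above, not a measured property:

```python
# Breaking length: how long a uniform rod of the material could hang
# under its own weight before snapping. Strength in Pa, density in kg/m^3.
g = 9.81  # gravitational acceleration (m/s^2)

materials = {
    "high-strength steel":    (2.0e9, 7850),
    "spider silk (dragline)": (1.3e9, 1300),
    "carbon-fiber composite": (3.5e9, 1600),
    "diamond (theoretical)":  (90e9, 3500),
}

for name, (strength, density) in materials.items():
    breaking_length_km = strength / (density * g) / 1000
    print(f"{name:24s} ~{breaking_length_km:6.0f} km")
```

On these rough numbers, steel manages tens of kilometers and spider silk already beats it several-fold, while a material retaining a large fraction of diamond’s theoretical bond strength would sit orders of magnitude above both, which is why it would matter for orbit-reaching structures.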