It’s actually not clear to me why Yudkowsky thinks that ridiculously high macroscopic physical strength is so important for establishing an independent nanotech economy.
Why do you think Yudkowsky thinks this? To me this whole conversation about material strength is a tangent from the claim that Drexlerian nanotech designed by a superintelligence could do various things way more impressive than biology.
I think this interpretation is incomplete. Being able to build a material that’s much stronger than biological materials would be impressive in an absolute sense, but it doesn’t imply that you can easily kill everyone. Humans can already build strong materials, but that doesn’t mean we can presently build super-weapons in the sense Yudkowsky describes.
A technology being “way more impressive than biology” can be interpreted either weakly as “impressive because it does something interesting that biology can’t do” or more strongly as “impressive because it completely dominates biology on the relevant axes that allow you to easily kill everyone in the world.” I think the second interpretation is supported by his quote that:
It should not be very hard for a superintelligence to repurpose ribosomes to build better, more strongly bonded, more energy-dense tiny things that can then have a quite easy time killing everyone.
A single-generation difference in military technology is an overwhelming advantage. The Lockheed Martin F-35 Lightning II cannot be missile-locked by an adversary beyond roughly 20-30 miles. Conversely, it can detect and weapons-lock an opposing 4th-generation fighter from more than 70 miles away and fire a beyond-visual-range missile that is almost impossible for a manned fighter to evade.

In realistic scenarios with adequate preparation and competent deployment, a one-generation difference in aircraft can produce kill ratios on the order of 20:1. Fifth-generation fighters are much better than fourth-generation fighters, which are much better than third-generation fighters, and so on; the same holds for tanks, ships, artillery, etc. This difference is primarily technological.
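To get an intuition for why the detection asymmetry is so lopsided, note that radar detection range scales only with the fourth root of a target’s radar cross-section (RCS), so a large RCS reduction collapses the range at which the target can be locked. The sketch below uses rough, illustrative RCS figures (order-of-magnitude numbers, not official data for any aircraft) to show the effect.

```python
# Illustrative only: detection range scales as the 4th root of radar cross-section (RCS).
# The RCS values below are rough order-of-magnitude figures, not official data.

def relative_detection_range(rcs_target, rcs_reference):
    """Ratio of maximum detection ranges for two targets seen by the same radar."""
    return (rcs_target / rcs_reference) ** 0.25

rcs_4th_gen = 5.0      # m^2, ballpark for a conventional 4th-gen fighter
rcs_5th_gen = 0.005    # m^2, ballpark "very low observable" figure

ratio = relative_detection_range(rcs_5th_gen, rcs_4th_gen)
print(f"Stealthy fighter detectable at ~{ratio:.2f}x the range of a 4th-gen fighter")
# ~0.18x: a radar that sees the conventional jet at 100 miles sees the stealthy one at ~18.
```

On those assumed numbers, a thousand-fold RCS reduction only shrinks detection range by a factor of about five or six, which is exactly the kind of asymmetry that lets one side take beyond-visual-range shots before the other side can even lock on.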
It is not at all unlikely that a machine superintelligence could rapidly design new materials, artificial organisms, and military technologies vastly better than anything humans can construct today. These could fairly be described as superweapons.
The idea that AI-designed nanomachines will outcompete bacteria and consume the world in a grey-goo swarm may seem fanciful, but seeming fanciful is not evidence that it isn’t in the cards. There are, admittedly, reasonably good technical arguments that bacteria already sit at various thermodynamic limits, and as bhauth notes, Yudkowsky seems to underrate the ability of evolution by natural selection to find highly optimized structures.
However, I don’t see this as enough evidence to rule out grey-goo scenarios. Sitting at a Pareto optimum along some axes doesn’t mean you can’t be outcompeted. Evolution is much more efficient than it is sometimes given credit for, but it still seems to miss obvious improvements.
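As a toy illustration of why even a modest edge would matter (with entirely hypothetical doubling times, not measured figures): a replicator that is only somewhat faster than an incumbent overwhelms it within days, because the advantage compounds exponentially.

```python
# Toy model: two self-replicating populations with hypothetical doubling times.
# The only point is that a modest per-generation edge compounds into total dominance.

def population(doubling_time_min, elapsed_min, initial=1.0):
    """Exponential growth: initial * 2^(elapsed / doubling_time)."""
    return initial * 2 ** (elapsed_min / doubling_time_min)

elapsed = 7 * 24 * 60                  # one week, in minutes
incumbent = population(60, elapsed)    # doubles every 60 minutes (bacteria-like)
challenger = population(50, elapsed)   # doubles every 50 minutes (a ~20% edge)

print(f"challenger / incumbent after one week: {challenger / incumbent:.2e}")
# ~1e10: the slightly faster replicator outnumbers the incumbent ten-billion-fold.
```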
Of course, nanotech would likely be a superweapon even without grey-goo scenarios, so this is only a possible extreme. And finally, a machine superintelligence (or several) possesses many advantages over biological humans, any of which may prove more relevant for a takeover scenario in the short term.