Zubon,
Your model assumes that these ‘nano-assemblers’ will be able to reproduce themselves using any nearby molecules rather than some specific kind of molecule or substance. It seems obviously unwise to invent something that could eat away any matter placed near it for the sake of self-reproduction. Why would we ever design such a thing? Even Kurt Vonnegut’s hypothetical Ice-Nine could only crystallize water, and only at certain temperatures; creating something that essentially crystallizes EVERYTHING does not seem trivial, easy, or advisable to anyone. Maybe you should be clamouring for regulation of who can use nano-scale design technology, so that madmen don’t build this deliberately to destroy everything. Maybe this should be a top national-security issue. Heck, maybe it already IS a top national-security issue and you just don’t know it. Changing opinions about security still seems safer and easier than initiating a recursively self-improving general AI.
The scenario you propose is, as I understand it, “grey goo,” and I was under the impression that it is not considered a primary extinction risk (though I could be wrong there).