Another key factor here is that the machine that does this would operate in an environment alien to existing life: it would work in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon.
If you have to do this, then the threat of nanotech looks a lot smaller. Replicators that need a nearly perfect vacuum aren’t much of a threat.
Also, this is one place where AI comes in. The universe doesn’t have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of very many of these same atoms, then maybe the problem is that we are doing it in a hugely inefficient way. An entity smarter than humans might find a way to reformulate the math for calculations that are many orders of magnitude more efficient, or it might find a way to build a computer that makes more efficient use of the atoms it is composed of.
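To make the cost concrete, here is a toy sketch of what modeling the energetics of a network of atoms involves at the crudest classical level; the Lennard-Jones potential and its parameters below are illustrative stand-ins, not a serious force field:

```python
import itertools
import math

def lennard_jones_energy(positions, epsilon=1.0, sigma=1.0):
    """Total pairwise Lennard-Jones energy of a set of 3D points.

    A toy stand-in for modeling the energetics of a network of atoms:
    every pair interacts, so a single energy evaluation already costs
    O(n^2), and a faithful quantum treatment scales far worse.
    """
    total = 0.0
    for p, q in itertools.combinations(positions, 2):
        r = math.dist(p, q)  # Euclidean distance between the two atoms
        total += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return total
```

That gap, between the physics being cheap for the universe and expensive for our simulations, is exactly the inefficiency a smarter entity would be looking to close.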
This sounds very close to a default assumption that these processes are genuinely easy, not just to compute, but to invert in order to work out what solutions one wants. Answering “how will this protein most likely fold?” is computationally much easier (as far as we can tell) than answering “what protein will fold like this?” It may well be that both are substantially easier than we currently think. Heck, it could be that P = NP, or it could be that even with P != NP there’s still some algorithm with extremely slowly growing running time that solves NP-complete problems. But these don’t seem like likely scenarios unless one has some evidence for them.
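To see why the inverse question looks harder by default, note that the obvious reduction from “what protein will fold like this?” to the forward question is a search over exponentially many sequences, each requiring a forward-fold call. A minimal sketch, where `fold` is a hypothetical forward predictor rather than any real library function:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def naive_inverse_fold(target_shape, length, fold):
    """Brute-force inverse design via the forward problem: enumerate
    all 20**length sequences and return the first whose predicted fold
    matches the target. `fold` is a hypothetical forward predictor
    (sequence -> shape)."""
    for residues in product(AMINO_ACIDS, repeat=length):
        sequence = "".join(residues)
        if fold(sequence) == target_shape:
            return sequence
    return None  # no sequence of this length folds to the target
```

Real inverse-folding work doesn’t brute-force the space, of course; the point is only that the naive reduction pays an exponential factor on top of whatever the forward problem costs.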
Answering “how will this protein most likely fold?” is computationally much easier (as far as we can tell) than answering “what protein will fold like this?”
Got a reference for that? It’s not obvious to me (CS background, not bio).
What if you have an algorithm that attempts to solve the “how will this protein most likely fold?” problem, but is only tractable on 1% of possible inputs, and just gives up on the other 99%? As long as the 1% contains enough interesting structures, it’ll still work as a subroutine for the “what protein will fold like this?” problem. The search algorithm just has to avoid the proteins that it doesn’t know how to evaluate. That’s how human engineers work, anyway: “what does this pile of spaghetti code do?” is uncomputable in the worst case, but that doesn’t stop programmers from solving “write a program that does X”.
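A minimal sketch of that strategy, where `predict` stands in for a hypothetical forward predictor that returns a shape for the inputs it can handle and None for the ones it gives up on:

```python
def design(target_shape, candidates, predict):
    """Search only over inputs the predictor can actually evaluate:
    skip the proteins it doesn't know how to fold, and mine the
    tractable subset for a match. `predict` is a hypothetical partial
    forward predictor (sequence -> shape, or None when it gives up)."""
    for sequence in candidates:
        shape = predict(sequence)
        if shape is None:
            continue  # predictor gave up on this sequence; avoid it
        if shape == target_shape:
            return sequence
    return None  # no evaluable candidate matched the target
```

As long as the candidate generator keeps proposing sequences from the evaluable 1%, the 99% the predictor gives up on never has to be touched.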
Sure, see for example here, which discusses some of the issues involved. That said, your essential point may still have merit, because it is likely that many of the proteins we would want will have much more restricted shapes than those in the general problem. Also, I don’t know much about what work has been done in the last few years, so it is possible that the state of the art has changed substantially.
Sure, but a lot of the hypothetical nanotech disasters require nanotech devices that are themselves very small (e.g. the grey goo scenarios). If one requires a macroscopic object to keep a stable vacuum, then the set of threats goes down by a lot. Obviously some of them are still possibly present (such as the possibility that almost anyone will be able to refine uranium), but many of them aren’t, and many of the obvious scenarios connected to AI would then look less likely.
The idea is to have a vacuum inside the machinery; a macroscopic nanofactory can still exist in an atmosphere.
I don’t know... I think ‘grey goo’ scenarios would still work even if the individual goolets were insect-sized.