Hard nanotech (the kind usually envisioned in sci-fi) may be physically impossible, and at the very least is extremely difficult. The types of nanotech that are more feasible are 1) top-down lithography (i.e. chips), and 2) bottom-up cellular biology, or some combination thereof.
Biological cells are already near-optimal nanotech robots in both practical storage density and computational energy efficiency (approaching the Landauer limit). Even a superintelligence, no matter how clever, will not be able to design nanobots that are vastly more generally capable than biological cells. Robots are fundamentally limited by energy efficiency and storage density, and biology is already operating at the physical limits on those key constraints. So plausible bottom-up nanotech just looks like more 'boring' advanced biotech.
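To put a rough number on the energy-efficiency claim, here is a quick back-of-the-envelope sketch comparing the Landauer limit at body temperature with the energy scale of ATP hydrolysis, the cell's basic energy currency. The figures are my own illustrative assumptions, not something taken from the comment above.

```python
# Rough sketch of the energy scales behind the "Landauer limit" claim.
# All numbers are illustrative assumptions, not taken from the comment above.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly human body temperature, K

# Minimum energy required to erase one bit of information at temperature T.
landauer_j = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_j:.2e} J per bit erased")

# ATP hydrolysis releases very roughly 1e-19 J per molecule under cellular
# conditions (assumed ballpark figure).
atp_j = 1e-19
print(f"ATP hydrolysis: ~{atp_j:.0e} J per molecule, "
      f"i.e. ~{atp_j / landauer_j:.0f}x the Landauer bound")
```

On these rough numbers, a cell's elementary chemical operations sit within a factor of a few tens of the Landauer bound, which is the sense in which biology can be said to operate near the physical limit on energy efficiency.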
It would make evolutionary sense for current cells to be near-optimal, and therefore for there not to be much opportunity for biotech/nanotech to do big, powerful stuff. However, I notice that this leaves me confused about two things.
First, common rhetoric in the rationalist community treats this as a big risk. E.g. Robin Hanson advocated banning mirror cells, and I regularly hear people suggest working on pandemic prevention as an x-risk, or talk about how gain-of-function research is dangerous.
Second, there's the personal experience that we just had this huge pandemic thing. If existing biology exploits opportunities to the limit, leaving no room for novel mechanisms to compete, then it seems like we shouldn't have had a pandemic.
If I had to guess at why these counterarguments fall apart, it's that an unaligned AGI wouldn't create a pandemic by mistake, because a germ capable of causing a pandemic would have to be specifically designed to target human biology?
As an aside, mirror cells aren't actually a problem: non-mirror digestive systems and immune systems can break them down, albeit with less efficiency. Church's early speculation that these cells would not be digestible by non-mirror life forms doesn't actually hold up, per several molecular biologists I have spoken to since then.
Mirror cells and novel viruses are well within ‘boring’ advanced biotech, which can be quite dangerous. My argument of implausibility was directed at sci-fi hard nanotech, like grey goo.
If I had to guess at why these counterarguments fall apart, it's that an unaligned AGI wouldn't create a pandemic by mistake, because a germ capable of causing a pandemic would have to be specifically designed to target human biology?
That seems plausible. The risk is that an unaligned AGI could kill or weaken humanity through advanced biotech. I don't think this is the most plausible outcome of unaligned AGI; more likely it would just execute a soft takeover of the world without killing us. If it did kill humanity, that would come later, but it probably wouldn't need to.
My answer fundamentally comes down to what I take to be Jacob Cannell's world model for how AGI is going to go:
It's built solely or mostly out of classical computing, so no exotica is involved. This also rules out practical superconductors.
Thus, the Landauer limit bounds it, including its bio/nano capabilities, and biology is already very close to those bounds (a rough storage-density check is sketched just below).
Thus, no major improvements can be made.
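As a rough sanity check on the storage-density side of the "already very close to what biology has" step (the energy side is sketched further up), here is a back-of-the-envelope comparison of DNA's volumetric information density with a commodity flash card. The dimensions and capacities are my own assumed ballpark figures, not from the comment.

```python
# Back-of-the-envelope comparison of volumetric storage density:
# DNA vs. a commodity microSD card. All figures are assumed ballpark
# values for illustration only.
import math

# DNA: one base pair is roughly a cylinder ~2 nm in diameter and ~0.34 nm
# tall, and encodes 2 bits (one of four bases).
bp_volume_nm3 = math.pi * 1.0**2 * 0.34       # ~1.07 nm^3 per base pair
dna_bits_per_nm3 = 2 / bp_volume_nm3          # ~1.9 bits per nm^3

# Flash: assume a 1 TB microSD card measuring about 15 x 11 x 1 mm.
card_volume_nm3 = 15e6 * 11e6 * 1e6           # card volume in nm^3
flash_bits_per_nm3 = 8e12 / card_volume_nm3   # 1 TB = 8e12 bits

print(f"DNA:   ~{dna_bits_per_nm3:.1f} bits/nm^3")
print(f"Flash: ~{flash_bits_per_nm3:.1e} bits/nm^3")
print(f"DNA is ~{dna_bits_per_nm3 / flash_bits_per_nm3:.0e}x denser by volume")
```

On these crude numbers, raw DNA comes out several orders of magnitude denser by volume than packaged flash storage; comparing against bare memory dies instead of a whole card would narrow the gap somewhat, but biology still looks far ahead on this axis, which is the flavor of claim being made above.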
I largely agree with this as an average or modal path. I'd assign 95-99% credence to this playing out for the first AGI.
My major reservation with his model of nanotechnology is that I would be much more careful about assuming this means AGI won't be able to do better than biology, primarily because of the tail risk posed by exotic computers, and I would place a much lower credence on the statement "Even a superintelligence, no matter how clever, will not be able to design nanobots that are vastly more generally capable than biological cells," more in the realm of 20% than his seeming certainty.
Don't get me wrong, it still matters, since it points out limitations that are relevant to these discussions, and even if the assumptions do end up being wrong, it does give us a way to control the capabilities of AGI while we're aligning it.
I guess maybe I'm misunderstanding/overreading things. I read jacob_cannell as implying that the biotech/nanotech route would not be more dangerous than natural biology, but maybe the point was just to provide a tangentially relevant piece of information without commenting on the relative danger?
I guess my point is that it doesn't seem like you can get from "any biotech/nanotech built via support from an AI will still be bounded by the Landauer limit" to "no biotech/nanotech built via support from an AI will lead to something like a worldwide pandemic or atmospheric change that kills everyone".
I might be missing something, but the Landauer limit doesn't seem that relevant to killing everyone. It's a limit on computation, but I'm not suggesting the biotech/nanotech is computing some dangerous function; I'm suggesting it might produce something dangerous, such as more copies of itself.
Of course, due to evolution, it seems like in theory there should be "efficient markets" in producing more copies of oneself, so maybe there is a blocker there. But as I said, that blocker doesn't really seem to hold, because we just had a pandemic.
Of course, due to evolution, it seems like in theory there should be "efficient markets" in producing more copies of oneself, so maybe there is a blocker there. But as I said, that blocker doesn't really seem to hold, because we just had a pandemic.
Yeah, basically this: there already is an efficient market for nanotech replicators. The most recent pandemic was only a minor blip in the grand scheme of things; it would take far more to kill humanity or seriously derail progress, and an unaligned AGI would not want to do that anyway versus just a soft, covert takeover.