My answer is fundamentally due to what I think is Jacob Cannell’s world model for how AGI is going to go:
It’s built solely or mostly out of classical computing, so no exotic hardware is involved. This also rules out practical superconductors.
Thus, the Landauer limit bounds it, including its bio/nano capabilities, which are already very close to what biology achieves.
Thus, no major improvements can be made.
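For concreteness, the Landauer limit referenced above sets a floor of kT·ln 2 on the energy dissipated per irreversible bit erasure. A minimal sketch of that arithmetic (the function name and 300 K room-temperature figure are my own illustrative choices, not from the original discussion):

```python
import math

# Boltzmann constant in joules per kelvin (CODATA exact SI value)
K_B = 1.380649e-23

def landauer_limit_joules(temp_kelvin: float) -> float:
    """Minimum energy (J) to erase one bit at the given temperature: kT ln 2."""
    return K_B * temp_kelvin * math.log(2)

# At roughly room temperature (~300 K), erasing one bit costs about 2.87e-21 J,
# which is the floor that any classical (irreversible) computer is bounded by.
print(f"{landauer_limit_joules(300.0):.3e} J per bit")
```

The point of the bound in this context is that biology already operates within a couple of orders of magnitude of this floor, which is why the model above treats classical bio/nano computation as nearly maxed out.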
I largely agree with this as an average or modal path. I’d assign 95-99% credence to this playing out for the first AGI.
My major reservation with his model of nanotechnology is that I would be much more careful in assuming this means AGI won’t be able to build such systems, primarily because of the tail risk that exotic computers pose. I would also place a much lower credence on the statement “Even a superintelligence, no matter how clever, will not be able to design nanobots that are vastly more generally capable than biological cells,” more in the realm of 20% than his seeming certainty.
Don’t get me wrong, his model still matters, since it points out limitations relevant to these discussions, and even if the assumptions turn out to be wrong, it suggests a way to control the capabilities of AGI while we’re aligning it.
I guess maybe I’m misunderstanding or overreading things. I read jacob_cannell as implying that the biotech/nanotech route would not be more dangerous than natural biology, but maybe the point was just to provide a tangentially relevant piece of information without commenting on the relative danger?
I guess my point is that it doesn’t seem like you can get from “any biotech/nanotech built with AI support will still be bounded by the Landauer limit” to “no biotech/nanotech built with AI support will lead to something like a worldwide pandemic or atmospheric change that kills everyone”.
I might be missing something, but the Landauer limit doesn’t seem that relevant to killing everyone. It’s a limit on computation; I’m not suggesting the biotech/nanotech is computing some dangerous function, I’m suggesting it might produce something dangerous, such as more copies of itself.
Of course, due to evolution, it seems like in theory there should be “efficient markets” in producing more copies of oneself, so maybe there is a blocker there. But as I said, that blocker doesn’t really seem to hold, because we just had a pandemic.
Yeah, basically this: there already is an efficient market for nanotech replicators. The most recent pandemic was only a minor blip in the grand scheme of things; it would take far more to kill humanity or seriously derail progress, and an unaligned AGI would not want to do that anyway versus a soft, covert takeover.