Basically, as far as I can tell, the answer is no, except with a bunch of qualifiers. Jacob Cannell has at least given some evidence that biology reliably finds Pareto-optimal-ish designs, but not global maxima.
In particular, his claims about biology never being improved by nanotech are subject to Extremal Goodhart.
For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.
Ultimate limits from reversible computing/quantum computers are derived here: https://arxiv.org/abs/quant-ph/9908043
From Gwern:

No, it’s not. As I said, a skyscraper of assumptions, each more dubious than the last. The entire line of reasoning from fundamental physics is useless, because all you get is vacuous bounds like ‘if a kg of mass can do 5.4e50 quantum operations per second and the earth is 6e24 kg, then that bounds available operations at 3e65 operations per second’ - which is completely useless, because why would you constrain it to just the earth? (Not even going to bother trying to find a classical number to use as an example - they are all, to put it technically, ‘very big’.) Why are the numbers spat out by appeal to fundamental limits of reversible computation, such as but far from limited to 3e75 ops/s, not enough to do pretty much anything compared to the status quo of systems topping out at ~1.1 exaflops (1.1e18 ops/s), 57 orders of magnitude below that one random guess? Why shouldn’t we say “there’s plenty of room at the top”? Even if there wasn’t and you could ‘only’ go another 20 orders of magnitude, so what? What, exactly, would it be unable to do that it could if you subtracted or added 10 orders of magnitude*, and how do you know that? Why would this not decisively change economics, technology, politics, recursive AI scaling research, and everything else? If you argue that this means it can’t do something in seconds and would instead take hours, how is that not an ‘intelligence explosion’ in the Vingean sense of being an asymptote, happening far faster than prior human transitions that took millennia or centuries, and being a singularity past which humans cannot see nor plan? Is it not an intelligence explosion but an ‘intelligence gust of warm wind’ if it takes a week instead of a day? Should we talk about the intelligence sirocco instead? This is why I say the most reliable parts of your ‘proof’ are also the least important, which is the opposite of what you need, and serves only to dazzle and ‘Eulerize’ the innumerate.
btw I lied; that multiplies to 3e75, not 3e65. Did you notice?
Landauer’s limit only ‘proves’ that when you stack it on a pile of assumptions a mile high about how everything works, all of which are more questionable than it. It is about as reliable a proof as saying ‘random task X is NP-hard, therefore, no x-risk from AI’; to paraphrase Russell, arguments from complexity or Landauer have all the advantages of theft over honest toil...
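To make the quoted arithmetic concrete: the per-kilogram figure is the Margolus–Levitin/Lloyd bound of roughly 2mc²/πℏ operations per second, and multiplying it by Earth's mass gives ~3e75 ops/s (the quote's ‘3e65’ is the deliberate slip flagged just above). A quick sanity check, as a sketch; the constants are standard values and the ~1.1 exaflops frontier figure is taken from the quote:

```python
import math

# Margolus-Levitin / Lloyd bound: max ops/s for mass m is 2*m*c^2 / (pi * hbar)
c = 2.998e8        # speed of light, m/s
hbar = 1.0546e-34  # reduced Planck constant, J*s
ops_per_kg = 2 * c**2 / (math.pi * hbar)   # ~5.4e50 ops/s per kg

earth_mass = 5.97e24                       # kg (the quote rounds to 6e24)
earth_ops = ops_per_kg * earth_mass        # ~3e75 ops/s, not 3e65

frontier = 1.1e18                          # ~1.1 exaflops, per the quote
gap = math.log10(earth_ops / frontier)     # orders of magnitude of headroom
print(f"{ops_per_kg:.2e} ops/s/kg, {earth_ops:.2e} ops/s total, ~{gap:.0f} OOM gap")
```

Run as written, this reproduces the quote's numbers: ~5.4e50 ops/s per kg, ~3e75 ops/s for an Earth mass, and a gap of about 57 orders of magnitude over current systems.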
One important implication is that in practice, it doesn’t matter whether biology has found a Pareto-optimal solution, since we can usually remove at least one constraint that binds biology and evolution, even if it’s as simple as editing many, many genes at once to completely redesign the body.
This also regulates my Foom probabilities: I put a 1-3% chance on the first AI fooming by 2100. Contra Jacob Cannell, Foom is possible, if improbable. Inside his model everything checks out; it’s outside the model that he goes wrong.
For example, quantum computing/reversible computing or superconductors would entirely break his statement about optimal nanobots.
Reversible/quantum computing is not as general as irreversible computing. Those paradigms accelerate only specific types of computation, and they don’t help at all with erasing or copying bits. The core function of a biological cell is to replicate, which requires copying and erasing bits; reversible/quantum computing doesn’t help with that at all, and in fact only adds enormous extra complexity.
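The bit-erasure cost at issue here is the Landauer limit: erasing or irreversibly overwriting one bit dissipates at least kT·ln 2 joules, and no reversible trick removes that floor from copying. A rough sketch of what that floor means for replication; body temperature and the ~3e9-base-pair, 2-bits-per-base genome size are my illustrative assumptions, not figures from the thread:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # ~body temperature, K (illustrative assumption)

# Landauer limit: minimum energy to erase/overwrite one bit
e_bit = k_B * T * math.log(2)    # ~3e-21 J per bit

# Illustrative: overwriting a genome's worth of bits during replication
# (~3e9 base pairs at 2 bits each - an assumed, round-number genome size)
bits = 2 * 3e9
e_copy = bits * e_bit
print(f"Landauer floor: {e_bit:.2e} J/bit; genome-copy floor ~{e_copy:.2e} J")
```

The absolute numbers are tiny, but the point stands regardless of scale: the cost is strictly positive and paradigm-independent, which is why reversible or quantum hardware buys nothing for the erase/copy workload of a replicator.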
Ultimate limits from reversible computing/quantum computers:
https://arxiv.org/abs/quant-ph/9908043

Gwern’s comments:
https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/?commentId=PacDMbztz5spAk57d
https://www.lesswrong.com/posts/yenr6Zp83PHd6Beab/?commentId=HH4xETDtJ7ZwvShtg