This reads to me as claiming that an AI which shares some or all of the boundary conditions of the human brain can't become any more efficient with respect to its power requirements, rather than saying that it's theoretically impossible to construct a computer smarter than the human brain that requires less power, which is what EY's statement was about.
> Which brings me to the second line of very obvious-seeming reasoning that converges upon the same conclusion — that it is in principle possible to build an AGI much more computationally efficient than a human brain … The result is that the brain's computation is something like half a million times less efficient than the thermodynamic limit for its temperature
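The arithmetic behind that "half a million times" figure can be sketched from the Landauer limit. Note the input numbers below (a ~20 W power budget and ~10^16 operations per second, one irreversible bit erasure per operation) are my assumed back-of-envelope values, not figures from the quote itself:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 310.0                        # body temperature, K
landauer = k_B * T * math.log(2) # ~3e-21 J per bit erasure

# Assumed brain figures (rough, for illustration only):
brain_power = 20.0               # W
ops_per_sec = 1e16               # irreversible operations per second

energy_per_op = brain_power / ops_per_sec  # J actually spent per operation
inefficiency = energy_per_op / landauer    # ratio to the thermodynamic limit

print(f"Landauer limit at 310 K: {landauer:.2e} J/bit")
print(f"Brain is roughly {inefficiency:,.0f}x above the limit")
```

With those inputs the ratio comes out around 7 × 10^5, i.e. "something like half a million times less efficient than the thermodynamic limit."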
Also, the brain being Pareto efficient would only mean that no single property of the brain can be improved without some other property worsening. It wouldn't rule out the existence of an n-tuple of values for those n properties such that, if the brain's properties took on those values simultaneously, it would be more intelligent at the same power requirements.
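The distinction can be made concrete with a toy example (all designs and numbers below are hypothetical, purely for illustration): two designs can both sit on the Pareto frontier, with neither dominating the other, while one is still far more intelligent at the same power draw — it just pays for it along some axis we may not care about, like volume.

```python
# Each design is (intelligence, power_watts, volume_litres).
# Higher intelligence is better; lower power and volume are better.
brain = (100, 20, 1.3)
big_ai = (500, 20, 1000.0)  # same 20 W, smarter, but far bulkier

def dominates(a, b):
    """True if design a is at least as good as b on every axis
    and strictly better on at least one."""
    ge = (a[0] >= b[0], a[1] <= b[1], a[2] <= b[2])
    gt = (a[0] > b[0], a[1] < b[1], a[2] < b[2])
    return all(ge) and any(gt)

# Neither design dominates the other, so both can be Pareto efficient --
# yet big_ai is five times as intelligent at the same 20 W.
print(dominates(big_ai, brain), dominates(brain, big_ai))
```

So "the brain is Pareto efficient" is compatible with "a much smarter same-power machine is possible"; Pareto efficiency only forbids improving one coordinate while holding all the others fixed or better.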
> This reads to me as claiming that an AI which shares some or all of the boundary conditions of the human brain can't become any more efficient with respect to its power requirements, rather than saying that it's theoretically impossible to construct a computer smarter than the human brain that requires less power, which is what EY's statement was about.
Which specific statement? There are a few.