Richard Kennaway: I don’t think we actually disagree about this. It’s entirely possible that doubling the N of a brain—whatever the relevant N would be, I don’t know, but we can double it—would mean taking up much more than twice as many processor cycles (how fast do neurons run?) to run the same amount of processing.
In fact, if the scaling is exponential, the speed would drop by orders of magnitude for every constant increase in N. That would kill superintelligent AI as effectively as the laws of thermodynamics killed perpetual motion machines.
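To make that concrete, here is a minimal sketch in Python of how a hypothetical simulation cost behaves under different complexity classes. The cost functions, the baseline N, and the size of the "constant increase" are all assumptions chosen purely for illustration, not anything from the comment above:

```python
# Illustrative sketch: how a hypothetical per-step cost scales with N
# under a few complexity classes. Numbers are arbitrary assumptions.

def costs(n):
    """Return hypothetical processor-cycle costs for a 'brain' of size n."""
    return {
        "O(N)":   n,
        "O(N^2)": n ** 2,
        "O(2^N)": 2 ** n,
    }

base_n = 40   # arbitrary baseline "size"
delta = 5     # a constant increase in N

for label in ("O(N)", "O(N^2)", "O(2^N)"):
    before = costs(base_n)[label]
    after = costs(base_n + delta)[label]
    print(f"{label}: cost grows by a factor of {after / before:.1f} "
          f"when N goes from {base_n} to {base_n + delta}")

# Under O(2^N), a constant increase of 5 multiplies the cost by 2^5 = 32,
# and every further +5 multiplies it by 32 again, while the polynomial
# cases grow by a modest factor that shrinks as N gets larger.
```

The point of the sketch is just the asymmetry: polynomial scaling makes a bigger brain proportionally more expensive, while exponential scaling makes each fixed-size addition multiplicatively more expensive, which is the sense in which it would shut the door on scaling up.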
On the other hand, if you believe Richard Dawkins, Anatole France’s brain was less than 1000 cc, and brains bigger than 2000 cc aren’t unheard of (he lists Oliver Cromwell as an unverified potential example). Even if people are exchanging metaphorical clock rate for metaphorical instruction set size and vice versa, and even if people have different neuron densities, this would seem to suggest the algorithm isn’t particularly high-order, or, if it is, that the high-order bottlenecks haven’t kicked in at our current scale.