Thx. But I still don’t see why you said “asymptotic limit” and “grew … then petered out”. There is no reason why H. sap. could not grow to the size of a gorilla over the next few million years, nor any reason why the bottlenose dolphin could not grow to the size of an orca. With corresponding brain size increases in both cases. I don’t see that our brain size growth has petered out.
The fact that mammal brains reached similar upper neuron counts (100-200 billion neurons) in three separate, unrelated lineages with widely varying body sizes is, to me, a strong hint of an asymptotic limit.
Also, Neanderthals had significantly larger brains and perhaps twice as many neurons (just a guess based on size), and yet they were out-competed by smaller-brained Homo sapiens.
The bottlenose could grow to the size of the orca, but it's not at all clear that its brain would grow beyond a few hundred billion neurons.
The biggest whale brains are several times heavier than human or elephant brains, but the extra mass is glial cells, not neurons.
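The bottlenose-to-orca comparison can be sketched numerically. Across mammals, brain mass is often modeled as scaling with body mass to a sublinear power (an exponent of roughly 0.75 is commonly quoted for interspecific comparisons); the exponent and the masses below are illustrative assumptions, not measurements:

```python
# Allometric sketch: predicted brain mass if a bottlenose dolphin
# grew to orca body size, assuming brain_mass ~ body_mass**0.75.
# All constants here are illustrative assumptions, not measured data.

ALLOMETRIC_EXPONENT = 0.75  # commonly quoted interspecific value (assumed)

def scaled_brain_mass(brain_kg, body_kg, new_body_kg,
                      exponent=ALLOMETRIC_EXPONENT):
    """Scale a brain mass to a new body mass along the allometric curve."""
    return brain_kg * (new_body_kg / body_kg) ** exponent

# Assumed round numbers: dolphin ~200 kg body / 1.6 kg brain,
# orca body ~4000 kg.
predicted = scaled_brain_mass(1.6, 200, 4000)
print(f"Predicted brain mass at orca body size: ~{predicted:.1f} kg")
```

Reported real orca brains come in well under this naive extrapolation, which, if the assumed numbers are in the right ballpark, is itself consistent with the scaling flattening out near the top end.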
And if you look at how the brain actually works, a size limit makes perfect sense, given the wiring constraints and signal-propagation delays mentioned earlier.
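The propagation-delay point lends itself to a back-of-envelope calculation: one-way latency between distant brain regions scales linearly with the brain's physical span. As a sketch (the path lengths and conduction velocity below are illustrative assumptions, not measured values):

```python
# Back-of-envelope: axonal signal delay across brains of different
# sizes. All figures are illustrative assumptions.

def one_way_delay_ms(path_length_m, conduction_velocity_m_s):
    """Time for a spike to traverse a long-range axon, in milliseconds."""
    return path_length_m / conduction_velocity_m_s * 1000

# Assumed long-range path lengths (m) and an assumed myelinated-axon
# conduction velocity of ~10 m/s (real values vary widely).
velocity = 10.0
for label, path in [("human-scale brain", 0.15),
                    ("hypothetical 2x-larger brain", 0.30)]:
    print(f"{label}: ~{one_way_delay_ms(path, velocity):.0f} ms one-way")
```

Doubling the span doubles the latency, so a much larger brain either tolerates slower long-range coordination or spends disproportionate volume on thicker, faster (and more glia-supported) wiring.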
Surely there is every reason: machine intelligence, nanotechnology, and the engineered future will mean that humans will be history.
Surely there is every reason to expect technological civilization to collapse before any of those things come to fruition.
Projections of the future always disappoint and always surprise. During my childhood in the 1950s, I fully expected to see rocket belts and interplanetary travel within my lifetime. I didn’t even imagine personal computers and laser surgery as an alternative to eyeglasses.
Fifty years before that, they imagined that folks today would have light-weight muscle-powered aircraft in their garages. Jules Verne predicted atomic submarines and time machines.
So, based on how miserable our faculties of prediction really are, the reasonable thing to do would be to assign finite probabilities to both cyborg humans and gorilla-sized humans. The future could go either way.
Hmm. We came pretty close with nuclear weapons and two superpowers, and yet we are still here. The dangerous toys are going to get even more dangerous this century, but I don’t see the rationale for assigning > 50% to Doom.
Regarding your expectations: you are still alive, we do have jetpacks today, we have sent spacecraft to numerous planets, we do have muscle-powered gliders at least, and we have atomic submarines.
The only mistaken predictions were that humans would be useful to send to other planets (they are not), and that time travel is tractable.
And ultimately, the fact that some people make inaccurate predictions does not somehow invalidate prediction itself.
Well, of course I didn’t mean to suggest p = 0. I don’t think the collapse of technological civilization is very likely, though, and would assign a permanent setback a < 1% chance of happening.