Hanson makes so many assumptions that defy intuition. He’s talking about a civilization with the capacity to support trillions of individuals, in which those individuals are largely disposable and can be duplicated at a moment’s notice, and he doesn’t think evolutionary pressures are going to come into play? We’ve seen random natural selection significantly improve human intelligence in as few as tens of generations. With Ems, you could probably cook up tailor-made superintelligences in a weekend using nothing but the right selection pressures. Or, at least, I see no reason to be confident in the opposite claim.
He claims we don’t know enough about the brain to select usefully nonrandom changes, yet assumes that we’ll know enough to emulate it to high fidelity. This is roughly like saying that I can perfectly replicate a working car but I somehow don’t understand anything about how it works. What about the fact that we already know some useful nonrandom changes that we could make, such as the increased dendritic branching associated with certain intelligence-linked alleles?
It doesn’t matter. DeepMind is planning to have a rat-level AI before the end of 2017, and Demis Hassabis doesn’t tend to make overly optimistic predictions. How many doublings is a rat away from a human?
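One crude way to put a number on that question: compare neuron counts. Commonly cited rough figures are about 200 million neurons for a rat brain and about 86 billion for a human brain, which works out to roughly nine doublings.

```python
import math

# Back-of-the-envelope only; both counts are commonly cited rough estimates.
rat_neurons = 2e8       # ~200 million neurons in a rat brain
human_neurons = 8.6e10  # ~86 billion neurons in a human brain

doublings = math.log2(human_neurons / rat_neurons)
print(f"~{doublings:.1f} doublings by neuron count")  # ~8.7
```

Neuron count is a crude proxy for the capability or compute gap, but it gives a sense of scale.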
he doesn’t think evolutionary pressures are going to come into play?
He actually does think evolutionary pressures are going to be important; in fact, in his book he talks a lot about which directions he expects ems to evolve in. He just thinks that the evolutionary pressures, at least in the medium term (he doesn’t try to make predictions about what comes after the Em era), will not be so severe that we cannot use modern social science to predict em behavior.
We’ve seen random natural selection significantly improve human intelligence in as few as tens of generations.
Source? I’m aware of the Flynn effect, but I was under the impression that the consensus is that it is probably not due to natural selection.
He claims we don’t know enough about the brain to select usefully nonrandom changes, yet assumes that we’ll know enough to emulate it to high fidelity. This is roughly like saying that I can perfectly replicate a working car but I somehow don’t understand anything about how it works.
To emulate a brain, you need a good enough model of neurons and synapses, the ability to scan a brain in enough detail, and the computing power to run the resulting emulation. Understanding how intelligent behavior arises from the interaction of neurons is not necessary.
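As a purely illustrative sketch of that point (a toy model, not anything from the book or an actual emulation pipeline): with a neuron model and a connectivity matrix standing in for scanned synaptic data, stepping the dynamics forward is mechanical, and nowhere does the code need a theory of how cognition arises.

```python
import numpy as np

# Toy leaky integrate-and-fire network. The "scan" is just a weight matrix
# plus per-neuron parameters; advancing the dynamics needs no theory of cognition.
rng = np.random.default_rng(0)
n = 1000                              # number of neurons (toy scale)
weights = rng.normal(0, 0.1, (n, n))  # stand-in for scanned synaptic strengths
v = np.zeros(n)                       # membrane potentials
tau, v_thresh, v_reset, dt = 20.0, 1.0, 0.0, 1.0

for step in range(100):
    spikes = v >= v_thresh                      # neurons that fired this step
    v[spikes] = v_reset                         # reset fired neurons
    synaptic = weights @ spikes.astype(float)   # input from spiking neighbors
    external = rng.normal(0, 0.05, n)           # background drive
    v += dt * (-v / tau + synaptic + external)  # leaky integration
```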
It doesn’t matter. DeepMind is planning to have a rat-level AI before the end of 2017, and Demis Hassabis doesn’t tend to make overly optimistic predictions.
If that actually happens, I would take that as significant evidence that AGI will come before WBE (whole brain emulation). I am kind of skeptical that it will, though. It wouldn’t surprise me that much if DeepMind produces some AI in 2017 that gets touted as a “rat-level AI” in the media, but I’d be shocked if the claim were justified.
We’ve seen random natural selection significantly improve human intelligence in as few as tens of generations.
“Random natural selection” is almost a contradiction in terms. Yes, we’ve seen dramatic boosts in Ashkenazi intelligence on that timescale, but that’s due to very non-random selection pressure.
Mutations occur randomly, and environmental pressures perform selection on them.
Obviously, but “natural selection” is the non-random part of evolution. Using it as a byword for evolution as a whole is bad terminology.
Fair enough. My lazy use of terminology aside, I’m pretty sure you could “breed” an Em via replication-with-random-variation followed by selection according to performance-based criteria.
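To make “replication-with-random-variation followed by selection” concrete, here is a minimal sketch of such a loop. Everything in it (the parameter dictionaries, perturb, evaluate) is a hypothetical placeholder; a real performance criterion and a real em copy are exactly the hard parts.

```python
import random

def perturb(em):
    """Hypothetical: return a copy of `em` with small random variation applied."""
    return {k: v + random.gauss(0, 0.01) for k, v in em.items()}

def evaluate(em):
    """Hypothetical stand-in score; a real criterion would be task performance."""
    return -sum(v * v for v in em.values())

# A toy "population": each em is reduced to a dict of tunable parameters.
population = [{"a": random.random(), "b": random.random()} for _ in range(100)]

for generation in range(50):
    # Replication with random variation ...
    offspring = [perturb(random.choice(population)) for _ in range(100)]
    # ... followed by selection on a performance-based criterion.
    population = sorted(population + offspring, key=evaluate, reverse=True)[:100]
```

This is just the generic shape of an evolutionary search; it says nothing about how quickly such selection would produce anything interesting.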