Have you read The Age of Em? Robin Hanson thinks that mind uploading is likely to happen before de novo AI, but also that the reasons this is likely mean we won’t get much in the way of modifications to ems until the end of the Em era.
(That is, if you can just use ‘evolutionary algorithms’ to muck around with uploads and make some of them better at thinking, it’s likely you understand intelligence well enough to build a de novo AI to begin with.)
I’ve read Age of Em. IIRC, Robin argues that it will likely be difficult to get much progress from evolutionary algorithms applied to emulations, because brains are fragile to random changes and we don’t understand brains well enough to select usefully nonrandom changes, so almost any change we make is likely to be negative.
But brains actually seem to be surprisingly resilient, given that many people with brain damage or deformed brains remain functional, even after changes as dramatic as removing a hemisphere of the brain. There are also already known ways to improve brain performance in narrow domains with electrical stimulation (source), which seems like a similar kind of intervention. So it seems fairly likely to me that significant improvements from evolutionary algorithms are possible.
Also, I talked with Robin about this, and he didn’t actually seem very confident about his prediction that evolutionary algorithms would not be used to increase the intelligence of emulations significantly, but he did think that such enhancements would not have a dramatic effect on em society.
Hanson makes so many assumptions that defy intuition. He’s talking about a civilization with the capacity to support trillions of individuals, in which these individuals are essentially disposable and can be duplicated at a moment’s notice, and he doesn’t think evolutionary pressures are going to come into play? We’ve seen random natural selection significantly improve human intelligence in as few as tens of generations. With Ems, you could probably cook up tailor-made superintelligences in a weekend using nothing but the right selection pressures. Or, at least, I see no reason to be confident in the opposite proposition.
He claims we don’t know enough about the brain to select usefully nonrandom changes, yet assumes that we’ll know enough to emulate them to high fidelity. This is roughly like saying that I can perfectly replicate a working car but I somehow don’t understand anything about how it works. What about the fact that we already know some useful nonrandom changes we could make, such as the increased dendritic branching observed in carriers of specific intelligence-associated alleles?
It doesn’t matter. DeepMind is planning to have a rat-level AI before the end of 2017, and Demis doesn’t tend to make overly optimistic predictions. How many doublings is a rat away from a human?
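For scale, the doublings question has a quick back-of-the-envelope answer. Using rough neuron counts (≈200 million for a rat, ≈86 billion for a human; these are approximate literature figures I’m supplying here, and neuron count is at best a crude proxy for capability):

```python
import math

# Approximate neuron counts (rough literature figures; crude proxy at best)
rat_neurons = 2.0e8     # ~200 million
human_neurons = 8.6e10  # ~86 billion

# Number of times the rat count must double to reach the human count
doublings = math.log2(human_neurons / rat_neurons)
print(f"{doublings:.1f} doublings")  # ≈ 8.7 doublings
```

So on this crude measure, a rat is roughly nine doublings away from a human.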
he doesn’t think evolutionary pressures are going to come into play?
He actually does think evolutionary pressures are going to be important, and in fact, in his book, he talks a lot about which directions he expects ems to evolve in. He just thinks that the evolutionary pressures, at least in the medium-term (he doesn’t try to make predictions about what comes after the Em era), will not be so severe that we cannot use modern social science to predict em behavior.
We’ve seen random natural selection significantly improve human intelligence in as few as tens of generations.
Source? I’m aware of the Flynn effect, but I was under the impression that the consensus was that it is probably not due to natural selection.
He claims we don’t know enough about the brain to select usefully nonrandom changes, yet assumes that we’ll know enough to emulate them to high fidelity. This is roughly like saying that I can perfectly replicate a working car but I somehow don’t understand anything about how it works.
To emulate a brain, you need to have a good enough model of neurons and synapses, be able to scan brains in enough detail, and have the computing power to run the scan. Understanding how intelligent behavior arises from the interaction of neurons is not necessary.
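To make that point concrete, here is a toy sketch (my illustration, not anything from the book) using a leaky integrate-and-fire neuron, a standard textbook model. The update rule is pure local membrane dynamics; nothing in it refers to what the spikes mean:

```python
# Toy leaky integrate-and-fire neuron. The update rule is purely local
# electrophysiology; no notion of "intelligence" appears anywhere in it.
def simulate_lif(input_current, dt=1.0, tau=20.0,
                 v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        # Voltage decays toward rest and integrates the input current
        v += (dt / tau) * (v_rest - v) + i_t
        if v >= v_thresh:  # threshold crossing: emit a spike and reset
            spike_times.append(t)
            v = v_reset
    return spike_times

# Constant drive produces regular, periodic spiking
print(simulate_lif([1.5] * 100))
```

Real emulation would need vastly more biophysical detail, but the principle is the same: you simulate the components’ dynamics, and the behavior emerges without anyone understanding it.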
It doesn’t matter. DeepMind is planning to have a rat-level AI before the end of 2017, and Demis doesn’t tend to make overly optimistic predictions.
If that actually happens, I would take it as significant evidence that AGI will come before WBE. I am kind of skeptical that it will, though. It wouldn’t surprise me that much if DeepMind produces some AI in 2017 that gets touted as a “rat-level AI” in the media, but I’d be shocked if the claim were justified.
We’ve seen random natural selection significantly improve human intelligence in as few as tens of generations.
“Random natural selection” is almost a contradiction in terms. Yes, we’ve seen dramatic boosts in Ashkenazi intelligence on that timescale, but that’s due to very non-random selection pressure.
Mutations occur randomly, and environmental pressures perform selection on them.
Obviously, but “natural selection” is the non-random part of evolution. Using it as a byword for evolution as a whole is bad terminology.
Fair enough. My lazy use of terminology aside, I’m pretty sure you could “breed” an Em via replication-with-random-variation followed by selection according to performance-based criteria.
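That loop is simple enough to sketch. Here’s a hypothetical toy version in Python, with a made-up fitness function standing in for whatever performance-based criterion you’d actually score ems on (everything here, including the parameter-vector “genome,” is purely illustrative):

```python
import random

def evolve(fitness, genome_len=10, pop_size=50, generations=100,
           mutation_scale=0.1, survivors=10):
    """Replication-with-random-variation plus performance-based selection."""
    population = [[random.gauss(0.0, 1.0) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: rank by measured performance, keep the top performers
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        # Replication with random variation: copy a parent, perturb each gene
        population = [[g + random.gauss(0.0, mutation_scale)
                       for g in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

# Made-up criterion: genomes closer to the all-ones vector score higher
best = evolve(lambda g: -sum((x - 1.0) ** 2 for x in g))
```

With an em in place of the parameter vector, every step is just copy, perturb, test, keep — none of which requires understanding *why* the best performers perform best.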