I’m confused too. But I really didn’t take any liberties with my paraphrasing. Here’s an exact EY quote, albeit from a deprecated essay:
any simple intelligence enhancement will be a net evolutionary disadvantage—if enhancing intelligence were a matter of a simple surgical procedure, it would have long ago occurred as a natural mutation.
This seems to be his most recent writing on the subject. In a comment on that post, he says that the formulation you refer to is “part of an even stranger phase of [his] earlier wild and reckless youth, age fifteen or thereabouts”, so it probably doesn’t make sense to argue against this as something that “Eliezer argues”; possibly better to just say something like “some have argued...” or show why it’s an intuitively appealing idea and then argue that there are counterexamples.
Thanks for the link! Glad to see people have discussed this.
The principle stated there, that any “genetically easy” modification to humans should be expected to cause a net reduction of fitness, seems useful and unimpeachable. With, of course, the caveat that we’re not in the ancestral environment: smart people can get all the calories, antibiotics, and C-sections they need.
But calling it the Algernon Principle implies that we should equate “physically easy” with “genetically easy.” That seems unlikely to be true in general.
Pardon me while I have a strange interlude. We can show that the range of humanoids that you can build from proteins is astronomically greater than the range of humanoids you can build from genomes. Let N be the number of possible humanoid genotypes. For each n of those N genotypes, Prometheus can build a man who is somatically Walter Cronkite but whose germline DNA is derived from genotype n. Thus we have N distinct viable humanoids, all of whom look like Walter Cronkite. Since we know that not all viable humanoids look like Walter Cronkite, we know that there are more than N viable humanoids. Therefore, there are viable humanoid blueprints that are not accessible via mutation. Now imagine that I bothered to extend this proof in a bunch of combinatorial and exponential directions, and we get the astronomical part.
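(If it helps, the counting step can be spelled out; the sets G and V and the map f below are just notation I’m introducing for this sketch, not anything in the original argument.)

\[
\begin{aligned}
&\text{Let } G \text{ be the set of possible humanoid genotypes, with } |G| = N, \text{ and let } V \text{ be the set of viable humanoids.} \\
&\text{Define } f : G \to V,\ \ f(n) = \text{the Cronkite-bodied humanoid whose germline DNA is genotype } n. \\
&f \text{ is injective (distinct germlines give distinct humanoids), so } |V| \ge |G| = N. \\
&\text{At least one viable humanoid does not look like Cronkite, so } |V| \ge N + 1 > N.
\end{aligned}
\]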
So, yes, we can deduce that any brain modification which evolution could easily do on its own is very unlikely to improve the subject’s fitness. But we should not confuse this with the more general case, and the more general case is large. Which means that it’s an unfair maligning of the experimenters in Algernon and Lensman to suggest that they should have known their efforts would end in disaster, any more than the Montgolfier brothers should have known that any hot air balloon must inevitably kill its passengers.
(nods) Fair enough… that sure does sound like it’s saying what you understand it to be saying.
I can imagine ways to rescue that quote by taking very strict interpretations of “simple surgical procedure,” I guess. E.g., maybe a simple surgical procedure can’t enhance intelligence, any more than simple mathematics can predict trajectories in atmosphere; fine, but I can’t see why that’s an interesting question. But I’m really uninterested in exegesis, let alone eisegesis.
For my own part, I can imagine several technological procedures to enhance human intelligence that seem plausibly within the reach of applied cognitive science in my lifetime.