I realized that comparing machines with augmented humans on an individual level doesn’t make much sense, and edited in “in aggregate intelligence/power/wealth”, but apparently after you already started your reply. Does the new version seem more reasonable?
As I see it, your proposed classification scheme perpetuates the notion that intelligence augmentation and machine intelligence are alternatives to each other. If you see them as complementary, using the distinction between them as the basis of a classification scheme makes little sense. They are complementary—and are not really alternatives.
Yes, it would be fun if there were some kind of viable intelligence-augmentation-only way forward, but that idea just seems delusional to me. There’s no such path. Convergence means nanotech and robotics converge. It also means that intelligence augmentation and machine intelligence converge.
They are complementary—and are not really alternatives.
The fact that they are complementary doesn’t exclude the possibility that one could occur earlier than the other. For example, do you think there is negligible chance that genetic engineering or pharmaceuticals could significantly improve human intelligence before machine intelligence gets very far off the ground? Or that roughly baseline humans could create (or stumble onto) a recursively self-improving AI?
On the other hand, the “too close to call” case does seem to deserve its own category, so I’ve added one. Thanks!
Embryo selection technology already exists; all that’s needed is knowledge of the relevant alleles to select for, which should be forthcoming shortly given falling sequencing costs. Within 30 years we should see the first grown-up offspring of such selection. The effect would be greatly amplified if stem cell technology makes it possible to produce viable gametes, which the Hinxton report also estimates to be within a decade.
For example do you think there is negligible chance that genetic engineering or pharmaceuticals could significantly improve human intelligence before machine intelligence gets very far off the ground?
Germ-line genetic engineering is almost totally impotent, since it is too slow. Gene therapy is potentially faster, but faces considerably greater technical challenges. It is also irrelevant, I figure.
Pharmaceuticals might increase alertness or stamina, but their effects on productivity seem likely to be relatively minor. People have been waiting for a pharmaceutical revolution since the 1960s. We already have many of the most important drugs; it is mostly a case of figuring out how best to deploy them intelligently.
The main player on the intelligence-augmentation front that doesn’t involve machines very much is education, where there is lots of potential. Again, this is not really competition for machine intelligence. We have education now; it would make little sense to ask whether it will come “first”.