Today, we have both machine-amplified human intelligence and machine intelligence—and that situation is likely to persist until we have intelligent machines that are roughly as smart as humans.
I think the natural way to classify that is to look at when the pure machine intelligences exceed the augmented humans in aggregate intelligence/power/wealth. If that happens at a level significantly above baseline human intelligence, then I’d classify it as IA first; otherwise I’d classify it as upload first or code first, depending on the nature of the machine intelligences. (And of course there will always be “too close to call” cases.)
I think the natural way to classify that is to look at when the pure machine intelligences exceed the augmented humans.
So: by far the most important human augmentation in the future is going to involve preprocessing sensory inputs using machines, post-processing motor outputs with machines, and doing processing that bypasses the human brain entirely. Not drugs, or education, or anything else.
In such scenarios, the machines won’t ever really “overtake” the augmented humans; they will just catch up with them. A human with a robot army, for instance, is not functionally very different from a robot army. Eventually the human becomes unnecessary, and then a small burden, but that hardly seems very significant. So: the point you are talking about seems to be far in the future, difficult to measure, and inappropriate as the basis of a classification scheme.
I realized that comparing machines with augmented humans on an individual level doesn’t make much sense, and edited in “in aggregate intelligence/power/wealth”, but apparently after you already started your reply. Does the new version seem more reasonable?
As I see it, your proposed classification scheme perpetuates the notion that intelligence augmentation and machine intelligence are alternatives to each other. If you see them as complementary, using the distinction between them as the basis of a classification scheme makes little sense. They are complementary—and are not really alternatives.
Yes, it would be fun if there were some kind of viable intelligence-augmentation-only way forward, but that idea just seems delusional to me. There’s no such path. Convergence means nanotech and robotics converge. It also means that intelligence augmentation and machine intelligence converge.
They are complementary—and are not really alternatives.
The fact that they are complementary doesn’t exclude the possibility that one could occur earlier than the other. For example, do you think there is negligible chance that genetic engineering or pharmaceuticals could significantly improve human intelligence before machine intelligence gets very far off the ground? Or that roughly baseline humans could create (or stumble onto) a recursively self-improving AI?
On the other hand, the “too close to call” case does seem to deserve its own category, so I’ve added one. Thanks!
Embryo selection technology already exists; all that’s needed is knowledge of the relevant alleles to select for, which should be forthcoming shortly given falling sequencing costs. Within 30 years we should see the first grown-up offspring of such selection. The effect will be greatly amplified if stem cell technology makes it possible to produce viable gametes from stem cells, which the Hinxton report estimates is also within a decade.
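The rough magnitude of one round of such selection can be sketched with a toy Monte-Carlo calculation (my own illustration with assumed numbers, not a claim from the comment above): if embryos’ polygenic scores for a trait were i.i.d. normal, picking the highest-scoring embryo out of n captures the expected maximum of n standard normal draws.

```python
import random
import statistics

def expected_gain(n_embryos, trials=100_000):
    """Monte-Carlo estimate of the expected selection differential
    (in predictor standard deviations) from picking the highest-scoring
    embryo out of n_embryos, assuming scores are i.i.d. standard normal.
    This is a toy model; the realized trait gain would be smaller,
    scaled down by however much trait variance the predictor captures."""
    random.seed(0)  # fixed seed so the estimate is reproducible
    gains = [
        max(random.gauss(0.0, 1.0) for _ in range(n_embryos))
        for _ in range(trials)
    ]
    return statistics.mean(gains)

# Selecting the best of 10 embryos yields roughly 1.5 SD of predictor
# gain per generation; with only 2 embryos the gain is around 0.56 SD.
```

The point being debated follows from the shape of this curve: one round of selection gives a modest, diminishing-returns gain, but iterating selection via stem-cell-derived gametes would compound the gain each cycle.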
For example, do you think there is negligible chance that genetic engineering or pharmaceuticals could significantly improve human intelligence before machine intelligence gets very far off the ground?
Germ-line genetic engineering is almost totally impotent, since it is too slow. Gene therapy is potentially faster, but faces considerably more technical challenges. It is also irrelevant, I figure.
Pharmaceuticals might increase alertness or stamina, but their effects on productivity seem likely to be relatively minor. People have been waiting for a pharmaceutical revolution since the 1960s. We already have many of the most important drugs; it is mostly a case of figuring out how best to deploy them intelligently.
The main player on the intelligence augmentation front that doesn’t involve machines very much is education, where there is lots of potential. Again, this is not really competition for machine intelligence. We have education now; it would make little sense to ask whether it will be “first”.