I think that it’s acceptable when it works.
What I mean is, a lot of the transhumanist stuff is predicated on these things working properly. But we know how badly wrong computers can sometimes go, and that’s in everyone’s experience, so much so that “switch it off and switch it on again” is part of common, everyday lore now.
Imagine being so intimately connected with a computerized thingummybob that part of your conscious processing, what makes you you, is tied up with it—and it’s prone to crashing. Or hacking, or any of the other ills that can befall computery things. Potential horrorshow.
Similar for bio enhancements, etc. For example, physical enhancements like steroids, but safer and easier to use, are still a long way off, and until they arrive, people just aren’t going to go for them. We really only have a very sketchy understanding of how the body and brain work at the moment. It’s developing, but it’s still early days.
So ultimately, I think for the foreseeable future, people are still going to go for things that are separable: tools the natural organic body can pick up, put away, and easily detach itself from, at will, if they go wrong.
They’re not going to go for any more intimate connections until such things work much, much better than anything we’ve got now.
And I think it’s actually debatable whether that’s ever going to happen. It may be the case that there are limits on complexity, and that the “messy” quality of organics is actually the best way of having extremely complex thinking, moving objects—or that there’s a trade-off between having stupid things that do massive processing well, and clever things that do simple processing well, and you can’t have both in one physical (information processing) entity (but the latter can use the former as tools).
Another angle on this would be to look at the rickety nature of high IQ and/or genius—it’s a toss-up whether a hyper-intelligent being is going to be of any use at all, or just go off the rails as soon as it’s booted up. It’s probably the same for “AI”.
I don’t think any of this is insurmountable, but I think people are massively underestimating the time it’s going to take to get there; and by then we’ll already have naturally evolved into quite different beings (maybe as different from us as early hominids are), so this particular question will be moot (there will have been co-evolution with the developing tech anyway, only it will have been very gradual).