I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.
Speech and reading seem to be at most 60 bits per second. A single neuron is faster than that.
Compare that to the human brain: the optic nerve transmits about 10 million bits per second, and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude of that.
I’d call five orders of magnitude a serious bottleneck and don’t really see how it could be significantly improved without cutting humans out of the loop. That’s what your data mining example does, but it’s only as good as the algorithms behind it. And when those approach human level we get AI.
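The "five orders of magnitude" figure follows directly from the two numbers above. A quick sanity check, using the rough figures from this thread (speech/reading at ~60 bits per second, the optic nerve at ~10 million bits per second):

```python
import math

# Rough figures assumed from the discussion above, not precise measurements.
speech_bps = 60            # speech/reading bandwidth, bits per second
optic_nerve_bps = 10_000_000  # optic nerve bandwidth, bits per second

ratio = optic_nerve_bps / speech_bps
orders_of_magnitude = math.log10(ratio)

print(f"ratio: {ratio:,.0f}x")                       # ~166,667x
print(f"orders of magnitude: {orders_of_magnitude:.1f}")  # ~5.2
```

So the gap between human-to-human and intra-brain bandwidth is roughly a factor of 170,000, i.e. a bit over five orders of magnitude.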
I don’t understand your point about specialization. Can you elaborate?
Individual humans have ridiculous amounts of overlap in skills and abilities. Basic levels of housekeeping, social skills etc. are pretty much assumed. A lot of that is necessary given our social instincts and organizational structures: a savant may outperform anyone in a specific field, but good luck integrating them in an organization.
I’m not sure how much specialization can be improved with baseline humans, but relaxing the constraint that everyone should be able to function independently in the wider society might help. Also, focused training from a young age could be useful in creating genius-level specialists, but that takes time.
Also, I don’t understand what the difference between a ‘superintelligence’ and a ‘sped-up human’ would be that would be pertinent to the argument.
Given a large enough speedup and indefinite lifespan, pretty much none. The analogy may have been poorly chosen.
Wait...one sec. Isn’t all that redundancy in human society a good thing, from the perspective of saving it from existential risk?
If I were an AI, wouldn’t one of the first things I did be to create a lot of redundant subsystems, loosely coordinated in some way, so that if half of me were destroyed, the rest would live on?
It looks to me like there’s a continuum within organizations as to whether they do most of their information processing using hardware or wetware.
I acknowledge that improvements in machine intelligence may shift more of that burden to machines.
But I don’t think that changes the fact that many organizations already are superintelligences, and are in the process of cognitively enhancing themselves.
I guess I’d argue that organizations, in pursuit of cognitive enhancement, would coordinate their human and machine subsystems as efficiently as possible. There are certainly cases where specialists are taken care of by their organizations (ever visited a Google office, for example?). While there may be overlap in skills, there’s also lots of heterogeneity in society that reflects, at least in part, economic constraints.