It qualifies in some respects, but also fails in many of the respects that are usually assumed when people talk about superintelligences. E.g. Nick Bostrom:
By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.
Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects and there are many fields in which they perform much worse than a human brain—for example, you can’t have real-time conversation with “the scientific community”.
Bach (2010) argues that like AGIs, human organizations such as corporations, administrative and governmental bodies, churches and universities are intelligent agents that are more powerful than individual humans, and that the development of AGI would increase the power of organizations in a quantitative way but not cause a qualitative change.
Humans grouping into organizations are to some degree capable of taking advantage of increased parallel (but not serial) speed by adding more individuals. While organizations can institute guidelines such as peer review that help combat bias, working in an organization can introduce biases of its own, such as groupthink (Esser 1998). They cannot design new mental modules or benefit from any of the co-operative advantages digital minds may enjoy. Possibly their largest shortcoming is their reduced efficiency as the size of the organization grows and their general susceptibility to having their original goals hijacked by smaller interest groups within the organization (Olson 1965).
I get the point, but the last paragraph is kind of excessively reductive. It’s simply untrue that the only advantage accrued by putting multiple minds to work on a problem is a “parallel” one. Experts complement one another’s functions. The aggregation of optimization power can be extremely nonlinear.
Take a geologist, a geophysicist, and a petroleum engineer. Assume that they’re all experienced experts. Together these three people stand a good chance of economically finding and producing some oil. Remove any one of the three and the odds of success crater. Add more experts and productivity goes up, but there is a threshold number past which efficiency goes down—too many engineers on the same project end up impeding one another.
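The shape of that claim can be sketched with a toy model. This is entirely my own illustration, not anything from Bach or Bostrom; the functional forms and the numbers in it are arbitrary assumptions, chosen only to exhibit the two effects above: output craters when a complementary specialty is missing, and coordination overhead eventually swamps the gains from adding people.

```python
# Toy model (pure illustration; all functional forms and parameters are
# arbitrary assumptions): team output as a function of headcount, combining
# complementary specialties, diminishing parallel gains, and pairwise
# coordination overhead.

def team_output(n_members, n_specialties=3, overhead_per_pair=0.02):
    # Complementarity: output stays near zero until all specialties are
    # covered (the geologist/geophysicist/engineer effect).
    coverage = min(n_members, n_specialties) / n_specialties
    complementarity = coverage ** n_specialties  # sharply nonlinear in coverage
    # Extra members beyond the core add diminishing parallel gains...
    parallel_gain = n_members ** 0.5
    # ...while coordination costs grow with the number of pairwise links.
    pairs = n_members * (n_members - 1) / 2
    overhead = max(0.0, 1.0 - overhead_per_pair * pairs)
    return complementarity * parallel_gain * overhead

outputs = {n: round(team_output(n), 2) for n in range(1, 10)}
```

Under these made-up parameters, going from two experts to three (completing the specialty set) roughly quadruples output, while output peaks around five members and declines afterward as coordination overhead dominates. The point is only qualitative: multiplicative complementarity plus quadratic overhead already gives you strongly nonlinear, non-monotonic aggregation.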
Another example would be pair programming. A pair working together is at least allegedly more productive than the same two coders working independently. The advantage of cooperation is not merely parallel.