Why aren’t [big number] of educated people a superintelligence now?
They are. Many collections of individuals (e.g. tech companies, hedge funds, PACs, etc.) seem to do rather a lot more than an individual human could. Likewise, humanity as a whole could be classified as a superintelligence (and possibly a recursively self-improving one: see the Flynn effect). The idea is not that large numbers of intelligent people aren’t a superintelligence, it’s that 10000 von Neumanns would be a more powerful superintelligence than most groups of highly intelligent people.
Superintelligences are not “any powerful entity”; humanity is not “recursively self-improving”. This conversation was over some time in 2009 when Eliezer finally got Tim Tyler to stop applying those terms to things that already exist, as though that meant anything.
Insofar as I have seen it defined here, an intelligence is that which produces optimization given a certain amount of resources, and higher intelligences exert more optimization power than lower intelligences given the same starting conditions. By that definition, many organizations, especially tech companies, do rather a lot of optimizing given their resources. Apple, a company of 60,000 employees, made profits of $30 billion last year. Apple, effectively a profit maximizer, is doing rather more than 60,000 independent individuals would (they’re making $500,000/employee/year in profits). Considering that they are doing a lot of optimization given their starting conditions, I would say that they are at least a weakly superhuman intelligence.
Humanity is working to improve its own intelligence, and succeeding. So we have the “self-improving” right there. As we get smarter and more able, we are finding new and interesting ways to improve. Hence, “recursively”. Evidently, “self-improving in such a way that the entity can find new ways to self-improve” isn’t “recursive self-improvement”. I really don’t know what the term would mean, then, and would appreciate it if someone would enlighten me.
It is possible for the Wise Master to be mistaken, you know. He doesn’t articulate in that article his reasons for drawing lines where he does, he just says “don’t get me started”. That makes it not a great article to cite in support of those lines, since it means you are basically just appealing to his authority, rather than referencing his arguments.
Downvoted for using terms imprecisely; see The Virtue of Narrowness.