Are you or anyone else aware of any work along these lines, showing the intelligence of groups of people?
Any sense of what the intelligence of the planet as a whole, or the largest effective intelligence of any group on the planet might be?
If groups of up to 5 scale well, and we get sublinear but still positive returns above 5 up to some point, does this imply that AI won't FOOM until its intelligence exceeds the largest effective intelligence of a group of humans? That is, until an AI is more intelligent than such a group, will the group of humans dominate the rate at which new AIs are improved?
There is the MIT Center for Collective Intelligence.
Update: this is now a fairly large field of research. The Collective Intelligence Conference is going into its 7th year.