I guess I can be happy if there is any data on this at all and see what parameters are available.
Yeah.
Well, taking your question completely literally (a group of N people doing an IQ test together), there are essentially two ways to fail at an IQ test. Either you can solve each individual problem given enough time, but you run out of time before the entire test is finished. Or there is a problem that you cannot solve (better than guessing randomly) regardless of how much time you have.
The first case should scale linearly, because N people can simply split the test and each do their own part. The second case would probably scale logarithmically, because a hard problem requires a different approach, and many people will keep trying the same thing.
...but this is still about how “the number of solved problems” scales, and we need to convert that value to IQ. The standard way is “what fraction of the population would do worse than you”. But this depends on the nature of the test. If the test is “zillion simple questions, not enough time”, then a dozen random students together will do better than Einstein. But if the test is “a few very hard questions”, then perhaps Einstein could do better than a team of a million people, if some wrong answer seems more convincing than the right one to most people.
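Here is a minimal sketch of that reasoning, assuming the standard norming of IQ to mean 100 and standard deviation 15; the two scaling functions and all the specific numbers are made up purely to illustrate the linear and logarithmic cases above:

```python
import math
from statistics import NormalDist

# IQ is normed so that the population distribution is N(100, 15).
IQ_DIST = NormalDist(mu=100, sigma=15)

def percentile_to_iq(p):
    """Convert 'fraction of the population that would do worse' to IQ."""
    return IQ_DIST.inv_cdf(p)

def speed_test_score(n_people, solo_score):
    """'Zillion simple questions, not enough time': the group splits the
    test, so the raw score scales roughly linearly with group size."""
    return n_people * solo_score

def hard_test_score(n_people, solo_score):
    """'A few very hard questions': extra members mostly retry the same
    ideas, so the raw score grows only logarithmically."""
    return solo_score * (1.0 + math.log(n_people))

for n in (1, 10, 100, 1000):
    print(n, speed_test_score(n, 20), round(hard_test_score(n, 20), 1))

# The conversion step: a raw score better than 98% of individual
# test-takers maps to IQ ~131 under the standard norming.
print(round(percentile_to_iq(0.98)))  # -> 131
```

The percentile step is where the test type bites: the same raw-score gain from adding people translates into very different IQ gains depending on how the rest of the population scores.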
This reminds me of chess: how great chess players play against groups of people, sometimes against the entire world. Not the same thing you want, but you might be able to get more data here: the records of such games, and the ratings of the chess players.
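For the chess angle, the standard Elo expected-score formula gives a way to back out an effective rating for the group from such game records. A rough sketch (the formula is the standard one; the bisection solver and the example numbers are my own illustration):

```python
def elo_expected(r_a, r_b):
    """Expected score of player A vs. player B under the Elo model."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def effective_rating(opponent_ratings, scores, lo=0.0, hi=4000.0):
    """Bisect for the rating whose predicted total score against the
    listed opponents matches the observed total score."""
    total = sum(scores)
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if sum(elo_expected(mid, r) for r in opponent_ratings) < total:
            lo = mid  # predicted score too low: the group must be stronger
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical record: a draw (0.5) against a 2800-rated grandmaster
# implies an effective group rating around 2800.
print(round(effective_rating([2800], [0.5])))  # -> 2800
```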
Sure, it depends on the type of task. But I guess we would learn a lot about human performance if we tried such experiments. For example, consider your “many small tasks” scenario: even a single person will finish the last task faster than the first one in most cases.
I like your chess against a group example.