Even an attempt to find Einsteins is doomed unless they make up a large enough fraction of the population. (cf. Eliezer’s introduction to Bayes.)
On the other hand, a purely aggregate approach is a dirty hack that somehow assumes no (irrational) individual is ever able to be a bottleneck to (aggregate) good sense. It’s also fragile to societal breakdown.
It seems evident to me that what’s really urgent is to “raise the tide” and have it “lift all boats”. Because then tests start working, and the individuals at the bottleneck are rational.
I predict that aggregate approaches are going to be more common in the future than waiting around for an Einstein-level intelligence to be born.
For example, Timothy Gowers recently began a project (Polymath1) to solve an open problem in combinatorics through distributed proof methods. Current opinion is that they were probably successful; unfortunately, the math is too hard for me to render judgment.
Now, it’s possible that they were successful because the project attracted the notice of Terence Tao, who probably qualifies as an Einstein-level mathematician. If you look at the discussion, Tao and Gowers both dominate it. On the other hand, many of the major breakthroughs in the project didn’t come from either of them directly, but from other anonymous or pseudonymous comments.
The time of an Einstein or Tao is too valuable for them to do all the thinking by themselves. We agree that raising the tide is absolutely necessary for this kind of project to grow.
For Polymath, the desired result of the collaboration is clear to me: a new proof (or disproof) of a mathematical statement.
What is the desired result of collaborating rationalists?
From the talk about prediction markets it seems that “accurate predictions” might be one answer. But predictions of what? Would we need to aggregate our values to decide what we want to predict?
The phrase in Robin’s post was “join together to believe truth”, so perhaps the desired result is more true beliefs (in more heads)? Did you envision making things that are more likely to be true more visible, so that they become defaults? In other words, caching the results of truth-seeking so they can be easily shared by more people?