Metaculus did a study comparing prediction markets with a small number of participants to those with a large number, and found that you get most of the benefit at relatively small numbers (10 or so). So if you randomly sample 10 AI experts and survey their opinions, you’re doing almost as well as a full prediction market. The fact that multiple AI markets (Metaculus, Manifold) and surveys all converge on the same 5-10% suggests that none of these methodologies is wildly flawed.
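To illustrate the diminishing-returns intuition (most of the aggregation benefit arriving by roughly 10 forecasters), here is a minimal simulation sketch. The noise model, `true_p`, and `noise_sd` are assumptions chosen for illustration, not parameters from the Metaculus study:

```python
# Illustrative simulation (not the Metaculus study): how quickly does
# averaging independent, noisy forecasts approach the large-crowd answer?
import numpy as np

rng = np.random.default_rng(0)
true_p = 0.07      # assumed "true" probability, in the 5-10% range discussed
noise_sd = 1.0     # assumed forecaster noise on the log-odds scale
n_trials = 20_000

def crowd_error(n):
    """Mean absolute error of the average of n noisy forecasts."""
    logit = np.log(true_p / (1 - true_p))
    samples = rng.normal(logit, noise_sd, size=(n_trials, n))
    avg_p = 1 / (1 + np.exp(-samples.mean(axis=1)))
    return np.abs(avg_p - true_p).mean()

for n in (1, 3, 10, 30, 100):
    print(f"n={n:3d}  mean abs error ~ {crowd_error(n):.3f}")
# Error shrinks roughly like 1/sqrt(n), so going from 1 to 10 forecasters
# captures most of the reduction; 10 -> 100 adds comparatively little.
```

Averaging is done on the log-odds scale here purely as a modeling choice; the qualitative point (steep gains up to ~10 participants, then flattening) doesn’t depend on it.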
I mean, it only suggests that they’re highly correlated. I agree that it seems likely they represent the views of the average “AI expert” in this case. (I should take a look to check who was actually sampled.)
My main point here is that we probably shouldn’t pay this particular prediction market too much attention in place of, e.g., the survey you mention. I probably also wouldn’t give the survey too much weight compared to the opinions of particularly thoughtful people, but I agree that this needs to be argued.