As long as you have an existing set of questions whose answers are known to the organizers but unknown to the participants, you can give instant feedback.
Public knowledge that you can find on Wikidata works for an offline tournament. For an online tournament, where participants can look things up, you can use data from nonpublic experiments instead.
The CASP tournament for protein structure prediction uses that method. For our purposes, I think surveys make good experimental data.
But in that case, it isn’t really about prediction anymore. A game like that rewards knowledge, not the ability to do research and deal with probabilistic information.
Someone who has read a lot of Wikipedia, or who happens to have read papers on topics similar to the experiment in question, could outperform someone who reasons very rationally from a different base of domain knowledge. This makes it closer to a quiz show, i.e. a less original and less interesting event.
A slow, online tournament (where everyone has the same internet to do research in) greatly reduces the value of raw factual knowledge and makes success depend more on the ability to weigh evidence.
I’m not sure why you consider quiz shows to be uninteresting. It’s quite a successful format when it comes to gathering an audience.
I don’t know why you think that. Quiz shows need huge production values and very valuable prizes to remain interesting.
With the kind of budget that’s conceivable for a startup group of amateur organizers, you have to be novel and creative to be worth noticing outside the immediate circle of participants. Sure, you could run a quiz show on a shoestring budget, but nobody is going to talk about it afterward.
And since this is about reaching people with the ideas of thinking in probabilities and updating on evidence, any event that doesn’t get talked about afterward is a failure, even if the event itself was entertaining.
I hold that opinion because a variety of quiz shows are commercially successful. I think most successful entertainment offers experiences with short feedback loops.
I don’t see how the event you propose is about updating on evidence. Updating on evidence in the sense the Good Judgment Project did it needs longer time frames than a tournament of a few days.
I see that the offline model doesn’t let people compete on research ability, but competing on calibration still gives you an event that’s about probabilities. It has the advantage that players can make a lot more predictions in a short time frame, so the tournament is less likely to be won by lucky, overconfident participants.
A 2-day event where people do one hour of research per question likely doesn’t give you a dataset large enough to pick a winner based on skill rather than luck.
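The luck-vs-skill point above can be sketched numerically. A minimal simulation, under assumptions of my own (binary questions scored with the Brier score, a perfectly calibrated player who reports the true probability, and an overconfident player who pushes the same belief toward the extremes — none of this is specified in the thread): it counts how often the overconfident player wins a tournament as the number of questions grows.

```python
# Sketch, not a definitive model: Brier-scored binary questions, one
# calibrated and one overconfident forecaster. All parameters (probability
# range, 0.95/0.05 extremes, trial counts) are illustrative assumptions.
import random

def brier(p, outcome):
    """Squared error between forecast p and the 0/1 outcome; lower is better."""
    return (p - outcome) ** 2

def overconfident_wins(n_questions, rng):
    """Play one tournament; return True if the overconfident player scores better."""
    cal = over = 0.0
    for _ in range(n_questions):
        truth_p = rng.uniform(0.1, 0.9)        # true chance the event happens
        outcome = 1 if rng.random() < truth_p else 0
        cal += brier(truth_p, outcome)         # calibrated: reports the truth
        over += brier(0.95 if truth_p >= 0.5 else 0.05, outcome)  # rounds to extremes
    return over < cal

rng = random.Random(0)
rates = {}
for n in (16, 200):
    wins = sum(overconfident_wins(n, rng) for _ in range(2000))
    rates[n] = wins / 2000
    print(f"{n} questions: overconfident player wins {rates[n]:.0%} of tournaments")
```

With only a handful of questions, the overconfident player wins a noticeable fraction of tournaments on luck alone; with a couple of hundred, almost never — which is the argument for formats that fit many predictions into the event.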