Better to concretise 3 ways than 1 if you have the time.
Here’s a tale I’ve heard but not verified: in the good old days, Intrade had a prediction market on whether Obamacare would become law, which resolved negative, due to the market’s definition of Obamacare.
Sometimes you’re interested in answering a vague question, like ‘Did Donald Trump enact a Muslim ban in his first term?’ or ‘Will I be single next Valentine’s Day?’. Standard advice is to make the question more specific and concrete, turning it into something that can be evaluated more objectively. I think this is good advice. However, any concretisation will inevitably miss some aspects of the original vague question that you cared about. As such, it’s probably better to concretise the question in multiple ways with different failure modes. This is fairly obvious when evaluating questions about things that have already happened, like whether a Muslim ban was enacted, but it seems less obvious or standard in the forecasting setting. That said, it is sometimes done—OpenPhil’s animal welfare series of questions seems to me to be basically an example—to good effect.
This procedure does have real costs. First, concretising vague questions is hard, and concretising multiple times is harder than concretising once. It’s also hard to predict multiple questions, especially if they’re somewhat independent—as they must be to get the benefits—meaning each question will be predicted less well. In a prediction market context, this may well manifest as multiple thin, unreliable markets instead of one thick, reliable one.