Here’s an idea I’ve been ruminating on: create a bunch of nearly identical forecast questions, all worded slightly differently, and grade with maximum inflexibility. Sometimes a pair of nearly identical questions will come to opposite resolutions. In such cases, forecasters who pay close attention to the words may be able to get both questions right, whereas people who treated them the same will get one right and one wrong.
On average, wouldn’t this help things a bit?
It’s an interesting idea, but one that seems to impose very high costs on forecasters, who would have to keep their predictions updated and coherent across every variant.
If we imagine that we pay forecasters the market value of their time, an active forecasting question with a couple dozen people each spending a half hour updating their forecast “costs” thousands of dollars per week. Multiplying that across a batch of variants, even accounting for the reduced marginal cost of researching similar questions, doesn’t seem worth it.
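To make the back-of-the-envelope concrete, here is a rough sketch of that cost estimate. The specific numbers (a $100/hr rate, 24 forecasters, two updates per week, a batch of 10 variants at a 50% research discount) are my assumptions for illustration, not figures from the conversation:

```python
# Hypothetical back-of-the-envelope cost of keeping one active question updated.
HOURLY_RATE = 100        # assumed market value of a forecaster's time, $/hr
FORECASTERS = 24         # "a couple dozen people"
HOURS_PER_UPDATE = 0.5   # "a half hour each"
UPDATES_PER_WEEK = 2     # assumed update cadence

weekly_cost = HOURLY_RATE * FORECASTERS * HOURS_PER_UPDATE * UPDATES_PER_WEEK
print(weekly_cost)  # 2400.0 — "thousands of dollars per week"

# Multiplying across a batch of near-duplicate variants, with an assumed
# 50% discount because research on one variant transfers to the others:
VARIANTS = 10
DISCOUNT = 0.5
batch_cost = weekly_cost * VARIANTS * DISCOUNT
print(batch_cost)  # 12000.0 per week for the whole batch
```

Even with a generous discount for shared research, the batch cost scales roughly linearly in the number of variants, which is the crux of the objection.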
Hm okay. And is this a problem for prediction markets too, even though participants expect to profit from their time spent?
The way I imagine it, sloppier traders will treat a batch of nearly identical questions as identical, arbitraging among them and causing the prices to converge. Meanwhile, the more literal-minded traders will think carefully about how the small changes in the wording might imply large changes in probability, and they will occasionally profit by pushing the batch of prices apart.
But maybe most traders won’t be that patient, and will prefer meta-resolution or offloading.
I still feel like I’m onto something here...
Generally agree that there’s something interesting here, but I’m still skeptical that in most prediction market cases there would be enough money across questions, and enough variance in probabilities, for this to work well.