“I’d be willing to bet $1,000 with anyone that the eventual total error of my forecasts will be less than the 65th percentile of my specified predicted error.”
I think this is equivalent to applying a non-linear transformation to your proper scoring rule. When things settle, you get paid both for the outcome of your object-level prediction p (via S(p)) and for your meta prediction q about that score S(p).
Hence:
S(p) + B(q(S(p)))
where B is the “betting scoring function”.
This means getting the scoring rules to work while preserving properness will be tricky (though not necessarily impossible).
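To see the problem concretely, here is a small numerical sketch (my own illustration, not from the thread): Brier scoring at both levels, a belief of 0.7, and a threshold-style meta bet "my score S(p) will be at least 0.7". A grid search shows the Brier rule alone is maximized by truthful reporting, while the combined payout S(p) + B(q(S(p))) rewards a distorted report, since a hedged p makes S(p) predictable and the meta bet a sure win.

```python
# Illustrative assumptions: Brier scoring for both levels, belief b = 0.7,
# and a meta bet on the event "my object-level score S(p) >= t".
b = 0.7   # agent's true belief that the event happens
t = 0.7   # threshold for the meta bet

def brier(pred, outcome):
    """Proper scoring rule: expected score is maximized by truthful reporting."""
    return 1.0 - (pred - outcome) ** 2

def expected_total(p, meta=True):
    """Expected payout S(p) + B(q, S(p)), with the meta report q chosen optimally."""
    s1, s0 = brier(p, 1), brier(p, 0)        # score if the event does / doesn't happen
    e_s = b * s1 + (1 - b) * s0              # expected object-level score
    if not meta:
        return e_s
    a1, a0 = float(s1 >= t), float(s0 >= t)  # does the meta bet win in each case?
    q = b * a1 + (1 - b) * a0                # optimal meta report (Brier is proper)
    e_b = b * brier(q, a1) + (1 - b) * brier(q, a0)
    return e_s + e_b

grid = [round(i * 0.01, 2) for i in range(101)]
best_alone = max(grid, key=lambda p: expected_total(p, meta=False))
best_combined = max(grid, key=expected_total)
print(best_alone)     # 0.7  -> truthful: the Brier rule alone is proper
print(best_combined)  # 0.54 -> the combined payout is no longer proper
```

The hedged report wins because for p near 0.5 the score S(p) is nearly the same under either outcome, so the meta bet pays off with certainty; the loss on the object-level score is smaller than the gain on the meta bet.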
One mechanism that might help: each player makes one object-level prediction p and one meta prediction q, but at resolution you randomly sample one and only one of the two to actually pay out.
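The resolution step of that sampling mechanism might look like the following minimal sketch (the scoring functions `score` and `meta_score` and the 50/50 sampling are illustrative assumptions, not specified in the thread):

```python
import random

def resolve(p, q, outcome, score, meta_score, rng=random):
    """Pay out exactly one of the two predictions, chosen at random.

    p: object-level prediction; q: meta prediction about the score;
    score(p, outcome) computes S(p); meta_score(q, s) computes B(q(S(p))).
    Returns (which prediction was paid, the payout).
    """
    s = score(p, outcome)
    if rng.random() < 0.5:
        return ("object", s)              # pay the object-level score S(p)
    return ("meta", meta_score(q, s))     # pay the meta bet B(q(S(p)))
```

Each of the two predictions is paid with probability 1/2, so a single resolution never pays both at once.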
Interesting, thanks! Yea, agreed it’s not proper. Coming up with interesting payment / betting structures for “package-of-forecast” combinations seems pretty great to me.
Abstract. A potential downside of prediction markets is that they may incentivize agents to take undesirable actions in the real world. For example, a prediction market for whether a terrorist attack will happen may incentivize terrorism, and an in-house prediction market for whether a product will be successfully released may incentivize sabotage. In this paper, we study principal-aligned prediction mechanisms – mechanisms that do not incentivize undesirable actions. We characterize all principal-aligned proper scoring rules, and we show an “overpayment” result, which roughly states that with n agents, any prediction mechanism that is principal-aligned will, in the worst case, require the principal to pay Θ(n) times as much as a mechanism that is not. We extend our model to allow uncertainties about the principal’s utility and restrictions on agents’ actions, showing a richer characterization and a similar “overpayment” result.
I think this paper might be relevant: https://users.cs.duke.edu/~conitzer/predictionWINE09.pdf