CFAR (in 2012, at least) had a market where your score was based on how much you updated the previous bet towards the truth. I really enjoyed its interactive nature.
Unfortunately, this would be easy to abuse online. Create a sockpuppet account, make a stupid prediction, and then quickly fix the prediction using your real account. This is equivalent to moving bits from one account to the other.
At the CFAR workshop all participants were real people. But there was still an overlooked way to abuse the system: there were rewards for winning, but no punishment for losing. So two people could agree to transfer a lot of bits from one to the other, and split the prize afterwards.
Maybe a system that is more difficult to abuse can be designed, but a direct copy of the algorithm used at CFAR isn’t it.
At the CFAR workshop all participants were real people. But there was still an overlooked way to abuse the system: there were rewards for winning, but no punishment for losing. So two people could agree to transfer a lot of bits from one to the other, and split the prize afterwards.
Not quite true. I ran the markets, and I did threaten to fearsomely glare at people who were abusing the system. (And my glare is very fearsome.)
You’re right. Gaming the system is feasible, though I believe it is very low-value.
What exactly would the point of gaming a prediction thread be? The only point-keeping would be informal, so if you’re making a bunch of points off of idiotic sockpuppet bets, it’s still visible that you were only up against an idiotic bet. It’d be almost like lying in the group diary.
Do note, there was actually a HUGE punishment for losing. You could go negative pretty easily by being stupidly overconfident. The scoring was 100 × log2(your probability of the outcome / the previous bet’s probability of the outcome). For example: updating a 50% house bet to 99% would give you 98.55 “bits” if you were correct, and −564.39 if you were wrong.
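As a quick check of those numbers, here is that scoring rule in Python (a sketch; the function name is mine, not CFAR’s):

```python
import math

def market_score(p_yours: float, p_previous: float) -> float:
    """Score for one update: 100 * log2(your probability of the outcome
    / the previous bet's probability), for the outcome that occurred."""
    return 100 * math.log2(p_yours / p_previous)

# Updating a 50% house bet to 99%:
print(round(market_score(0.99, 0.50), 2))  # +98.55 if you were right
print(round(market_score(0.01, 0.50), 2))  # -564.39 if you were wrong
```

The asymmetry is the point: the log blows up as your stated probability of what actually happened approaches zero.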
Do note, there was actually a HUGE punishment for losing.
Only to the extent that you care about points, whereas the winner was given a tangible prize (in my case, a book).
Actually, I’m now remembering that that isn’t entirely true: there was a prize for the person with the most points, but also a prize that was assigned randomly, weighted according to ((player points) - (least number of points of any player) + 1), or something like that. So the more you lose by, the smaller your chance of winning that prize. But if you’re near the back anyway, your chance of winning is so small that this is a very weak punishment.
(I think we might have had someone who was convinced to get many negative points, to reduce the effective spread among everyone else. Or I might be making that up.)
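If I’m reading that weighting right, the lottery could be sketched like this (names and structure are my own invention, not whatever CFAR actually ran):

```python
import random

def lottery_weights(points: dict[str, float]) -> dict[str, float]:
    """Weight each player by (points - minimum points + 1), so the
    last-place player still holds exactly 1 ticket."""
    floor = min(points.values())
    return {name: pts - floor + 1 for name, pts in points.items()}

def draw_prize_winner(points: dict[str, float], rng=random) -> str:
    weights = lottery_weights(points)
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Near the back, extra losses barely change your already-tiny chance:
print(lottery_weights({"alice": 200, "bob": 10, "carol": -564}))
```

With these made-up scores, carol holds 1 ticket against alice’s 765 and bob’s 575, which illustrates the “very small punishment” once you’re far behind.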
But there was still an overlooked way to abuse the system: there were rewards for winning, but no punishment for losing. So two people could agree to transfer a lot of bits from one to the other, and split the prize afterwards.
I raised this possibility, but an instructor said they’d use human judgment to stop us from doing that.
(My actual idea was along the lines of “if two of us decide that we aren’t going to come to agreement on a market, we can just repeatedly alternate our bets, and each expect that we’re getting arbitrarily many points from this”. The instructor said something like, they’d just ignore all but the final bets if they thought we were doing that.)
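To see why each side expects to gain from that alternation, here is the subjective expected score of one move under the log scoring rule described earlier (a sketch; the 80%/20% beliefs are made up for illustration):

```python
import math

def expected_score(belief: float, p_new: float, p_prev: float) -> float:
    """Expected score of moving the market from p_prev to p_new, computed
    under the mover's own probability `belief` for the outcome."""
    return 100 * (belief * math.log2(p_new / p_prev)
                  + (1 - belief) * math.log2((1 - p_new) / (1 - p_prev)))

# Alice believes 80%, Bob believes 20%; they alternate between those bets.
# Because the log score is proper, each EXPECTS to profit on every move
# back to their own belief, so in expectation the points grow without bound:
print(round(expected_score(0.8, 0.8, 0.2), 2))  # Alice's expected gain per move: 120.0
print(round(expected_score(0.2, 0.2, 0.8), 2))  # Bob's expected gain per move: 120.0
```

Of course the actual (realized) scores are zero-sum over each back-and-forth pair of moves; only the subjective expectations are both positive.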
Ah, I do not believe there was such a prize system at my minicamp.