As I’m wont to do, I was thinking about how to make that theory pay rent, and it occurred to me that this could be exploitable. If the typical mind fallacy is correct, we should be able to run it in the other direction: derive information about a person’s proclivities from what they think about other people.
Yep! This is actually a standard method: ask people to estimate what other people do. A version of this is the ‘Bayesian truth serum’ trick.
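To make that concrete, here is a minimal sketch of how I understand Prelec-style BTS scoring: each respondent gives their own answer plus a predicted distribution of everyone's answers, and their score is an 'information' term for how surprisingly common their answer turns out to be, plus a penalty for predicting the empirical distribution badly. The numpy implementation, the alpha weight, and the clipping are my own choices for illustration, not anything taken from the paper:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Sketch of Prelec-style BTS scoring (alpha and clipping are my choices).

    answers:     length-n sequence of ints in {0, ..., m-1}, each respondent's answer
    predictions: (n, m) array, each respondent's predicted distribution of answers
    Returns an (n,) array: information score + alpha * prediction score.
    """
    answers = np.asarray(answers)
    predictions = np.clip(np.asarray(predictions, dtype=float), eps, 1.0)
    n, m = predictions.shape

    # Empirical answer frequencies and the geometric mean of the predictions.
    x_bar = np.bincount(answers, minlength=m) / n
    log_y_bar = np.log(predictions).mean(axis=0)

    # Information score: reward answers that are "surprisingly common",
    # i.e. more frequent than the crowd collectively predicted.
    info = np.log(x_bar[answers]) - log_y_bar[answers]

    # Prediction score: relative-entropy penalty (always <= 0) for
    # predicting the empirical distribution badly.
    pred = np.sum(x_bar * (np.log(predictions) - np.log(np.clip(x_bar, eps, None))),
                  axis=1)

    return info + alpha * pred
```

The information term rewards answers that occur more often than the crowd's geometric-mean prediction, which is what lets the mechanism pick out a minority answer when the minority actually knows something.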
The ‘truth serum’ property of the method is proved only for infinite populations. Intuitively it seems quite clear to me that for small populations the method can be gamed easily. Do you know of any results on the robustness of the method with respect to population size when there is an incentive to mislead?
Prelec’s formal results hold for large populations, but the mechanism held up well experimentally with 30-50 participants.
Witkowski and Parkes develop a truth serum for binary questions with as few as 3 participants. Their mechanism also avoids the potentially unbounded payments required by Prelec’s BTS. Unfortunately, the WP truth serum seems very sensitive to the common-prior assumption.
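For what it's worth, here is my rough reconstruction of the Witkowski and Parkes mechanism for the binary case; the reference/peer selection and the 'shadowing' step are from memory, so treat the details as approximate rather than as the paper's exact scoring rule:

```python
import random

def quadratic_score(p, outcome):
    """Binary quadratic (Brier-style) proper scoring rule, bounded in [0, 1]."""
    return 2 * p - p * p if outcome == 1 else 1 - p * p

def rbts_scores(answers, predictions, rng=random):
    """Sketch of a 'robust BTS' score for a binary question (needs n >= 3).

    answers:     list of 0/1 information reports
    predictions: list of floats in [0, 1], each agent's predicted frequency of
                 '1' answers among the other agents
    """
    n = len(answers)
    assert n >= 3, "needs at least 3 participants"
    scores = []
    for i in range(n):
        # Pick a reference agent j and a peer agent k, both distinct from i.
        j, k = rng.sample([a for a in range(n) if a != i], 2)
        # "Shadow" the reference prediction toward agent i's own answer.
        delta = min(predictions[j], 1 - predictions[j])
        shadow = predictions[j] + delta if answers[i] == 1 else predictions[j] - delta
        # Information score plus prediction score, both judged against the peer.
        scores.append(quadratic_score(shadow, answers[k]) +
                      quadratic_score(predictions[i], answers[k]))
    return scores
```

Because the quadratic scoring rule is bounded, the payments stay bounded, which is the property mentioned above; the flip side is that the shadowing step leans heavily on everyone sharing the same prior.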
Wait, wait, let me understand this. It’s the robust knowledge aggregation part that held up experimentally, not the truth serum part, right? In that experiment the participants had little incentive to game the system, and they didn’t even have a full understanding of the system’s internals. In contrast, prediction markets are supposed to work even if everybody tries to game them constantly.
Manipulability is addressed experimentally in a different working paper. The participants weren’t told the internals and the manipulations were mostly hypothetical, but honesty was the highest-scoring strategy among those they considered.
In some sense, it’s easy to manipulate BTS to give a particular answer. The only problem is that you might end up owing the operator incredibly large sums of money. If payments to and from the mechanism aren’t actually being made, BTS is worthless once people try to game it. I should have a post up shortly about a better mechanism.
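To make the 'incredibly large sums' point concrete, here is a toy example (my own numbers) reusing the bts_scores sketch from earlier in the thread: a respondent who reports the target answer while predicting it at near-zero frequency inflates that answer's information score, but their own prediction penalty grows without bound as their stated prediction approaches zero.

```python
import numpy as np

# Assumes the bts_scores sketch from the earlier comment is in scope.
answers = [0, 0, 1, 1, 1, 1]            # answer 1 is in fact common
honest = [0.5, 0.5]
manip = [0.999, 0.001]                  # the manipulator claims almost nobody says 1
predictions = np.array([honest] * 5 + [manip])

scores = bts_scores(answers, predictions)
print(scores[-1])  # strongly negative; the penalty diverges as 0.001 -> 0
```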
No. In one of the posts or papers I recall seeing discussion that it is deceivable (and so you wouldn’t necessarily want to explain the procedure), but that the obvious way doesn’t work.
Actually, that’s Yvain’s post, not mine...
Whoops, edited.
And that’s exactly the search term I was missing. Good to see it is a real thing.