It also adds an attack vector, both for those willing to spend to influence the automation
I’m optimistic that we can cope with this in a very robust way (e.g. by ensuring that when there is disagreement, the disagreeing parties end up putting in enough money that the arbitrage can be used to fund moderation).
and for those wanting to make a profit on their influence over the moderators
This seems harder to convincingly address.
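To make the arbitrage idea in the reply above concrete, here is a minimal sketch in Python. The pricing rule, the `forum_cut` parameter, and the specific numbers are all my own illustration rather than part of the original proposal; the point is only that when two bettors assign sufficiently different probabilities to the moderator’s ruling, the forum can sell each of them the side they consider favorable and keep a riskless surplus that pays for moderation.

```python
# Hypothetical illustration (names and numbers are mine, not from the
# original discussion): price a binary "will the moderator approve?" bet
# between two parties who disagree, so that the forum keeps a riskless
# surplus that can pay the moderator if the dispute is escalated.

def price_dispute(p_yes: float, p_no: float, forum_cut: float = 1 / 3):
    """p_yes: the optimist's probability of approval; p_no: the pessimist's.

    Returns (price charged for the YES side, price charged for the NO side,
    surplus the forum keeps no matter how the moderator rules).
    """
    gap = p_yes - p_no
    assert gap > 0, "need genuine disagreement"
    edge = (1 - forum_cut) * gap / 2   # subjective profit left to each bettor
    yes_price = p_yes - edge           # optimist pays less than their belief
    no_price = (1 - p_no) - edge       # pessimist likewise, for the NO side
    # Exactly one side pays out $1, so the forum's surplus is riskless.
    surplus = yes_price + no_price - 1
    return yes_price, no_price, surplus

# Beliefs of 0.8 vs. 0.2 give prices of ~0.6 each: both bettors expect a
# profit of ~0.2 by their own lights, and the forum keeps ~0.2 to fund a
# possible moderation.
print(price_dispute(0.8, 0.2))
```

Note that under this kind of scheme the surplus scales with the size of the disagreement, so only disputes with substantial disagreement generate enough to pay a moderator.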
But I don’t think there’s any solution that doesn’t involve a lot more ground-truthing by trusted evaluators.
So far I don’t see any lower bounds on the amount of ground truth required. I expect that there aren’t really theoretical limits: if the moderator were only willing to moderate in return for very large sums of money, then the cost per comment would be quite high, but they would potentially have to moderate very few times (some illustrative arithmetic follows the list below). I see two fundamental limits:
Moderation is required in order to reveal info about the moderator’s behavior, which is needed by sophisticated bettors. This information could also be provided in other ways.
Moderation is required in order to actually move money from the bad predictors to the good predictors. (This doesn’t seem important for “small” forums, since in that case the incentive effects are always the main thing; i.e., the relevant movement of funds from bad to good predictors happens at the scale of the world at large, not at the scale of a particular small forum.)
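To illustrate the point before the list with some made-up numbers: the expected ground-truthing cost per comment is just the escalation rate times the moderator’s fee, so the same per-comment cost is compatible with arbitrarily infrequent moderation if the fee scales up to match.

```python
# Made-up numbers, only to illustrate the "no lower bound" point above:
# a moderator who charges 1000x more but is consulted 1000x less often
# imposes the same expected cost per comment while supplying far less
# ground truth.

def expected_cost_per_comment(fee: float, escalation_rate: float) -> float:
    """Expected moderation cost per comment posted."""
    return fee * escalation_rate

for fee, rate in [(10.0, 1 / 50), (10_000.0, 1 / 50_000)]:
    print(f"fee ${fee:>9,.2f} | escalation rate {rate:.5%} | "
          f"expected cost per comment ${expected_cost_per_comment(fee, rate):.2f}")
```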
I’m optimistic that we can cope with this in a very robust way (e.g. by ensuring that when there is disagreement, the disagreeing parties end up putting in enough money that the arbitrage can be used to fund moderation).
That assumes that many people are aware of a given post over which there are disagreements in the first place.