It also adds an attack vector, both for those willing to spend to influence the automation, and for those wanting to make a profit on their influence over the moderators.
I’d love to see a model displayed alongside the actual karma and results, and I’d like to be able to set my thresholds for each mechanism independently. But I don’t think there’s any solution that doesn’t involve a lot more ground-truthing by trusted evaluators.
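As a rough illustration of what "independent thresholds for each mechanism" could look like, here is a minimal sketch; the field names (actual_karma, predicted_karma) and the threshold structure are my own assumptions, not part of any existing system discussed here.

```python
# Minimal sketch: show a model's prediction next to the actual karma and let
# each reader gate visibility with a separate threshold per mechanism.
# All field and parameter names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    actual_karma: int        # votes cast so far
    predicted_karma: float   # model's / market's estimate of eventual karma

@dataclass
class ReaderThresholds:
    min_actual: int = 0        # hide if actual karma falls below this
    min_predicted: float = 0.0 # hide if the predicted karma falls below this

def visible(c: Comment, t: ReaderThresholds) -> bool:
    # Each mechanism is compared against its own threshold, so a reader can
    # lean on the raw votes, on the model, or require both to pass.
    return c.actual_karma >= t.min_actual and c.predicted_karma >= t.min_predicted

print(visible(Comment("example", actual_karma=2, predicted_karma=-1.5),
              ReaderThresholds(min_actual=0, min_predicted=0.0)))  # False
```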
Note that we could move one level of abstraction out—use algorithms (possibly ML, possibly simple analytics) to identify trust level in moderators, which the actual owners (those who pick the moderators and algorithms) can use to spread the moderation load more widely.
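One very simple version of that "one level out" move, sketched under my own assumptions (the spot-check data format, the Laplace smoothing, and the proportional load sharing are all illustrative choices, not anything specified above):

```python
# Rough sketch: score moderators by how often their rulings match the owner's
# occasional spot checks, then spread the moderation load in proportion to
# those trust scores.

def trust_scores(spot_checks: dict[str, list[bool]]) -> dict[str, float]:
    """Map moderator -> smoothed agreement rate with the owner's rulings."""
    return {mod: (sum(matches) + 1) / (len(matches) + 2)  # Laplace smoothing
            for mod, matches in spot_checks.items()}

def load_shares(scores: dict[str, float]) -> dict[str, float]:
    """Give each moderator a share of the load proportional to their trust."""
    total = sum(scores.values())
    return {mod: score / total for mod, score in scores.items()}

checks = {"alice": [True, True, True, False], "bob": [True, False, False]}
print(load_shares(trust_scores(checks)))
```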
It also adds an attack vector, both for those willing to spend to influence the automation
I’m optimistic that we can cope with this in a very robust way (e.g. by ensuring that when there is disagreement, the disagreeing parties end up putting in enough money that the arbitrage can be used to fund moderation).
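To make the "disagreement funds moderation" idea concrete, here is a toy settlement rule; the fee, the two-sided stake pool, and the payout rule are my own simplifications rather than anything proposed above.

```python
# Toy sketch: bettors stake on "keep" vs "remove" for a disputed comment.
# A moderator is only paid (out of the pot) when both sides have money in
# and the pot covers the fee; the rest goes to the side the ruling favors.

MODERATION_FEE = 10.0  # hypothetical price per ruling

def settle(stake_keep: float, stake_remove: float, ruling_keep: bool):
    pot = stake_keep + stake_remove
    if min(stake_keep, stake_remove) == 0 or pot < MODERATION_FEE:
        return None  # no real disagreement, or not enough money to buy a ruling
    winning_stake = stake_keep if ruling_keep else stake_remove
    # Winners split what is left after the moderator is paid, pro rata;
    # return the multiplier applied to each winning dollar staked.
    return (pot - MODERATION_FEE) / winning_stake

print(settle(stake_keep=30.0, stake_remove=20.0, ruling_keep=True))  # ~1.33
```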
and for those wanting to make a profit on their influence over the moderators
This seems harder to convincingly address.
But I don’t think there’s any solution that doesn’t involve a lot more ground-truthing by trusted evaluators.
So far I don’t see any lower bounds on the amount of ground truth required. I expect that there aren’t really theoretical limits: if the moderator were only willing to moderate in return for very large sums of money, then the cost per comment would be quite high, but they would potentially have to moderate very few times (a rough cost calculation follows the list below). I see two fundamental limits:
Moderation is required in order to reveal info about the moderator’s behavior, which is needed by sophisticated bettors. This could also be provided in other ways.
Moderation is required in order to actually move money from the bad predictors to the good predictors. (This doesn’t seem important for “small” forums, since in that case the incentive effects are always the main thing, i.e. the relevant movement of funds from bad to good predictors is happening at the scale of the world at large, not at the scale of a particular small forum.)
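The promised rough calculation, with purely illustrative numbers: an expensive moderator is compatible with a low cost per comment as long as actual rulings are rare.

```python
# Back-of-the-envelope numbers (made up for illustration): even a moderator
# who only works for large sums keeps the per-comment cost low if bettors
# resolve almost everything and rulings are rarely needed.

fee_per_ruling = 500.0        # moderator only moderates for a large sum
rulings_per_comment = 0.002   # ~1 comment in 500 actually reaches the moderator

expected_cost_per_comment = fee_per_ruling * rulings_per_comment
print(expected_cost_per_comment)  # 1.0
```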
I’m optimistic that we can cope with this in a very robust way (e.g. by ensuring that when there is disagreement, the disagreeing parties end up putting in enough money that the arbitrage can be used to fund moderation).
That assumes that many people are aware of a given post over which there are disagreements in the first place.
But I don’t think there’s any solution that doesn’t involve a lot more ground-truthing by trusted evaluators.
100 to 1000 votes by trusted evaluators might not be enough. On the other hand, I think the number of votes needed is small enough to have a stable system.
When a given post is very unclear because there are strong signals that it should be hidden and also strong signals that it should be displayed prominently, that post could go to a special assessment list that the moderator prunes from time to time.
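A sketch of that routing rule, with made-up signal scales and threshold:

```python
# Sketch of the "special assessment list": a post with strong signals both to
# hide and to display goes to a queue the moderator prunes from time to time,
# instead of being auto-hidden or auto-promoted. Thresholds are illustrative.

def route(hide_signal: float, show_signal: float, strong: float = 0.8) -> str:
    if hide_signal >= strong and show_signal >= strong:
        return "assessment_queue"  # conflicting strong signals: human decides
    if hide_signal >= strong:
        return "hidden"
    if show_signal >= strong:
        return "displayed_prominently"
    return "default"

print(route(hide_signal=0.9, show_signal=0.85))  # assessment_queue
```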
I don’t think the effort would be too high for a blogger like Scott.