I have a similar problem with contest labor. I have less of a problem with it for non-profits, but my reasoning is particularly relevant to an organization that is (among other things) promoting rationality. (You could argue that this makes it either more or less concerning, given your pool of volunteers’ propensity for rationality.)
My problem with contest labor is that it exploits people’s probability biases. They see “I could get $1000!” They don’t see “the expected value of this labor is about $1.00/hour” (or less), which is usually the case, especially for things like logo design. I don’t know what the expected value is for a contest like this: the prizes are high enough, and the number of people contributing will probably be low enough, that it may be a pretty good deal.
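As a rough sketch of that expected-value point, here is the back-of-the-envelope calculation with made-up numbers for a hypothetical logo contest (the prize, entrant count, and hours are all invented for illustration):

```python
# Illustrative expected-value check for contest labor.
# All numbers here are hypothetical, not taken from any real contest.

def ev_per_hour(prize, entrants, hours_per_entry):
    """Expected hourly return, assuming every entrant is equally likely to win."""
    p_win = 1 / entrants
    return p_win * prize / hours_per_entry

# A hypothetical logo-design contest: one $1000 prize, 100 entrants,
# 10 hours of real work per entry.
print(ev_per_hour(prize=1000, entrants=100, hours_per_entry=10))  # 1.0 -> about $1/hour
```

The salient "$1000!" is what people see; the $1/hour is what the arithmetic says, under the (optimistic) assumption that every entry is equally likely to win.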
I don’t think this is wrong per se, but it’s Dark Arts-ish. (Approximately as Dark Arts as using anchoring in your advertising, though I’m not sure how bad I consider that in the first place.)
(Bonus points to anyone who (for some reason?) has been following my posts closely and can point out inconsistencies in my previous comments on similar issues. I have no justification for the inconsistency)
I trust LWers to do expected utility calculations, but it’s actually much worse than this.
We may decide whether or not to enter based on our predictions of how many other people will enter: if I think many people will enter, I shouldn’t waste my time, but if I think few will, I have a good chance and should enter. But we also know all of our potential competitors will be thinking the same thing, possibly making predictions with an algorithm similar to our own.
That makes this an anticoordination problem similar to the El Farol Bar problem, an especially nasty class of game because the majority of people inevitably regret their choice. If we predict few people will enter, that prediction will make many people enter, and we will regret our prediction. If we predict many people will enter, that prediction will make few people enter, and we will again regret it. As long as our choices are correlated, there’s no good option!
The proper response would be to pursue a mixed strategy, in which each of us randomly enters or doesn’t based on some calculations and a coin flip. But this would unfairly privilege defectors, and be a bit mean to the Singularity Institute, especially if people were to settle on a solution like only one person entering each contest. That might even end up optimal, since more people entering not only decreases each entrant’s chance of winning but also increases the effort you have to put into your entry; e.g., if you were the only entrant, you could write a single sentence and win by default.
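For what it’s worth, the symmetric mixed-strategy equilibrium gestured at above can be computed numerically. This is only a sketch with invented numbers (a $1000 prize, a $100 time-cost of entering, 20 potential entrants), assuming the winner is drawn uniformly from the entrants:

```python
# Sketch of the symmetric mixed-strategy equilibrium for contest entry,
# in the spirit of the El Farol Bar problem. All numbers are hypothetical.
from math import comb

def entry_payoff(p, n, prize, cost):
    """Expected payoff of entering when each of the other n-1 potential
    entrants independently enters with probability p (winner picked uniformly)."""
    ev = 0.0
    for k in range(n):  # k = number of *other* entrants
        ev += comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k) * prize / (k + 1)
    return ev - cost

def equilibrium_p(n, prize, cost, tol=1e-9):
    """Bisect for the entry probability at which entering is break-even.
    entry_payoff is decreasing in p: more competition, lower payoff."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if entry_payoff(mid, n, prize, cost) > 0:
            lo = mid  # entering still profitable -> entry probability rises
        else:
            hi = mid
    return (lo + hi) / 2

p = equilibrium_p(n=20, prize=1000, cost=100)
print(round(p, 3))  # the coin-flip probability at which entering is break-even
```

At equilibrium everyone is indifferent between entering and staying out, which is exactly the "no good option" flavor of the game.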
And you might think: then just let everyone know exactly how many people have entered at any one time. But that turns it into a Malthusianism: people will gain no net utility by entering the contest, because the utility of entering is a function of how many other people are in the contest, and if there were still utility to be gained, more people would enter until that stopped being true.
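The free-entry logic can be made concrete with a toy model (the $1000 prize and $50 per-entry cost are invented for illustration):

```python
# Sketch of the free-entry ("Malthusian") outcome when the entrant count is public.
# Hypothetical numbers: a $1000 prize and a $50 cost (in time) per entry.

def free_entry_count(prize, cost):
    """People keep entering while the marginal entrant still expects a gain,
    assuming the winner is picked uniformly among entrants."""
    n = 0
    while prize / (n + 1) - cost > 0:  # EV of becoming entrant n+1
        n += 1
    return n

n = free_entry_count(prize=1000, cost=50)
print(n, 1000 / n - 50)  # the next entrant (the 20th) would gain exactly nothing
```

Entry stops precisely when the marginal entrant’s expected winnings no longer cover the cost of entering, which is the "no utility to be gained" point.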
(Although this comment isn’t entirely serious, I honestly worried about some of these issues before I entered the efficient charity contest and the nutrition contest. And, uh, won both of them, which I guess makes me a dirty rotten defector and totally ruins my point.)
In fairness, this is only true if expected utility is purely a function of the number of participants, as in the El Farol Bar game. Here you also need to consider your strength relative to the field: if you and I both see that 10 people have entered then you might see opportunity where I would not, because you’ve won two of these and I haven’t.
This is more helpful than it sounds at first, because this is really a two-stage game: first you sign up to write the paper, and then you actually write one. Entrants decide whether to advance to the second stage by assessing their own strength relative to the field, and that assessment should tend to worsen as the field of entrants grows larger. People with low assessed EVs are thus discouraged from investing further, which is exactly the result we want, so long as their assessments are accurate.
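One way to see the self-selection effect is a toy simulation of the second stage. Everything here is an assumption for illustration: the strengths, the prize, the cost of writing, and the naive "win probability proportional to strength" model.

```python
# Toy version of the two-stage game: sign up, observe the field, then decide
# whether to actually write. Strengths and costs are invented for illustration.
import random

def second_stage_entrants(strengths, prize, cost):
    """Each sign-up writes only if, given everyone's strength, their
    win probability times the prize covers the cost of writing.
    Naive win model: P(win) is proportional to strength."""
    total = sum(strengths)
    return [s for s in strengths if (s / total) * prize > cost]

random.seed(0)  # fixed seed so the example is reproducible
field = [random.uniform(0, 1) for _ in range(10)]  # 10 sign-ups, random strengths
writers = second_stage_entrants(field, prize=1000, cost=80)
print(len(writers), "of", len(field), "sign-ups actually write")
```

The weaker sign-ups drop out before sinking real effort, which is the desirable filtering the comment describes, provided self-assessments are roughly accurate.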
On the other hand, there could plausibly be many people who want to help SI but are suffering from akrasia, partly due to the lack of a concrete reward. Offering a reward, even one that people knew was largely illusory, might play two biases against each other and get people to do what they’d endorse doing for free anyway.
I don’t know how many people fall into this category, but it would at least somewhat describe me. (Or at least would describe me if I weren’t currently getting paid to do writing for SI anyway.)
When I entered the Quantified Health contest, I calculated my expected return. I thought it would take me maybe 20-30 hours, and I was right. I thought I had a 10% chance of winning $5000, a 10% chance of winning $1000, and a 50% chance of winning $500. That’s an $850 expected return, or about $34 an hour (taking 25 hours as the midpoint) to do something that I enjoyed, thought was a valuable use of my time, and that taught me research skills and nutrition. I had just graduated high school, so that was far more than the wage I would have gotten at any mind-numbing part-time job in the small town where I was living. So entering was totally worthwhile.
I only won $500, which was an actual return of $20 an hour, but that’s still more than you get flipping burgers.
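Spelled out, the arithmetic from the comment above is:

```python
# The expected-return arithmetic from the comment above, using its own numbers.
prizes = {5000: 0.10, 1000: 0.10, 500: 0.50}  # prize -> estimated win probability
expected = sum(prize * p for prize, p in prizes.items())
hours = 25  # midpoint of the 20-30 hour estimate

print(expected)           # 850.0 -> expected dollar return
print(expected / hours)   # 34.0  -> expected dollars per hour
print(500 / hours)        # 20.0  -> realized dollars per hour after winning $500
```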
So I think that there’s nothing wrong with running these contests. People enter them if they think they should, and they’re relatively cheap ways of getting stuff done.
I do think those numbers make it a fairly reasonable decision to enter, in that instance. A lot of my concern about contest labor stems from how it affects the art industry, where returns end up being less than minimum wage.
I also don’t know how to predict how this would play out over multiple iterations.
So what other ways could the Game be constructed to avoid this problem?
Thank you for explaining to me what I was thinking. This is exactly my concern.
Exactly—this is what I understood to be the point of running contests. So presenting such a contest to LessWrong is odd (to put it politely).