This is all great except for the contest part, to which I currently have moderate ethical objections. In general I’m concerned by contests held as an alternative to simply paying someone to do the work for you; I objected to the contest that SI used to select their new logo (which is great) for the same reasons.
Essentially what you’re doing is asking some unknown number of people to work for highly unpredictable pay, which is most likely (assuming at least a half-dozen entries) to be no pay at all. This tactic makes a lot of financial sense and I understand why it would appeal to a cash-strapped non-profit, but it seems to me that if you’re going to ask someone to do work for your benefit, you should pay them for it. This ideal gets slightly muddier when it comes to non-profits, because I certainly don’t think there’s anything wrong with asking people to volunteer their time. Perhaps it’s the uncertainty that’s bothering me; it’s as though you’re asking people to gamble with their time.
So perhaps it’s ethically equivalent to a charity-sponsored raffle, which I don’t object to. Is my reasoning wrong, or am I just inconsistent? I’m not sure.
I have a similar problem with contest labor, and less of a problem with it for non-profits. But my reasoning is particularly relevant to an organization that is (among other things) promoting rationality. (You could argue that this makes it either more or less concerning, given your pool of volunteers’ propensity for rationality.)
My problem with contest labor is that it exploits people’s probability biases. They see “I could get $1000!” They don’t see “the expected value of this labor is about $1.00/hour” (or less), which is usually the case (especially for stuff like logo design). I don’t know what the expected value is for a contest like this: the prizes are high enough, and the number of people contributing will probably be low enough, that it may be a pretty good deal.
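To make the worry concrete, here’s a back-of-envelope sketch of how a headline prize can translate into a tiny expected hourly rate. All numbers below are hypothetical, picked only to illustrate the shape of the calculation:

```python
# Back-of-envelope expected hourly value of a winner-take-all contest.
# Every number here is a made-up illustration, not data about any real contest.
prize = 1000        # the advertised prize people actually see
entrants = 50       # a guess at the number of competing entries
hours = 20          # hours a typical entry takes to produce

p_win = 1 / entrants            # crude assumption: all entries equally likely to win
expected_value = prize * p_win  # $20 expected payout
hourly = expected_value / hours # $1.00/hour

print(f"P(win) = {p_win:.2f}, EV = ${expected_value:.2f}, ${hourly:.2f}/hour")
```

The salient “$1000!” and the unsalient “$1.00/hour” are the same contest viewed through two different frames.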
I don’t think this is wrong per se, but it’s Dark Arts-ish. (Approximately as Dark Arts as using anchoring in your advertising, but I’m not sure how bad I consider that in the first place.)
(Bonus points to anyone who (for some reason?) has been following my posts closely and can point out inconsistencies in my previous comments on similar issues. I have no justification for the inconsistency.)
I trust LWers to do expected utility calculations, but it’s actually much worse than this.
We may decide whether or not to enter based on our probabilities about how many other people will enter: if I think many people will enter, I shouldn’t waste my time, but if I think few people will enter, I have a good chance and should enter. But we also know all of our potential competitors will be thinking the same thing, and possibly making predictions with an algorithm similar to our own.
That makes this an anticoordination problem similar to the El Farol Bar, which is an especially nasty class of game because it means the majority of people inevitably regret their choice. If we predict few people will enter, then that prediction will make many people enter, and we will regret our prediction. If we predict many people will enter, that prediction will make few people enter, and we will again regret our prediction. As long as our choices are correlated, there’s no good option!
The proper response would be to pursue a mixed strategy in which we randomly enter or don’t enter the contest based on some calculations and a coin flip. But this would unfairly privilege defectors and be a bit mean to the Singularity Institute, especially if people were to settle on a solution like only one person entering each contest. That might end up optimal, since more people entering not only linearly decreases your chance of winning but also increases the effort you have to put into your entry; e.g., if you were the only entrant you could just write a single sentence and win by default.
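The symmetric mixed strategy can be sketched numerically. In this toy model (made-up numbers, and the strong assumption that all entries are equally likely to win), each of N potential entrants enters with probability q, and in equilibrium entering exactly breaks even:

```python
import math

# Toy symmetric-entry model; all numbers are hypothetical.
# N potential entrants each enter with probability q. In a symmetric mixed
# equilibrium, entering must exactly break even:
#     prize * P(win | you enter) = cost of producing an entry.
# With equal-quality entries, P(win) = E[1 / (1 + K)], where
# K ~ Binomial(N - 1, q) counts the rival entries.

def p_win(n_rivals: int, q: float) -> float:
    """Expected probability of winning against n_rivals who each enter w.p. q."""
    return sum(
        math.comb(n_rivals, k) * q**k * (1 - q) ** (n_rivals - k) / (k + 1)
        for k in range(n_rivals + 1)
    )

def equilibrium_q(prize: float, cost: float, n_potential: int) -> float:
    """Bisect for the q where entering breaks even (payoff is decreasing in q)."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if prize * p_win(n_potential - 1, mid) > cost:
            lo = mid  # entering still profitable, so more people would enter
        else:
            hi = mid
    return (lo + hi) / 2

q = equilibrium_q(prize=1000, cost=200, n_potential=20)
print(f"equilibrium entry probability is about {q:.3f}")
```

At the equilibrium q, everyone who enters expects to do no better than not entering, which is exactly the unappealing property of the game.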
And you might think: then just let everyone know exactly how many people have entered at any one time. But that turns it into a Malthusianism: people will gain no utility by entering the contest, because the utility of entering is a decreasing function of how many other people have entered, and if there were still utility to be gained, more people would enter until that stopped being true.
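That Malthusian dynamic can be illustrated with a toy entry process: with the running count public, people keep entering while the expected value of one more (equal-quality) entry still exceeds the value of the time it takes. The numbers are hypothetical:

```python
# Toy "Malthusian" entry process, with made-up numbers: entry continues
# while the expected value of being one more equal-quality entrant still
# exceeds the opportunity cost of the time an entry takes.
prize = 1000
cost = 50   # value of the time one entry takes to produce

n = 0
while prize / (n + 1) > cost:  # EV of becoming the (n+1)-th entrant
    n += 1

print(n)  # entry stops once prize / n is roughly cost: surplus competed away
```

With these numbers entry stops at 19 entrants, where the marginal entrant expects roughly 1000/19, barely above their cost of 50.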
(Although this comment isn’t entirely serious, I honestly worried about some of these issues before I entered the efficient charity contest and the nutrition contest. And, uh, won both of them, which I guess makes me a dirty rotten defector and totally ruins my point.)
In fairness, this is only true if expected utility is purely a function of the number of participants, as in the El Farol Bar game. Here you also need to consider your strength relative to the field: if you and I both see that 10 people have entered then you might see opportunity where I would not, because you’ve won two of these and I haven’t.
This is more helpful than it sounds at first, because this is really a two-stage game: first you sign up to write the paper, and then you actually write one. Entrants will decide whether to advance to the second stage by assessing their own strength relative to the field, and their assessed expected value should tend to decrease as the field of entrants grows larger. People with low assessed EVs are thus discouraged from investing—exactly the result we want, so long as their assessments are accurate.
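Here’s a toy simulation of that self-selection stage, under made-up numbers and a deliberately crude self-assessment rule (rival skills assumed uniform, and every sign-up pessimistically assumed to write), just to show that a larger field screens out low-EV entrants:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Hypothetical two-stage sketch: N people sign up, observe the field size,
# and only write an entry if their self-assessed EV clears the writing cost.
N = 30
prize, cost = 1000.0, 100.0
skills = [random.random() for _ in range(N)]  # private skill in [0, 1]

def self_assessed_ev(skill: float, field_size: int) -> float:
    # Crude rule: rival skills are assumed Uniform(0, 1) and every sign-up
    # is assumed to write, so P(beating them all) = skill ** (field_size - 1).
    return prize * skill ** (field_size - 1)

writers = [s for s in skills if self_assessed_ev(s, N) > cost]
print(len(writers), "of", N, "sign-ups actually write an entry")
```

The larger the visible field, the higher the skill threshold needed to clear the cost, so most sign-ups drop out at stage two rather than sinking hours into a losing entry.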
So what other ways could the Game be constructed to avoid this problem?
On the other hand, there could plausibly be many people who want to help SI but are suffering from akrasia, partly due to the lack of a concrete reward. Offering a reward, even one that people knew was illusory, might play two biases off against each other and get people to do what they’d endorse doing for free anyway.
I don’t know how many people fall into this category, but it would at least somewhat describe me. (Or at least would describe me if I weren’t currently getting paid to do writing for SI anyway.)
When I entered the Quantified Health contest, I calculated my expected return. I thought it would take me maybe 20-30 hours; I was right. I estimated a 10% chance of winning $5000, a 10% chance of winning $1000, and a 50% chance of winning $500. That’s an $850 expected return, or about $34 an hour to do something that I enjoyed, thought was a valuable use of my time, and that taught me research skills and nutrition. I had just graduated high school, so that was far more than the wage I would have gotten in any mind-numbing part-time job in the small town where I was living. So entering was totally worthwhile.
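For the record, the arithmetic checks out (using the comment’s own estimates, with probabilities written in percent so the sums stay exact):

```python
# Reproducing the comment's expected-return arithmetic from its own estimates.
# Probabilities are in percent so the floating-point arithmetic is exact.
outcomes = [(10, 5000), (10, 1000), (50, 500)]   # (percent chance, prize)
expected_return = sum(pct * prize for pct, prize in outcomes) / 100
print(expected_return)          # 850.0

hours = 25                      # midpoint of the 20-30 hour estimate
print(expected_return / hours)  # 34.0 expected dollars per hour
print(500 / hours)              # 20.0 dollars per hour actually realized
```

Both the $34/hour expectation and the $20/hour realized figure follow from a 25-hour midpoint estimate.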
I only won $500, which was an actual return of $20 an hour, but that’s still more than you get flipping burgers.
So I think that there’s nothing wrong with running these contests. People enter them if they think they should, and they’re relatively cheap ways of getting stuff done.
I do think those numbers make it a fairly reasonable decision to enter, in that instance. A lot of my concern about contest labor stems from how it affects the art industry, where returns end up being less than minimum wage.
I don’t know how to expect this to play out over multiple iterations, either.
Thank you for explaining to me what I was thinking. This is exactly my concern.
Exactly—this is what I understood to be the point of running contests. So presenting such a contest to LessWrong is odd (to put it politely).
As long as the process is clear and people know what they’re getting into, I don’t think there’s an issue with this.
Definitely not an ethical issue anyway. Dark Artists, though, present all sorts of stuff they disagree with as “ethical” issues to claim the “high ground” and try to head off debate.
An alternative that you might consider more ethical is to limit the number of contestants and determine payment (or lack thereof) based on an absolute measure of quality rather than through competition.
Yeah, that would definitely allay my concerns. I think it’s the uncertainty surrounding the number and quality of other entrants that bothers me.
Is it still unethical if competitors know the field?
I have a hard time seeing how offering someone any deal they aren’t being deceived about can be unethical...
See Raemon’s comment. The Dark Arts are involved, mere honesty is no defense.
Yes, but those who read this thread know the Dark Arts are involved and can adjust their beliefs accordingly.
Knowing about your biases does not automatically make you immune to them, and saying “but I told them the bias I was exploiting” doesn’t excuse you from responsibility for knowingly exploiting a bias.
I didn’t claim automatic immunity, I said “can”. While deontologists might object to “knowingly exploiting a bias” full stop and virtue ethicists might claim that a person who does such things is probably vicious, a consequentialist must determine whether, in this case, using the Dark Arts might lead to better or worse outcomes (which seems non-obvious to me).