[...] the marginal difference between hiring you and hiring the next bioinformatician in line is (to us) negligible. Whether or not you (personally) choose to work for us will produce an insignificant net effect on our operations. The impact on your personal finances, however, will be significant. You could easily offset the marginal negative impact of working for us by donating a fraction of your surplus income to altruistic causes instead,”
Double standard: when considering the negative effect of her work, he compares her with the next in line, but when considering the positive effect of her donations, he doesn’t.
At any given moment, an organization usually wants a particular set of employees. If she doesn’t take the job, they’ll hire a different person for the role that would have been hers rather than just getting by with one person fewer.
A charitable organization, by contrast, usually wants as much money as possible at any given moment. If she doesn’t make the donations, the Against Malaria Foundation (or whatever) will just have that much less money.
It’s not quite that simple: maybe Effective Evil has trouble hiring (can’t imagine why), so if she doesn’t take the job they end up with only 0.3 fewer bioinformaticians in expectation; maybe the AMF works harder on fundraising when it gets less than it hoped for, so if she doesn’t make the donations it’s only down by 0.9x what she would have given. But I would strongly expect taking-the-job-or-not to have a much stronger substitution effect than giving-the-money-or-not.
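A toy calculation makes the asymmetry concrete. This is only a sketch: the function and the replacement rates are mine, with the rates picked to reproduce the illustrative 0.3 and 0.9 figures above, not estimated from anything.

```python
# Counterfactual impact = direct effect, discounted by how fully the
# world routes around her refusal. Replacement rates are illustrative,
# chosen to reproduce the 0.3 / 0.9 figures in the comment above.

def counterfactual_effect(direct_effect: float, replacement_rate: float) -> float:
    """Expected change in the world attributable to her choice."""
    return direct_effect * (1 - replacement_rate)

# Job: if she declines, EE hires a near-substitute ~70% of the time,
# so only ~0.3 of one bioinformatician's harm is really on her.
harm_from_job = counterfactual_effect(direct_effect=1.0, replacement_rate=0.7)

# Donation: if she declines, AMF recoups only ~10% via extra
# fundraising, so ~0.9 of the donation's good is really on her.
good_from_donation = counterfactual_effect(direct_effect=1.0, replacement_rate=0.1)

print(f"{harm_from_job:.1f}")       # 0.3 -> weak counterfactual harm
print(f"{good_from_donation:.1f}")  # 0.9 -> strong counterfactual good
```

If those rates are anywhere near right, her donating matters roughly three times as much, counterfactually, as her staffing decision.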
Another problem is that he doesn’t account for the positive (less evil) effect of her donations as a reason to not hire her. EE would only hire her if the value she would provide in service of their goals exceeds the disvalue of her donations by at least as much as the next available candidate would. Likewise she would only work for them if the value of her donations for altruism exceeds the disvalue of her service to EE by at least as much as if she took a job at a normal organization. There’s no way her employment at EE is a +EV proposition for both of them.
Yeah, if it’s a net goal, then they can’t both be right. But strictly speaking he never says he wants to make the world worse on net. She says she wants to change the world for the better; he just says he wants to change the world, period. They could, in a deontological or virtue-ethics way, value the skillful doing of evil and the changing of the world from what it would otherwise have been, which is completely consistent with unleashing giant mice to rampage through NYC even as malaria is cured by donations from guilty employees: everyone gets what they want. Effective Evil gets to do evil while changing the world (all those skyscrapers ruined by overgrown rodents are certainly evil, and a visible change in the world), and the employees know they offset the evil with good elsewhere while keeping a handsome salary for themselves.
They could also easily just desire different things (“have different utility functions”). This is the basis for gains from trade, and, more germane to this example, political parties.
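One way to make that precise (the notation is mine, not the story’s): let $\Delta$ be the net change to the world if the deal goes through, and let $u_E$ and $u_C$ be Effective Evil’s and Dr. Connor’s value functions. Both sides agree to the deal only if

$$u_E(\Delta) > 0 \quad\text{and}\quad u_C(\Delta) > 0.$$

If $u_C = -u_E$ (one shared scale, pure zero-sum), the two conditions contradict each other, which is the can’t-both-be-right point above. Whenever $u_C \neq -u_E$, there can exist a $\Delta$ satisfying both, and that is exactly a gain from trade.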
If Effective Evil thinks the most efficient way to do evil is assaulting people’s eyeballs with dust specks, and I think the most effective way to do evil would be increasing torture, I can build their aeroplane dust-distribution technology and use the money they pay me to reduce torture. If they think 1,000 specks equal 1 minute of torture, but I think 10^9 specks equal 1 minute of torture, there is wide latitude for us to make a trade where I reduce 10 minutes of torture in expectation and they get more than 10,000 specks-in-eyes (anything under 10^10 still leaves me ahead by my own exchange rate). Their conception of evil is maximized, and mine is minimized.
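A quick sanity check on those numbers. The exchange rates are the ones from my example; the number of specks delivered is an arbitrary point inside the mutually agreeable window, and the variable names are made up for illustration.

```python
# Sanity-check the specks/torture trade under the two exchange rates
# above. 'Evil units' are specks; each side converts torture to specks
# at its own rate.

TORTURE_REDUCED_MIN = 10        # minutes of torture I prevent
EE_SPECKS_PER_MIN = 1_000       # EE's rate: 1,000 specks ~ 1 min of torture
MY_SPECKS_PER_MIN = 10**9       # my rate: 10^9 specks ~ 1 min of torture

specks_delivered = 10**6        # illustrative output of the dust tech

# EE's ledger: the trade is a win for them iff the specks gained
# outweigh the torture lost, at *their* exchange rate.
ee_net_evil = specks_delivered - TORTURE_REDUCED_MIN * EE_SPECKS_PER_MIN

# My ledger: the trade is a win for me iff the torture prevented
# outweighs the specks inflicted, at *my* exchange rate.
my_net_good = TORTURE_REDUCED_MIN * MY_SPECKS_PER_MIN - specks_delivered

print(ee_net_evil > 0)  # True: more evil, by EE's lights
print(my_net_good > 0)  # True: less evil, by mine
# Any specks_delivered strictly between 10**4 and 10**10 makes both True.
```

The six orders of magnitude between our exchange rates are what create the room to deal.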
So it’s an Evil argument?
He is evil, so he makes it look like she could offset the harm, while in fact setting up the incentives so that she doesn’t. At least in expectation, which would be effective.
I think Doug is making the assumption that the next in line is less likely to donate than Dr. Connor would be.
Probably, and it’s not a bad assumption. I’d imagine charitable giving varies wildly between candidates. But it’s still an assumption, and his argument is not as airtight as he makes it appear.