Legality as a Career Harm Assessment Heuristic
A question many people in the effective altruism movement have struggled with around earning to give is how to handle potentially harmful careers. It’s obviously self-defeating if you cause more harm in earning your money than the good it does when you donate it, but we want a higher threshold than that. As humans, we need approaches that account for our self-serving biases: we tend to underestimate the harm we cause and overestimate the good we do. Additionally, some kinds of harm (ex: murder) do not seem like the kind of thing you ought to be able to “cancel out” through donation, even if the donation clearly has larger benefits (ex: saving vastly many lives).
Unfortunately, for most jobs, even questionable ones, the social impact is very hard to work out. Consider someone deciding to go into the oil industry: how much would they contribute to carbon emissions, after considering the oil company’s elasticity of labor and the elasticity of production? Does cheaper oil displace even more carbon-intensive coal? How likely are extreme climate outcomes? Is the benefit of cheaper energy in lifting people out of poverty enough to make it positive on its own? Making a high-quality impact estimate for a career is a huge amount of work, and there are a lot of potential careers, especially when you consider that some roles in the oil industry might be far more replaceable than others.
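To give a sense of what even a crude version of this estimate involves, here is a minimal back-of-the-envelope sketch in Python. Everything in it, the simple multiplicative model and all of the parameter values, is a hypothetical placeholder made up for illustration; a real estimate would need researched inputs and a far richer model.

```python
# Toy replaceability-adjusted estimate of taking a questionable job.
# Every number here is a made-up placeholder; the structure, not the
# values, is the point.

def net_annual_impact(direct_harm, replaceability,
                      production_elasticity, offsetting_benefit,
                      donations):
    """All arguments are in the same units of value (say, dollars of
    well-being per year)."""
    # Discount the face-value harm by how replaceable you are and by
    # how little one marginal worker shifts total production.
    marginal_harm = direct_harm * (1 - replaceability) * production_elasticity
    return offsetting_benefit + donations - marginal_harm

# Hypothetical inputs: a role causing $500k/yr of harm at face value,
# 80% replaceable, with a modest marginal effect on total output.
print(net_annual_impact(direct_harm=500_000,
                        replaceability=0.8,
                        production_elasticity=0.5,
                        offsetting_benefit=20_000,
                        donations=60_000))
# -> 30000.0: barely net positive, and very sensitive to the guesses
```

Even in this toy version the answer swings from clearly positive to clearly negative within plausible ranges of the guessed inputs, which is part of why per-career estimates take so much work.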
What should we do in cases where the benefits seem much larger than the harms, but the harms are still significant? A potential rule I’ve been kicking around is, “don’t do work that is illegal, or that would be illegal if the public knew what you were really doing.” The idea is that we already have a system for declaring profitable activities with negative externalities off limits, one intended for the more common case where someone keeps what they earn for their own benefit. But we can’t just use “don’t do work that is illegal” because our legislative system can be slow to react to changes in the world or to information that isn’t yet widely available. For example, if most people understood the cost-benefit tradeoffs in research to assess the pandemic potential of viruses or to create very powerful AI systems, I expect both would be prohibited.
It is, however, only a heuristic. For example, it gives the wrong answer in cases where:
- Crafting a law prohibiting the versions of an activity that are net negative would unavoidably cause people to stop doing closely related beneficial activities.
- The law is wrong, and carefully considered civil disobedience is needed to convince others.
I expect there are other areas where this rule permits careers that altruistically minded people should avoid (even if the benefits seem to dramatically outweigh the costs) or rejects ones that are very important. Suggestions of examples of either kind would be helpful!
Choosing a career is the kind of large-consequences decision where going beyond our heuristics and thinking carefully about outcomes is often warranted. Still, I see a bunch of value in sorting out a framework of general rules and common exceptions, where people can think through how their particular situation fits.
Comment via: facebook, lesswrong, the EA Forum, mastodon