If we stop doing something that almost all first-world humans do (say 1 billion people), then our individual impact is about a billionth of the size of the problem. Given the scale of impact an effective altruist can hope to have, this explains why such non-actions don't have especially high utility in comparison. If there were 100,000 effective altruists (probably an overestimate), then all of them refraining from doing X would make the problem about 0.01% better. Both how hard it is to refrain and the impact if you manage it depend on the size of the problem: compare all pollution versus plastic straws. Suppose this change took only 0.01% of each effective altruist's time (roughly 10 seconds per day, about 4 of which fall while you are asleep). Clearly the change has to be something as small as avoiding plastic straws, or smaller. Assume linearity between work and reward (the more usual assumption would be diminishing returns). Under that assumption, the payoff is equivalent to all effective altruists working full time on the problem and solving it completely.
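Here is a minimal sketch of that arithmetic, assuming the illustrative figures above (1 billion people doing the harmful thing, 100,000 effective altruists, and 0.01% of one's time spent refraining); none of these numbers are measured data.

```python
# Rough Fermi arithmetic for the argument above; all figures are the
# illustrative assumptions from the text, not measured data.
problem_users = 1_000_000_000      # people doing the harmful thing
num_eas = 100_000                  # effective altruists who might refrain

# Fraction of the problem removed if every effective altruist refrains.
fraction_solved = num_eas / problem_users
print(f"Problem reduced by {fraction_solved:.2%}")   # 0.01%

# Cost per person: 0.01% of their time, i.e. under 10 seconds per day.
time_fraction = 0.0001
seconds_per_day = 24 * 60 * 60
print(f"Time cost: {time_fraction * seconds_per_day:.1f} s/day")  # ~8.6 s

# Under the linearity assumption, value per unit time of refraining equals
# value per unit time of working full time on the problem and solving it:
# a ratio of 1.0 means the two are exactly equivalent.
print(fraction_solved / time_fraction)  # 1.0
```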
Technically, you need to evaluate the marginal value of one more effective altruist. If it were vitally important that someone worked on AI, but you had far more people than you needed for that and the rest were twiddling their thumbs, then get them reusing straws. (Actually, get them looking for other cause areas; reusing straws only makes sense if you are confident that no other priority causes exist.)
Suppose Omega came to you and said that if you started a compostable-straw business, there would be a 0.001% chance of success, where by "success" Omega means solving the problem without any externalities (the straws are the same price, just as easy to use, don't taste funny, etc.). Otherwise, the business will waste all your time and achieve nothing.
If this doesn't seem like a promising opportunity for effective altruism, don't bother with reusable straws either. In general, the break-even chance of success is 1 / (number of people using plastic straws × proportion of time wasted avoiding them).
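A quick check of that formula, again using the assumed figures from above (1 billion straw users, 0.01% of one's time), recovers Omega's 0.001% offer:

```python
# Break-even success probability for a "solve it outright" venture,
# compared against one person avoiding the product individually.
# Figures are the illustrative assumptions from the text.

def breakeven_success_prob(num_users: float, time_fraction: float) -> float:
    """Chance of success at which spending all your time on a fix is as
    cost-effective as spending `time_fraction` of your time to remove
    1/num_users of the problem."""
    return 1.0 / (num_users * time_fraction)

p = breakeven_success_prob(num_users=1_000_000_000, time_fraction=0.0001)
print(f"{p:.5%}")  # 0.00100% -- matches Omega's 0.001% offer
```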