I gave somebody I know (a 50-year-old libertarian-leaning conservative) Doing Good Better by William MacAskill. I told them I thought they might like the book because it has “interesting economic arguments”, so as not to come across as a crazy EA-evangelist. I found their response interesting, so I am sharing it here.
They received the book mostly positively. Their main takeaway was the idea that thinking twice about whether a particular action is really sensible can have highly positive impacts.
Here were their criticisms / misconceptions (which I am describing in my own words):
(1) Counterfactual concerns. Society benefits from lots of diverse institutions existing. It would be worse off if everybody jumped ship to contribute to effective causes. This is particularly the case with people who would have gone on to do really valuable things. Example: what if the people who founded Netflix, Amazon, etc. had instead gone down paths that were provably effective but, in hindsight, added less value?
(2) When deciding which action to take, the error bars in expected value calculations are quite wide. So, how can we possibly choose? In cases where the expected value of the “effective” option is higher but within the error bars, to what extent am I a bad person for not choosing the effective option?
(12) Example: should a schoolteacher quit their job and go do work for charity?
My response to them was the following:
On (1): I think it’s OK for people to choose paths according to comparative advantage. For the Netflix example, early on it was high-risk-high-reward, but the risk was not ludicrously high because the founders had technical expertise, a novel idea, really liked movies, whatever. Basically the idea here is the Indian TV show magician example that 80000 Hours discusses.
On (2): If the error bars around the expected values overlap significantly, then I think they cease to be useful enough to be the main decision factor. So, switch to deciding by some other criterion. Maybe one option is more high-risk-high-reward than the other, so a risk-averse person and a risk-seeking person will have different preferences. Maybe one option increases personal quality of life (this is a valid decision factor!!).
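To make the overlapping-error-bars idea concrete, here is a toy sketch. All of the numbers are invented for illustration; the only point is the decision rule: if the intervals overlap, expected value alone shouldn’t settle the question.

```python
# Toy decision rule: expected value (EV) is only decisive when the
# uncertainty intervals around the two options do not overlap.
# All numbers are made up for illustration.

def intervals_overlap(ev_a, err_a, ev_b, err_b):
    """True if the error bars [ev - err, ev + err] of the two options overlap."""
    return (ev_a - err_a) <= (ev_b + err_b) and (ev_b - err_b) <= (ev_a + err_a)

# Option A: teaching. Option B: charity work, slightly higher EV, wider error bars.
ev_teaching, err_teaching = 100, 60
ev_charity, err_charity = 130, 80

if intervals_overlap(ev_teaching, err_teaching, ev_charity, err_charity):
    # EV can't settle it; fall back on other criteria
    # (risk tolerance, personal fit, quality of life).
    decision = "use other criteria"
else:
    decision = "pick the higher expected value"

print(decision)  # -> use other criteria
```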
On (12): This combines the previous two. If one option (teaching vs. charity) has a much larger expected value than the other, the teacher should probably pick the higher-impact option. This doesn’t have to be charity—maybe the teacher is a remarkable teacher and a terrible charity worker.
As described in my response to (1), the expected value calculation takes into account high-risk-high-reward cases as long as the calculations are done reasonably. (If the teacher thinks their chances of doing extreme amounts of good with the charity are 1%, when in reality they are 0.0001%, this is a problem.)
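The 1%-versus-0.0001% miscalibration above is easy to make concrete with arithmetic. The payoff numbers below are hypothetical placeholders; the point is just how far an overconfident probability inflates the expected value.

```python
# How a miscalibrated success probability skews an expected-value estimate.
# The payoff figures are hypothetical, chosen only for illustration.

impact_if_success = 1_000_000  # "good done" in the rare, extreme-success case
baseline_impact = 10           # "good done" in the ordinary case

def expected_value(p_success):
    return p_success * impact_if_success + (1 - p_success) * baseline_impact

believed = expected_value(0.01)      # the teacher's optimistic 1% estimate
actual = expected_value(0.000001)    # the true 0.0001% chance

print(believed)  # ~10009.9
print(actual)    # ~11.0
```

With these numbers the believed expected value is nearly a thousand times the actual one, which is why doing the calculation “reasonably” matters so much.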
If the expected values are too close to compare, the teacher is “allowed” to use other decision criteria, as described in my response to (2).
In the end, what we were able to agree on was:
(*) There exist real-world decisions where one option is (with high probability) much more effective than the other. It makes sense to choose the effective option in these cases. Example: PlayPump versus an effective charity.
(*) High-risk-high-reward pursuits can be better choices than “provably effective” pursuits (e.g. research as opposed to earning to give).
What I think we still disagree on is the extent to which one is morally obligated to choose the effective option when the decision is in more of a gray area.
I take the “glass is half full” interpretation. In a world where most people do not consider the quantitative impact of their actions at all, choosing the effective option outside the gray areas is already a huge improvement.